WorldWideScience

Sample records for give quantitative estimates

  1. Smile line assessment comparing quantitative measurement and visual estimation.

    Science.gov (United States)

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
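
    The agreement statistic used in this record can be reproduced in a few lines. Below is a minimal sketch with hypothetical data and grade cut-offs (the study's actual thresholds and measurements are not given here): visual 3-grade estimates are compared with quantitative measurements binned onto the same scale via Cohen's kappa.

```python
# Minimal sketch (hypothetical data and cut-offs): agreement between a 3-grade
# visual smile-line estimation and quantitative measurements binned onto the
# same scale, summarized with Cohen's kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
lip_line_mm = rng.normal(loc=1.0, scale=2.0, size=122)  # measured height per subject

def to_grade(mm):
    # Hypothetical cut-offs: low (< 0 mm), average (0-3 mm), high (> 3 mm)
    return np.digitize(mm, bins=[0.0, 3.0])

measured_grade = to_grade(lip_line_mm)
# Simulated visual rating: mostly agrees with the measured grade
visual_grade = np.where(rng.random(122) < 0.85, measured_grade,
                        rng.integers(0, 3, size=122))

print(f"kappa (3-grade visual vs. quantitative): {cohen_kappa_score(measured_grade, visual_grade):.2f}")
```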

  2. To give or not to give, that's the question: How methodology is destiny in Dutch giving data

    NARCIS (Netherlands)

    Bekkers, R.H.F.P.; Wiepking, P.

    2006-01-01

    In research on giving, methodology is destiny. The volume of donations estimated from sample surveys strongly depends on the length of the questionnaire used to measure giving. By comparing two giving surveys from the Netherlands, the authors show that a short questionnaire on giving not only

  3. To Give or Not to Give, That Is the Question : How Methodology Is Destiny in Dutch Giving Data

    NARCIS (Netherlands)

    Bekkers, René; Wiepking, Pamala

    2006-01-01

    In research on giving, methodology is destiny. The volume of donations estimated from sample surveys strongly depends on the length of the questionnaire used to measure giving. By comparing two giving surveys from the Netherlands, the authors show that a short questionnaire on giving not only

  4. System for the chemical processing and evaluation of the residual thickness of LR115 type 2 nuclear track detectors

    International Nuclear Information System (INIS)

    Carrazana Gonzalez, J.A.; Tomas Zerquera, J.; Prendes Alonso, M.

    1998-01-01

    This work describes the system built at the CPHR for the homogeneous chemical processing of nuclear track detectors. A newly developed method is presented, based on the optical densitometry technique, for the precise estimation of the residual thickness of LR115 type 2 nuclear track detectors after the chemical etching process

  5. Quantitative estimation of pollution in groundwater and surface ...

    African Journals Online (AJOL)

    Quantitative estimation of pollution in groundwater and surface water in Benin City and environs. ... Ethiopian Journal of Environmental Studies and Management ... Physico-chemical parameters were compared with regulatory standards from Federal Ministry of Environment for drinking water and they all fell within ...

  6. Smile line assessment comparing quantitative measurement and visual estimation

    NARCIS (Netherlands)

    Geld, P. Van der; Oosterveld, P.; Schols, J.; Kuijpers-Jagtman, A.M.

    2011-01-01

    INTRODUCTION: Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation

  7. Quantitative estimation of muscle fatigue using surface electromyography during static muscle contraction.

    Science.gov (United States)

    Soo, Yewguan; Sugi, Masao; Nishino, Masataka; Yokoi, Hiroshi; Arai, Tamio; Kato, Ryu; Nakamura, Tatsuhiro; Ota, Jun

    2009-01-01

    Muscle fatigue is commonly associated with musculoskeletal disorders. Previously, various techniques have been proposed to index muscle fatigue from the electromyography signal; however, quantitative measurement is still difficult to achieve. This study aimed at proposing a method to estimate the degree of muscle fatigue quantitatively. A fatigue model was first constructed using a handgrip dynamometer by conducting a series of static contraction tasks. The degree of muscle fatigue could then be estimated from the electromyography signal with reasonable accuracy. The error of the estimated muscle fatigue was less than 10% MVC, and no significant difference was found between the estimated value and the one measured using a force sensor. Although the results were promising, some limitations still need to be overcome in future studies.
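
    The abstract does not spell out the estimation model, so the sketch below shows a conventional EMG fatigue index instead: the decline of the median power frequency across epochs of a static contraction. The sampling rate and signal are placeholders, not the study's data.

```python
# Hedged sketch: a conventional EMG fatigue index (median power frequency),
# not necessarily the estimation model used in the study. Signal and sampling
# rate are stand-ins for a recorded epoch of a static contraction.
import numpy as np
from scipy.signal import welch

fs = 1000.0                            # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
emg = rng.standard_normal(2000)        # placeholder 2-second EMG epoch

f, pxx = welch(emg, fs=fs, nperseg=512)
cum = np.cumsum(pxx)
median_freq = f[np.searchsorted(cum, cum[-1] / 2.0)]
print(f"median frequency of this epoch: {median_freq:.1f} Hz")
# Tracking median_freq over successive epochs gives the fatigue trend:
# a progressive downward shift is the usual fatigue signature.
```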

  8. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced MCRs, various automation systems are applied to enhance human performance and reduce human error in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it has become necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities

  9. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    Science.gov (United States)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in W^{1,1}_{loc} for the map S_t, t > 0, that is associated with the given initial data u_0 ∈ Lip(R^N) for the corresponding solution S_t u_0 of a Hamilton-Jacobi equation u_t + H(∇_x u) = 0, t ≥ 0, x ∈ R^N, with a uniformly convex Hamiltonian H = H(p). We provide upper and lower estimates of order 1/ε^N on the Kolmogorov ε-entropy in W^{1,1} of the image through the map S_t of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
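
    For readability, the equation and entropy bounds garbled in the plain-text abstract can be set in display form; the constants c_1, c_2 and the set symbol B below are notation added here, not taken from the paper.

```latex
% Hamilton-Jacobi equation studied in the paper, with H = H(p) uniformly convex:
\[
  u_t + H\big(\nabla_{x} u\big) = 0, \qquad t \ge 0, \quad x \in \mathbb{R}^N.
\]
% Upper and lower Kolmogorov \varepsilon-entropy estimates of order 1/\varepsilon^N
% for the image of a set \mathcal{B} of bounded, compactly supported initial data
% (c_1, c_2 and \mathcal{B} are notation added here for readability):
\[
  c_1\,\varepsilon^{-N} \;\le\; \mathcal{H}_{\varepsilon}\big(S_t(\mathcal{B}) \mid W^{1,1}\big) \;\le\; c_2\,\varepsilon^{-N}.
\]
```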

  10. Stochastic evaluation of tsunami inundation and quantitative estimating tsunami risk

    International Nuclear Information System (INIS)

    Fukutani, Yo; Anawat, Suppasri; Abe, Yoshi; Imamura, Fumihiko

    2014-01-01

    We performed a stochastic evaluation of tsunami inundation by using the results of a stochastic tsunami hazard assessment at the Soma port in the Tohoku coastal area. Eleven fault zones along the Japan trench were selected as earthquake faults generating tsunamis. The results show that the estimated inundation area for a return period of about 1,200 years agreed well with that of the 2011 Tohoku earthquake. In addition, we quantitatively evaluated the tsunami risk for four types of building (reinforced concrete, steel, brick, and wood) at the Soma port by combining the results of the inundation assessment and a tsunami fragility assessment. The quantitative risk estimates properly reflect the vulnerability of the buildings: the wood building has high risk and the reinforced concrete building has low risk. (author)
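
    A minimal sketch of the hazard-times-fragility arithmetic described here, with invented depth exceedance probabilities and lognormal fragility parameters for the four building types (the study's actual curves are not reproduced):

```python
# Minimal sketch of combining a hazard curve with tsunami fragility curves,
# in the spirit of the study; all numbers below are invented placeholders.
import math

# Hypothetical annual exceedance probability of inundation depths (m) at the site
hazard = {1.0: 1 / 200, 2.0: 1 / 600, 4.0: 1 / 1200}

# Hypothetical lognormal fragility (damage) parameters per building type:
# median depth (m) and log-standard deviation
fragility = {"wood": (1.5, 0.5), "brick": (2.5, 0.5),
             "steel": (4.0, 0.5), "reinforced concrete": (6.0, 0.5)}

def p_damage(depth, median, beta):
    # Lognormal CDF evaluated at the inundation depth
    return 0.5 * (1 + math.erf(math.log(depth / median) / (beta * math.sqrt(2))))

for btype, (m, b) in fragility.items():
    annual_risk = sum(p * p_damage(d, m, b) for d, p in hazard.items())
    print(f"{btype:>19}: annual damage probability ~ {annual_risk:.2e}")
```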

  11. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    Science.gov (United States)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, and similarities in the temporal variation of seismic activity and seismic gaps can be examined; on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and the other is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards on different time scales can be estimated. This paper mainly deals with medium- and short-term precursors observed in Japan and China.

  12. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    Science.gov (United States)

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of the species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of the community [F.X. Niell, Sobre la biologia de Ascophyllum nosodum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote of the sample diversity/biomass curve [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained confidence intervals as a particularization of the (h,phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example used to demonstrate the possibilities of this method, and only for illustrative purposes, data from a study on the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].
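
    As an illustration of reading a quantitative minimal area off a diversity/area curve, the sketch below fits a saturating curve and reports the area at which diversity reaches 90% of its asymptote; the functional form, the 90% criterion, and the data are illustrative assumptions, not the expression derived in the paper.

```python
# Sketch of estimating a quantitative minimal area as the sample size at which
# a fitted diversity/area curve approaches its horizontal asymptote.
import numpy as np
from scipy.optimize import curve_fit

area = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)        # sampled areas
diversity = np.array([1.1, 1.6, 2.0, 2.3, 2.45, 2.52, 2.55])  # Shannon H' per area

def model(a, d_inf, k):
    return d_inf * a / (k + a)  # saturating rise to the asymptote d_inf

(d_inf, k), _ = curve_fit(model, area, diversity, p0=(2.6, 2.0))
minimal_area = 9.0 * k  # model(a) = 0.9 * d_inf exactly when a = 9k
print(f"asymptotic diversity ~ {d_inf:.2f}; minimal area (90% criterion) ~ {minimal_area:.1f}")
```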

  13. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast-enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both the temporal sampling (1, 2, and 3 s sampling intervals) and the tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This
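
    A hedged sketch of one of the compared approaches, a one-tissue (two-compartment) model fitted to a synthetic time-attenuation curve; the input function, parameter values, and noise level are invented placeholders rather than the paper's simulation settings.

```python
# Hedged sketch: fit a simple one-tissue compartment model to a synthetic
# tissue time-attenuation curve. A simplified stand-in for the models
# compared in the study, not their implementation.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 40.0, 1.0)                      # seconds (assumed sampling)
aif = np.exp(-0.5 * (t - 10.0) ** 2 / 9.0)       # synthetic arterial input

def tissue_curve(t, k1, k2):
    # C_t(t) = k1 * (AIF convolved with exp(-k2 t)), discretized on the grid
    kernel = np.exp(-k2 * t)
    return k1 * np.convolve(aif, kernel)[: t.size] * (t[1] - t[0])

true_k1, true_k2 = 0.02, 0.05                    # illustrative values
rng = np.random.default_rng(2)
tac = tissue_curve(t, true_k1, true_k2) + rng.normal(0, 1e-4, t.size)

(k1, k2), _ = curve_fit(tissue_curve, t, tac, p0=(0.01, 0.1))
print(f"estimated K1 ~ {k1 * 60:.3f} ml/(min g)")  # K1 relates to flow x extraction
```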

  14. Quantitative estimation of seafloor features from photographs and their application to nodule mining

    Digital Repository Service at National Institute of Oceanography (India)

    Sharma, R.

    Methods developed for quantitative estimation of seafloor features from seabed photographs and their application for estimation of nodule sizes, coverage, abundance, burial, sediment thickness, extent of rock exposure, density of benthic organisms...

  15. Quantitative genetic tools for insecticide resistance risk assessment: estimating the heritability of resistance

    Science.gov (United States)

    Michael J. Firko; Jane Leslie Hayes

    1990-01-01

    Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
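
    One of the standard estimators alluded to here, the regression of offspring phenotype on the midparent value (whose slope estimates h²), can be sketched as follows with synthetic data; sibling analysis would proceed analogously from intraclass correlations.

```python
# Sketch of a standard heritability estimator: regress offspring phenotype on
# the midparent value; the slope estimates h^2. Data are synthetic with a
# known true heritability of 0.4.
import numpy as np

rng = np.random.default_rng(3)
n, h2_true = 500, 0.4
midparent = rng.normal(0.0, 1.0, n)      # standardized resistance phenotype
offspring = h2_true * midparent + rng.normal(0.0, np.sqrt(1.0 - h2_true**2), n)

slope = np.cov(midparent, offspring)[0, 1] / np.var(midparent, ddof=1)
print(f"estimated h^2 from midparent regression: {slope:.2f}")
```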

  16. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy of conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image-degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates from the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and from a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millennium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of In-111 ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based

  17. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  18. Quantitative Risk reduction estimation Tool For Control Systems, Suggested Approach and Research Needs

    Energy Technology Data Exchange (ETDEWEB)

    Miles McQueen; Wayne Boyer; Mark Flynn; Sam Alessi

    2006-03-01

    For the past year we have applied a variety of risk assessment technologies to evaluate the risk to critical infrastructure from cyber attacks on control systems. More recently, we identified the need for a stand-alone control system risk reduction estimation tool to provide owners and operators of control systems with a more usable, reliable, and credible method for managing the risks from cyber attack. Risk is defined as the probability of a successful attack times the value of the resulting loss, typically measured in lives and dollars. Qualitative and ad hoc techniques for measuring risk do not provide sufficient support for cost-benefit analyses associated with cyber security mitigation actions. To address the need for better quantitative risk reduction models, we surveyed previous quantitative risk assessment research; evaluated currently available tools; developed new quantitative techniques [17] [18]; implemented a prototype analysis tool to demonstrate how such a tool might be used; used the prototype to test a variety of underlying risk calculational engines (e.g. attack tree, attack graph); and identified technical and research needs. We concluded that significant gaps still exist and difficult research problems remain for quantitatively assessing the risk to control system components and networks, but that a usable quantitative risk reduction estimation tool is not beyond reach.
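
    The report's risk definition lends itself to a one-line calculation; the sketch below uses invented numbers purely to illustrate how a risk reduction estimate would feed a cost-benefit comparison.

```python
# Toy illustration of the risk definition used in the report:
# risk = P(successful attack) x value of the resulting loss. Numbers invented.
p_attack_before = 0.10      # annual probability of a successful attack
p_attack_after = 0.02       # after a candidate mitigation action
loss_value = 5.0e6          # consequence in dollars

risk_before = p_attack_before * loss_value
risk_after = p_attack_after * loss_value
print(f"annual risk reduction: ${risk_before - risk_after:,.0f}")
# A cost-benefit test then compares this reduction with the mitigation's cost.
```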

  19. Quantitative pre-surgical lung function estimation with SPECT/CT

    International Nuclear Information System (INIS)

    Bailey, D. L.; Willowson, K. P.; Timmins, S.; Harris, B. E.; Bailey, E. A.; Roach, P. J.

    2009-01-01

    Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. Methods: This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Eight patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO, and cardio-pulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung. Results: Post-operative FEV1 most closely matched that predicted by the planar quantification method, with SPECT V/Q over-estimating the loss of function by 8% (range -7 to +23%). However, post-operative DLCO and VO2 peak were both accurately predicted by SPECT V/Q (average errors of 0 and 2%, respectively) compared with planar. Conclusions: More accurate anatomical definition of lobar anatomy provides better estimates of post-operative loss of function for DLCO and VO2 peak than traditional planar methods. SPECT/CT provides the tools for accurate anatomical definition of the surgical target as well as being useful in producing quantitative 3D functional images for ventilation and perfusion.
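
    The counts-based prediction described here reduces to simple proportional arithmetic; the sketch below assumes placeholder lobar counts and baseline FEV1 (the lobe names and values are hypothetical, not the study's data).

```python
# Minimal sketch: post-operative function predicted as the baseline value
# scaled by the fraction of counts outside the lobe to be resected.
lobe_counts = {"RUL": 18e3, "RML": 8e3, "RLL": 24e3, "LUL": 26e3, "LLL": 24e3}
baseline_fev1 = 2.4  # litres (placeholder)
resected = "RUL"

total = sum(lobe_counts.values())
remaining_fraction = 1.0 - lobe_counts[resected] / total
predicted_fev1 = baseline_fev1 * remaining_fraction
print(f"predicted post-op FEV1 ~ {predicted_fev1:.2f} L "
      f"({remaining_fraction:.0%} of baseline)")
```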

  20. Quantitative estimation of diacetylmorphine by preparative TLC and UV spectroscopy

    International Nuclear Information System (INIS)

    Khan, L.; Siddiqui, M.T.; Ahmad, N.; Shafi, N.

    2001-01-01

    A simple and efficient method for the quantitative estimation of di acetylmorphine in narcotic products has been described. Comparative TLC of narcotic specimens with standards showed presence of morphine, monoacetylmorphine, diacetylmorphine papaverine and noscapine, Resolution of the mixtures was achieved by preparative TLC. Bands corresponding to diacetylmorphine scraped, eluted UV absorption of extracts measured and contents quantified. (author)

  1. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    Science.gov (United States)

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be

  2. Quantitative estimation of Nipah virus replication kinetics in vitro

    Directory of Open Access Journals (Sweden)

    Hassan Sharifah

    2006-06-01

    Background: Nipah virus is a zoonotic virus isolated from an outbreak in Malaysia in 1998. The virus causes infections in humans, pigs, and several other domestic animals. It has also been isolated from fruit bats. The pathogenesis of Nipah virus infection is still not well described. In the present study, Nipah virus replication kinetics were estimated from infection of African green monkey kidney cells (Vero) using the one-step SYBR® Green I-based quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR) assay. Results: The qRT-PCR had a dynamic range of at least seven orders of magnitude and can detect Nipah virus from as low as one PFU/μL. Following initiation of infection, it was estimated that Nipah virus RNA doubles every ~40 minutes and attained a peak intracellular virus RNA level of ~8.4 log PFU/μL at about 32 hours post-infection (PI). Significant extracellular Nipah virus RNA release occurred only after 8 hours PI and the level peaked at ~7.9 log PFU/μL at 64 hours PI. The estimated rate of Nipah virus RNA release into the cell culture medium was ~0.07 log PFU/μL per hour, and less than 10% of the released Nipah virus RNA was infectious. Conclusion: The SYBR® Green I-based qRT-PCR assay enabled quantitative assessment of Nipah virus RNA synthesis in Vero cells. A low rate of Nipah virus extracellular RNA release and low infectious virus yield, together with extensive syncytial formation during the infection, support a cell-to-cell spread mechanism for Nipah virus infection.
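
    The ~40-minute doubling time reported here is the kind of number that falls out of a log-linear fit to the exponential growth phase; the sketch below uses invented data points chosen only to illustrate the computation.

```python
# Sketch: read a doubling time off qRT-PCR growth data by fitting
# log2(RNA level) against time during the exponential phase. Data invented.
import numpy as np

hours = np.array([2, 4, 6, 8, 10], dtype=float)   # time post-infection
log10_rna = np.array([2.1, 3.0, 3.9, 4.8, 5.7])   # log10 PFU-equivalents/uL

# Convert log10 to log2 so the slope is in doublings per hour
slope, _ = np.polyfit(hours, log10_rna * np.log2(10.0), 1)
doubling_minutes = 60.0 / slope
print(f"estimated doubling time ~ {doubling_minutes:.0f} minutes")
```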

  3. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    Science.gov (United States)

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  4. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expression in normal and tumorous prostate tissues was confirmed by measuring staining intensity with immunohistochemical (IHC) staining. The expression of these proteins was measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results, and the computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, as measured by the ELISA assays themselves, were greatly affected by the epithelium content of the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than in normal tissues, with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with the IHC staining results, showing a significant increase only in EpCAM and no difference in CTSL expression in cancer tissues. These results
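
    The normalization step itself is a simple division by the estimated epithelium fraction; a minimal sketch with invented values:

```python
# Sketch: divide the ELISA concentration by the estimated epithelium fraction
# so epithelial proteins from specimens of differing composition become
# comparable. Sample IDs and values are invented placeholders.
samples = [
    {"id": "tumor_01", "epcam_elisa": 12.0, "epithelium_pct": 0.80},
    {"id": "normal_01", "epcam_elisa": 6.5, "epithelium_pct": 0.45},
]
for s in samples:
    normalized = s["epcam_elisa"] / s["epithelium_pct"]
    print(f'{s["id"]}: raw {s["epcam_elisa"]:.1f} -> '
          f'normalized {normalized:.1f} per unit epithelium')
```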

  5. Using the "Epiquant" automatic analyzer for quantitative estimation of grain size

    Energy Technology Data Exchange (ETDEWEB)

    Tsivirko, E I; Ulitenko, A N; Stetsenko, I A; Burova, N M [Zaporozhskij Mashinostroitel' nyj Inst. (Ukrainian SSR)

    1979-01-01

    The possibility of applying the "Epiquant" automatic analyzer to the quantitative estimation of austenite grain size in 18Kh2N4VA steel has been investigated. The austenite grain was revealed using the methods of cementation, oxidation and etching of the grain boundaries. The average linear grain size over a traverse length of 15 mm was determined from the total length of the grain intersection line and the number of intersections with grain boundaries. It is shown that the "Epiquant" analyzer ensures quantitative estimation of austenite grain size with a relative error of 2-4%.

  6. Physiological frailty index (PFI): quantitative in-life estimate of individual biological age in mice.

    Science.gov (United States)

    Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V

    2017-03-19

    The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task, since it includes multiple subjective parameters. Here we report the development of a quantitative, non-invasive procedure for estimating the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of the PFI in animals placed on a high-fat diet, reflecting aging acceleration by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that the PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. The PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging-modulatory factors. Together, these data introduce the PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.

  7. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative (NMQ/Q2) based on the WSR-88D Next-generation Radar (Nexrad) network over the Continental United States (CONUS) is completed for the period covering from 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed by using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating differing resolution and quality networks to generate long-term large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower resolution QPEs derived from ground based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
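
    Of the merging techniques listed, inverse distance weighting is the simplest to sketch; below, gauge-minus-radar differences at gauge sites are interpolated to a grid cell and added to the radar-only estimate. Coordinates, values, and the power parameter are illustrative assumptions.

```python
# Minimal inverse-distance-weighting (IDW) sketch: adjust a radar QPE value
# toward rain gauges by interpolating gauge-minus-radar differences.
import numpy as np

gauge_xy = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])  # gauge sites
gauge_err = np.array([1.2, -0.8, 0.4])  # gauge minus radar (mm) at those sites

def idw_correction(px, py, power=2.0, eps=1e-6):
    d = np.hypot(gauge_xy[:, 0] - px, gauge_xy[:, 1] - py) + eps
    w = 1.0 / d**power
    return np.sum(w * gauge_err) / np.sum(w)

radar_value = 5.0  # radar-only estimate (mm) at an arbitrary grid cell
merged = radar_value + idw_correction(20.0, 20.0)
print(f"merged estimate: {merged:.2f} mm")
```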

  8. The quantitative Morse theorem

    OpenAIRE

    Loi, Ta Le; Phien, Phan

    2013-01-01

    In this paper, we give a proof of the quantitative Morse theorem stated by Y. Yomdin in [Y1]. The proof is based on the quantitative Sard theorem, the quantitative inverse function theorem and the quantitative Morse lemma.

  9. Giving in Europe : The state of research on giving in 20 European countries

    NARCIS (Netherlands)

    Hoolwerf, L.K.; Schuyt, T.N.M.

    2017-01-01

    This study is an initial attempt to map philanthropy in Europe and presents a first overall estimation of the European philanthropic sector. Containing an overview of what we know from research on the philanthropy sector, it provides data and an assessment of the data on giving by households,

  10. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    Science.gov (United States)

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
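
    The bias the correction addresses is easy to reproduce by simulation: regressing the trait on genotype using only the two tails inflates the slope. A sketch follows (the paper's correction itself is not reproduced here).

```python
# Simulation sketch of the problem: under two-tail extreme selection, naive
# linear regression of trait on genotype inflates the genetic effect estimate.
import numpy as np

rng = np.random.default_rng(4)
n, beta = 100_000, 0.2
g = rng.binomial(2, 0.3, n).astype(float)        # additive genotype (0/1/2)
y = beta * g + rng.standard_normal(n)            # quantitative trait

lo, hi = np.quantile(y, [0.05, 0.95])
sel = (y < lo) | (y > hi)                        # genotype only the two tails

def ols_slope(x, yy):
    return np.cov(x, yy)[0, 1] / np.var(x, ddof=1)

print(f"full-sample slope:   {ols_slope(g, y):.3f}")
print(f"extreme-tails slope: {ols_slope(g[sel], y[sel]):.3f}  (biased upward)")
```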

  11. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
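
    Approach (ii) amounts to summarizing a laboratory's differences from participant means over many tests; a minimal sketch with invented proficiency-test values:

```python
# Sketch of approach (ii): estimate a blood-alcohol measurement uncertainty
# from a laboratory's differences against participant means. Values invented.
import numpy as np

lab_result = np.array([0.081, 0.102, 0.149, 0.078, 0.121])       # g/100 mL
participant_mean = np.array([0.080, 0.100, 0.152, 0.080, 0.119])

diff = lab_result - participant_mean
u = np.sqrt(np.mean(diff**2))        # RMS difference as a standard uncertainty
print(f"bias {np.mean(diff):+.4f}, standard uncertainty ~ {u:.4f} g/100 mL")
# An expanded uncertainty would multiply u by a coverage factor (e.g. k = 2).
```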

  12. Apparent polyploidization after gamma irradiation: pitfalls in the use of quantitative polymerase chain reaction (qPCR) for the estimation of mitochondrial and nuclear DNA gene copy numbers.

    Science.gov (United States)

    Kam, Winnie W Y; Lake, Vanessa; Banos, Connie; Davies, Justin; Banati, Richard

    2013-05-30

    Quantitative polymerase chain reaction (qPCR) has been widely used to quantify changes in gene copy numbers after radiation exposure. Here, we show that gamma irradiation of cells and cell-free DNA samples at doses ranging from 10 to 100 Gy significantly affects the measured qPCR yield, due to radiation-induced fragmentation of the DNA template, and therefore introduces errors into the estimation of gene copy numbers. The radiation-induced DNA fragmentation, and thus the measured qPCR yield, varies with temperature not only in living cells, but also in isolated DNA irradiated under cell-free conditions. In summary, the variability in measured qPCR yield from irradiated samples introduces a significant error into the estimation of both mitochondrial and nuclear gene copy numbers and may give spurious evidence for polyploidization.

  13. Quantitative estimation of groundwater recharge with special reference to the use of natural radioactive isotopes and hydrological simulation

    International Nuclear Information System (INIS)

    Bredenkamp, D.B.

    1978-01-01

    Methods for the quantitative estimation of groundwater recharge have been evaluated to 1) illustrate uncertainties associated with the methods usually applied, 2) indicate some of the simplifying assumptions inherent to a specific method, 3) propagate the use of more than one technique in order to improve the reliability of the combined recharge estimate, and 4) propose a hydrological model by which the annual recharge and the annual variability of recharge could be ascertained. Classical methods such as the water balance equation and flow nets have been reviewed. The use of environmental tritium and radiocarbon has been illustrated as a means of obtaining qualitative answers on the occurrence of recharge and of revealing the effective mechanism of groundwater recharge through the soil. Quantitative estimation of recharge from the ratio of recharge to storage has been demonstrated for the Kuruman recharge basin. Methods of interpreting tritium profiles in order to obtain a quantitative estimate of recharge have been shown, with application of the technique for Rietondale and a dolomitic aquifer in the Western Transvaal. The major part of the thesis has been devoted to the use of a hydrological model as a means of estimating groundwater recharge. Subsequent to a general discussion of the conceptual logic, various models have been proposed and tested

  14. Quantitative estimates of the volatility of ambient organic aerosol

    Science.gov (United States)

    Cappa, C. D.; Jimenez, J. L.

    2010-06-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions, whereas the dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a one- or two-component model, the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the high and variable ΔHvap assumptions. Our results also show that the amount of semivolatile gas-phase organics in equilibrium with the OA could range from ~20
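
    The partitioning arithmetic underlying such volatility basis sets is compact; the sketch below evaluates the particle-phase fraction for an illustrative five-bin distribution (the bin values and C_OA are assumptions, not the paper's fitted distributions).

```python
# Sketch of absorptive-partitioning arithmetic behind a volatility basis set:
# the particle-phase fraction of material with saturation concentration C*
# is 1 / (1 + C*/C_OA). Basis-set values below are illustrative.
c_star = [0.01, 0.1, 1.0, 10.0, 100.0]   # saturation concentrations (ug/m^3)
mass = [0.30, 0.25, 0.20, 0.15, 0.10]    # mass fraction in each volatility bin
c_oa = 10.0                              # ambient OA concentration (ug/m^3)

particle_fraction = sum(m / (1.0 + cs / c_oa) for m, cs in zip(mass, c_star))
print(f"fraction of material in the particle phase: {particle_fraction:.2f}")
# Dilution lowers C_OA, shifting semivolatile bins to the gas phase; this is
# why distributions dominated by low-C* (non-volatile) bins are less sensitive.
```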

  15. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection

    OpenAIRE

    Kwan, Johnny S. H.; Kung, Annie W. C.; Sham, Pak C.

    2011-01-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias. © The Author(s) 2011.

  16. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  17. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time-like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space-like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  18. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time-like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space-like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  19. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are

  20. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  1. Quantitative Estimation of Transmitted and Reflected Lamb Waves at Discontinuity

    International Nuclear Information System (INIS)

    Lim, Hyung Jin; Sohn, Hoon

    2010-01-01

    For the application of Lamb waves to structural health monitoring (SHM), understanding their physical characteristics and the interaction between Lamb waves and defects in the host structure is an important issue. In this study, reflected, transmitted and mode-converted Lamb waves at a discontinuity of a plate structure were simulated, and the amplitude ratios were calculated theoretically using the modal decomposition method. The predicted results were verified by comparison with finite element method (FEM) and experimental results simulating attached PZTs. The results show that the theoretical prediction is close to the FEM and experimental verification. Moreover, a quantitative estimation method is suggested using the amplitude ratio of Lamb waves at a discontinuity

  2. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the true IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized, and 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 states. • Area under the ROC curve of the proposed estimation method reached 0.86.
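
    The evaluation loop described (LOOCV of a classifier over selected radiomic features) can be sketched as follows; the feature matrix, labels, and classifier are random placeholders, not the paper's pipeline.

```python
# Hedged sketch of the evaluation loop: leave-one-out cross-validation of a
# classifier on radiomic feature vectors against IDH1 labels. Placeholders only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
X = rng.standard_normal((110, 20))   # 110 patients x selected features (fake)
y = rng.integers(0, 2, 110)          # IDH1 mutation status, 0/1 (fake)

pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneOut())
print(f"LOOCV accuracy: {(pred == y).mean():.2f}")
```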

  3. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Both models give a better fit to observed qPCR data than other kinetic models in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than Model 2, giving better estimates of initial target concentration when parameter estimation was done for qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. Conclusions: It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.

  4. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of targets and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
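
    The simultaneous-fitting idea lends itself to a compact sketch. The stepwise model below is a deliberate simplification in the spirit of Model 2 (irreversible annealing, efficiency limited by primer depletion), not the paper's exact equilibrium solution; the rate parameter and initial primer amount are shared across curves while each curve keeps its own initial target concentration.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulate_curve(t0, k, p0, n_cycles):
        """Toy stepwise qPCR model: per-cycle efficiency falls as primers
        deplete (a simplification, not the paper's equilibrium formulae)."""
        t, p, out = t0, p0, []
        for _ in range(n_cycles):
            eff = p / (p + k)           # efficiency limited by remaining primer
            new = t * eff
            t, p = t + new, max(p - new, 0.0)
            out.append(t)
        return np.array(out)

    def fit_shared(curves, k0=1e3, p0=1e8):
        """Simultaneous fit: one (k, p0) pair shared by all curves, one
        initial target concentration per curve (curves: list of per-cycle
        signal arrays from solutions differing only in initial target)."""
        def residuals(theta):
            k, p, t0s = theta[0], theta[1], theta[2:]
            return np.concatenate([simulate_curve(t0, k, p, len(c)) - c
                                   for t0, c in zip(t0s, curves)])
        x0 = np.r_[k0, p0, np.ones(len(curves))]
        return least_squares(residuals, x0)
    ```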

  5. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rate using a Z-R relationship. In order to evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations, although Experiment I suffered from underestimation. Experiment II reduced this underestimation, and Experiment III reduced it significantly further, generating the most accurate radar estimates of rain rate among the three experiments.
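
    The regime-dependent conversion at the core of the three experiments is a one-line inversion of a Z-R power law. The coefficient pairs below are commonly cited defaults from the radar literature, not necessarily the values used in this study.

    ```python
    import numpy as np

    # Illustrative Z-R coefficients (Z = a * R**b) from the radar literature.
    ZR_COEFFS = {
        "convective": (300.0, 1.4),   # WSR-88D default
        "stratiform": (200.0, 1.6),   # Marshall-Palmer
        "tropical":   (250.0, 1.2),   # Rosenfeld tropical
    }

    def rain_rate(dbz: np.ndarray, precip_type: str) -> np.ndarray:
        """Convert reflectivity (dBZ) to rain rate (mm/h) for one regime."""
        a, b = ZR_COEFFS[precip_type]
        z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> Z in mm^6 m^-3
        return (z_linear / a) ** (1.0 / b)   # invert Z = a * R**b
    ```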

  6. In Vivo Validation of a Blood Vector Velocity Estimator with MR Angiography

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2009-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound beam direction. This implies that a Doppler angle under examination close to 90° results in unreliable information about the true blood direction and blood velocity. The novel method...... indicate that reliable vector velocity estimates can be obtained in vivo using the presented angle-independent 2-D vector velocity method. The TO method can be a useful alternative to conventional Doppler systems by avoiding the angle artifact, thus giving quantitative velocity information....

  7. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse and uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  8. Quantitative estimates of the volatility of ambient organic aerosol

    Directory of Open Access Journals (Sweden)

    C. D. Cappa

    2010-06-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one global and one local. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumption, whereas the dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a one- or two-component model, the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the
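
    Two quantities underlying such a volatility-basis-set analysis are easy to write down: the equilibrium particle-phase fraction of each volatility bin, and the Clausius-Clapeyron shift of its saturation concentration with temperature for an assumed ΔHvap. A minimal sketch of these standard relations (not the paper's full kinetic TD model):

    ```python
    import numpy as np

    def particle_fraction(c_star, c_oa):
        """Equilibrium particle-phase fraction of a volatility bin in
        absorptive partitioning theory: X_i = 1 / (1 + C*_i / C_OA),
        with C* the bin's saturation concentration and C_OA the total
        OA mass concentration (both in ug/m^3)."""
        return 1.0 / (1.0 + np.asarray(c_star) / c_oa)

    def c_star_at_T(c_star_298, dhvap_kj, T):
        """Clausius-Clapeyron temperature dependence of C* for an assumed
        enthalpy of vaporization dHvap (kJ/mol)."""
        R = 8.314e-3  # kJ/(mol K)
        return (c_star_298 * (298.15 / T)
                * np.exp(-dhvap_kj / R * (1.0 / T - 1.0 / 298.15)))
    ```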

  9. Bayesian estimation in homodyne interferometry

    International Nuclear Information System (INIS)

    Olivares, Stefano; Paris, Matteo G A

    2009-01-01

    We address phase-shift estimation by means of a squeezed vacuum probe and homodyne detection. We analyse the Bayesian estimator, which is known to asymptotically saturate the classical Cramér-Rao bound on the variance, and discuss its convergence by looking at the a posteriori distribution as the number of measurements increases. We also suggest two feasible adaptive methods, acting on the squeezing parameter and/or the homodyne local oscillator phase, which allow us to optimize homodyne detection and approach the ultimate bound on precision imposed by the quantum Cramér-Rao theorem. The performance of our two-step methods is investigated by means of Monte Carlo simulated experiments with a small number of homodyne data, thus giving a quantitative meaning to the notion of asymptotic optimality.
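
    A grid-based sketch of the Bayesian update for the phase, assuming the standard single-mode model in which the measured quadrature of a squeezed-vacuum probe is zero-mean Gaussian with a phase-dependent variance (units chosen so the vacuum variance is absorbed into the scale; this is an illustrative assumption, not the paper's exact likelihood).

    ```python
    import numpy as np

    def quad_variance(phi, r):
        """Phase-dependent quadrature variance of a squeezed-vacuum probe
        with squeezing parameter r (standard single-mode model)."""
        return np.exp(-2 * r) * np.cos(phi) ** 2 + np.exp(2 * r) * np.sin(phi) ** 2

    def posterior(samples, r, n_grid=500):
        """Bayesian posterior over the phase on a grid, with a flat prior.
        The posterior narrows as homodyne samples accumulate, illustrating
        the asymptotic saturation of the Cramer-Rao bound."""
        grid = np.linspace(1e-3, np.pi / 2, n_grid)
        var = quad_variance(grid[:, None], r)
        loglik = -0.5 * (samples[None, :] ** 2 / var + np.log(2 * np.pi * var))
        logpost = loglik.sum(axis=1)
        post = np.exp(logpost - logpost.max())
        return grid, post / (post.sum() * (grid[1] - grid[0]))
    ```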

  10. Estimating diversification rates for higher taxa: BAMM can give problematic estimates of rates and rate shifts.

    Science.gov (United States)

    Meyer, Andreas L S; Wiens, John J

    2018-01-01

    Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree, and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem-group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
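
    For reference, the stem-age variant of the method-of-moments estimator discussed above (commonly attributed to Magallon and Sanderson 2001) reduces to a one-liner; eps is the assumed relative extinction fraction (extinction/speciation).

    ```python
    import math

    def ms_stem_rate(n_species: int, stem_age: float, eps: float = 0.0) -> float:
        """Method-of-moments net diversification rate from a clade's species
        richness and stem age: r = ln(n*(1-eps) + eps) / t_stem."""
        return math.log(n_species * (1 - eps) + eps) / stem_age

    # e.g. a clade of 120 species with a 30-Myr stem age and eps = 0.5:
    # ms_stem_rate(120, 30.0, eps=0.5)  # ~= 0.14 lineages/lineage/Myr
    ```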

  11. Testing for altruism and social pressure in charitable giving.

    Science.gov (United States)

    DellaVigna, Stefano; List, John A; Malmendier, Ulrike

    2012-01-01

    Every year, 90% of Americans give money to charities. Is such generosity necessarily welfare enhancing for the giver? We present a theoretical framework that distinguishes two types of motivation: individuals like to give, for example, due to altruism or warm glow, and individuals would rather not give but dislike saying no, for example, due to social pressure. We design a door-to-door fund-raiser in which some households are informed about the exact time of solicitation with a flyer on their doorknobs. Thus, they can seek or avoid the fund-raiser. We find that the flyer reduces the share of households opening the door by 9% to 25% and, if the flyer allows checking a Do Not Disturb box, reduces giving by 28% to 42%. The latter decrease is concentrated among donations smaller than $10. These findings suggest that social pressure is an important determinant of door-to-door giving. Combining data from this and a complementary field experiment, we structurally estimate the model. The estimated social pressure cost of saying no to a solicitor is $3.80 for an in-state charity and $1.40 for an out-of-state charity. Our welfare calculations suggest that our door-to-door fund-raising campaigns on average lower the utility of the potential donors.

  12. Estimation of genetic parameters and detection of quantitative trait loci for metabolites in Danish Holstein milk

    DEFF Research Database (Denmark)

    Buitenhuis, Albert Johannes; Sundekilde, Ulrik; Poulsen, Nina Aagaard

    2013-01-01

    Small components and metabolites in milk are significant for the utilization of milk, not only in dairy food production but also as disease predictors in dairy cattle. This study focused on estimation of genetic parameters and detection of quantitative trait loci for metabolites in bovine milk. F...... for lactic acid to >0.8 for orotic acid and β-hydroxybutyrate. A single SNP association analysis revealed 7 genome-wide significant quantitative trait loci [malonate: Bos taurus autosome (BTA)2 and BTA7; galactose-1-phosphate: BTA2; cis-aconitate: BTA11; urea: BTA12; carnitine: BTA25...

  13. Fatalities in high altitude mountaineering: a review of quantitative risk estimates.

    Science.gov (United States)

    Weinbruch, Stephan; Nordby, Karl-Christian

    2013-12-01

    Quantitative estimates for mortality in high altitude mountaineering are reviewed. Special emphasis is placed on the heterogeneity of the risk estimates and on confounding. Crude estimates for mortality are on the order of 1/1000 to 40/1000 persons above base camp, for both expedition members and high altitude porters. High altitude porters mostly have a lower risk than expedition members (risk ratio for all Nepalese peaks requiring an expedition permit: 0.73; 95% confidence interval 0.59-0.89). The summit bid is generally the most dangerous part of an expedition for members, whereas most high altitude porters die during route preparation. On 8000 m peaks, the mortality during descent from the summit varies between 4/1000 and 134/1000 summiteers (members plus porters). The risk estimates are confounded by human and environmental factors. Information on confounding by gender and age is contradictory and requires further work. There are indications of safety segregation between men and women, with women being more risk averse than men. Citizenship appears to be a significant confounder. Prior high altitude mountaineering experience in Nepal has no protective effect. Commercial expeditions in the Nepalese Himalayas have a lower mortality than traditional expeditions, though after controlling for confounding the difference is not statistically significant. The overall mortality increases with peak altitude for expedition members but not for high altitude porters. In the Nepalese Himalayas and in Alaska, a significant decrease of mortality with calendar year was observed. A few suggestions for further work are made at the end of the article.
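
    Risk ratios with Wald confidence intervals of the kind quoted above (e.g. 0.73, 95% CI 0.59-0.89) can be computed from raw death and exposure counts; the counts in the usage comment are purely hypothetical.

    ```python
    import math

    def risk_ratio_ci(deaths_1, n_1, deaths_2, n_2, z=1.96):
        """Risk ratio of group 1 vs group 2 with a Wald 95% CI computed
        on the log scale."""
        rr = (deaths_1 / n_1) / (deaths_2 / n_2)
        se = math.sqrt(1 / deaths_1 - 1 / n_1 + 1 / deaths_2 - 1 / n_2)
        return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))

    # e.g. risk_ratio_ci(60, 10000, 55, 6700)  # hypothetical porter/member counts
    ```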

  14. Quantitative Estimates of Bio-Remodeling on Coastal Rock Surfaces

    Directory of Open Access Journals (Sweden)

    Marta Pappalardo

    2016-05-01

    Remodeling of rocky coasts and erosion rates have been widely studied in past years, but not all of the processes acting on rock surfaces have been quantitatively evaluated yet. The first goal of this paper is to revise the different methodologies employed in quantifying the effect of biotic agents on rocks exposed to coastal morphologic agents, comparing their efficiency. Secondly, we focus on geological methods to assess and quantify bio-remodeling, presenting case studies from an area of the Mediterranean Sea in which different geological methods, inspired by the revised literature, have been tested in order to provide a quantitative assessment of the effects some biological covers exert on rocky platforms in tidal and supra-tidal environments. In particular, different experimental designs based on Schmidt hammer test results have been applied in order to estimate rock hardness related to different orders of littoral platforms and the bio-erosive/bio-protective role of Chthamalus spp. and Verrucaria adriatica. All collected data have been analyzed using statistical tests to evaluate the significance of the measures and methodologies. The effectiveness of this approach is analyzed, and its limits are highlighted. In order to overcome the latter, a strategy combining geological and experimental-computational approaches is proposed, potentially capable of revealing novel clues on bio-erosion dynamics. An experimental-computational proposal, to assess the indirect effects of biofilm coverage of rocky shores, is presented in this paper, focusing on the shear forces exerted during hydration-dehydration cycles. The results of computational modeling can be compared to experimental evidence, from nanoscopic to macroscopic scales.

  15. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    Science.gov (United States)

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at regional scale. Making a rational estimate of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for the accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of the vegetation cover and management factor over broad geographic areas. This paper summarizes the research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data and analyzes the advantages and disadvantages of the various methods, aiming to provide a reference for further research and the quantitative estimation of the vegetation cover and management factor at large scale.
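
    Among the remote-sensing approaches typically reviewed in this context, a common one maps NDVI directly to the cover-management factor. The exponential form below is one published approximation (van der Knijff et al.), shown purely as an illustration; the review compares several such methods.

    ```python
    import numpy as np

    def c_factor_from_ndvi(ndvi: np.ndarray, alpha: float = 2.0,
                           beta: float = 1.0) -> np.ndarray:
        """USLE/RUSLE cover-management factor C approximated from NDVI:
        C = exp(-alpha * NDVI / (beta - NDVI)), clipped to [0, 1].
        Valid for NDVI < beta; alpha and beta are empirical constants."""
        c = np.exp(-alpha * ndvi / (beta - ndvi))
        return np.clip(c, 0.0, 1.0)
    ```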

  16. Estimation of the number of fluorescent end-members for quantitative analysis of multispectral FLIM data.

    Science.gov (United States)

    Gutierrez-Navarro, Omar; Campos-Delgado, Daniel U; Arce-Santana, Edgar R; Maitland, Kristen C; Cheng, Shuna; Jabbour, Joey; Malik, Bilal; Cuenca, Rodrigo; Jo, Javier A

    2014-05-19

    Multispectral fluorescence lifetime imaging (m-FLIM) can potentially allow identification of the endogenous fluorophores present in biological tissue. Quantitative description of such data requires estimating the number of components in the sample, their characteristic fluorescent decays, and their relative contributions or abundances. Unfortunately, this inverse problem usually requires prior knowledge about the data, which is seldom available in biomedical applications. This work presents a new methodology to estimate the number of potential endogenous fluorophores present in biological tissue samples from time-domain m-FLIM data. Furthermore, a completely blind linear unmixing algorithm is proposed. The method was validated using both synthetic and experimental m-FLIM data. The experimental m-FLIM data include in-vivo measurements from healthy and cancerous hamster cheek-pouch epithelial tissue, and ex-vivo measurements from human coronary atherosclerotic plaques. The analysis of m-FLIM data from in-vivo hamster oral mucosa distinguished healthy tissue from precancerous lesions, based on the relative concentrations of their characteristic fluorophores. The algorithm also provided a better description of atherosclerotic plaques in terms of their endogenous fluorophores. These results demonstrate the potential of this methodology to provide a quantitative description of tissue biochemical composition.
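
    Once the end-member decay profiles and their number are known, the per-pixel abundance step is a constrained linear inversion. The NNLS sketch below illustrates that step only; the paper's algorithm is blind, i.e. it also estimates the end-members and their number, which this omits.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix_abundances(decays: np.ndarray, pixel: np.ndarray) -> np.ndarray:
        """Nonnegative abundances of fluorescent end-members for one pixel.
        decays: (n_time_samples, n_endmembers) characteristic decay profiles;
        pixel: (n_time_samples,) measured m-FLIM decay for that pixel."""
        a, _residual = nnls(decays, pixel)
        return a / a.sum() if a.sum() > 0 else a  # relative contributions
    ```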

  17. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    Science.gov (United States)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian and a newly proposed Lagrangian implementation. For the latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for stratiform precipitation and for precipitation classified as neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analysis of the impact of VPR uncertainty shows that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.

  18. Operations management research methodologies using quantitative modeling

    NARCIS (Netherlands)

    Bertrand, J.W.M.; Fransoo, J.C.

    2002-01-01

    Gives an overview of quantitative model-based research in operations management, focusing on research methodology. Distinguishes between empirical and axiomatic research, and furthermore between descriptive and normative research. Presents guidelines for doing quantitative model-based research in

  19. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    Energy Technology Data Exchange (ETDEWEB)

    Tadayyon, Hadi [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Sadeghi-Naini, Ali; Czarnota, Gregory, E-mail: Gregory.Czarnota@sunnybrook.ca [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Radiation Oncology, Faculty of Medicine, University of Toronto, Toronto, Ontario M5T 1P5 (Canada); Wirtzfeld, Lauren [Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Wright, Frances C. [Division of Surgical Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada)

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor

  20. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    International Nuclear Information System (INIS)

    Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory; Wirtzfeld, Lauren; Wright, Frances C.

    2014-01-01

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor
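
    The three linear-regression parameters named in the two records above (midband fit, slope, 0-MHz intercept) come from a straight-line fit to the calibrated power spectrum over the usable bandwidth. A minimal sketch, with frequencies in MHz and spectrum in dB (the frequency grid and dB scaling are assumptions):

    ```python
    import numpy as np

    def spectral_fit_params(freqs_mhz: np.ndarray, power_db: np.ndarray):
        """Linear fit of the calibrated power spectrum: returns the spectral
        slope (dB/MHz), the 0-MHz intercept (dB), and the midband fit
        (fitted value at the band centre, dB)."""
        slope, intercept = np.polyfit(freqs_mhz, power_db, 1)
        midband = slope * freqs_mhz.mean() + intercept
        return slope, intercept, midband
    ```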

  1. Improved dose–volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    International Nuclear Information System (INIS)

    Cheng Lishui; Hobbs, Robert F; Sgouros, George; Frey, Eric C; Segars, Paul W

    2013-01-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose–volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator–detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  2. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Science.gov (United States)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
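
    The summary object whose degradation is studied in the two records above is itself simple to compute. A minimal cumulative-DVH sketch given a dose map and an organ mask (the array layouts are assumptions):

    ```python
    import numpy as np

    def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, n_bins: int = 200):
        """Cumulative dose-volume histogram for one organ: the fraction of the
        organ volume receiving at least each dose level. dose is a 3D absorbed
        dose (or dose-rate) map; mask selects the organ voxels."""
        d = dose[mask > 0].ravel()
        edges = np.linspace(0.0, d.max(), n_bins)
        frac = np.array([(d >= e).mean() for e in edges])
        return edges, frac
    ```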

  3. A direct method for estimating the alpha/beta ratio from quantitative dose-response data

    International Nuclear Information System (INIS)

    Stuschke, M.

    1989-01-01

    A one-step optimization method based on a least squares fit of the linear quadratic model to quantitative tissue response data after fractionated irradiation is proposed. Suitable end-points that can be analysed by this method are growth delay, host survival and quantitative biochemical or clinical laboratory data. The functional dependence between the transformed dose and the measured response is approximated by a polynomial. The method allows for the estimation of the alpha/beta ratio and its confidence limits from all observed responses of the different fractionation schedules. Censored data can be included in the analysis. A method to test the appropriateness of the fit is presented. A computer simulation illustrates the method and its accuracy, as exemplified by the growth delay end-point. A comparison with a fit of the linear quadratic model to interpolated isoeffect doses shows the advantages of the direct method. (orig./HP) [de]
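
    A stripped-down version of the idea: if the response is (locally) linear in the LQ dose variables, alpha/beta follows from an ordinary least-squares fit. This sketch omits the polynomial link, censored data and confidence limits that the paper's method provides.

    ```python
    import numpy as np

    def alpha_beta_ratio(total_dose, dose_per_fraction, effect):
        """Least-squares fit of the LQ model E = alpha*D + beta*D*d
        (D = total dose, d = dose per fraction); returns alpha/beta."""
        D = np.asarray(total_dose, dtype=float)
        d = np.asarray(dose_per_fraction, dtype=float)
        X = np.column_stack([D, D * d])
        (alpha, beta), *_ = np.linalg.lstsq(X, np.asarray(effect, float),
                                            rcond=None)
        return alpha / beta
    ```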

  4. Application of short-wave infrared (SWIR) spectroscopy in quantitative estimation of clay mineral contents

    International Nuclear Information System (INIS)

    You, Jinfeng; Xing, Lixin; Pan, Jun; Meng, Tao; Liang, Liheng

    2014-01-01

    Clay minerals are significant constituents of soil and are necessary for life. This paper studied three types of clay minerals, kaolinite, illite, and montmorillonite, since they are not only the most common soil-forming materials but also important indicators of soil expansion and shrinkage potential. These clay minerals show diagnostic absorption bands resulting from vibrations of hydroxyl groups and structural water molecules in the SWIR wavelength region. The short-wave infrared reflectance spectra of the soil were obtained from a Portable Near Infrared Spectrometer (PNIS, spectral range: 1300∼2500 nm, interval: 2 nm). Owing to its simplicity, quickness, and non-destructiveness, SWIR spectroscopy has been widely used in geological prospecting, chemical engineering and many other fields. The aim of this study was to use multiple linear regression (MLR) and partial least squares (PLS) regression to establish optimized quantitative estimation models of the kaolinite, illite and montmorillonite contents from soil reflectance spectra. Here, the soil reflectance spectra mainly refer to the spectral reflectivity of soil (SRS) corresponding to the absorption-band positions (AP) of the kaolinite, illite, and montmorillonite representative spectra from the USGS spectral library, the SRS corresponding to the AP of the soil spectrum, and the soil overall spectrum reflectance values. The optimal estimation models of the three clay mineral contents showed satisfactory retrieval accuracy (kaolinite content: a root mean square error of calibration (RMSEC) of 1.671 with a coefficient of determination (R²) of 0.791; illite content: a RMSEC of 1.126 with a R² of 0.616; montmorillonite content: a RMSEC of 1.814 with a R² of 0.707). Thus, the reflectance spectra of soil obtained from the PNIS can be used for quantitative estimation of kaolinite, illite and montmorillonite contents in soil.
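
    A minimal PLS calibration in the spirit of the models above, using scikit-learn; the feature layout (reflectance sampled at the diagnostic absorption-band positions) and the number of components are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    def fit_pls(X: np.ndarray, y: np.ndarray, n_components: int = 5):
        """X: (n_samples, n_bands) SWIR reflectance values; y: measured
        clay-mineral content (%). Returns the fitted model and the RMSEC."""
        pls = PLSRegression(n_components=n_components).fit(X, y)
        rmsec = mean_squared_error(y, pls.predict(X)) ** 0.5
        return pls, rmsec
    ```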

  5. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography.

    Science.gov (United States)

    Zamir, Ehud; Kong, George Y X; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-07-01

    We hypothesize that: (1) anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side, and (2) such a correlation may be used as a simple quantitative tool for estimation of ACD. Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed as the ratio between (1) the lateral photographic temporal limbus to pupil distance ("E") and (2) the lateral photographic temporal limbus to cornea distance ("Z"). In the first chronological half of patients (Correlation Series), the E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. A strong linear correlation was found between EZR and ACD, R = -0.91, R² = 0.81. Bland-Altman analysis showed good agreement between the ACD predicted by this method and the optical biometric ACD. The mean error was -0.013 mm (range -0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Lateral digital photography with EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. The EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications against their risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations.
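
    The two-stage design (calibrate on the Correlation Series, predict on the Prediction Series) is a plain linear regression. The paper reports R = -0.91 but not the equation itself, so the coefficients in this sketch are re-estimated from data.

    ```python
    import numpy as np

    def calibrate_ezr_to_acd(ezr: np.ndarray, acd_mm: np.ndarray):
        """Fit the linear EZR -> ACD relation on the Correlation Series and
        return a predictor for new eyes (Prediction Series)."""
        slope, intercept = np.polyfit(ezr, acd_mm, 1)
        return lambda new_ezr: slope * np.asarray(new_ezr) + intercept

    # predict = calibrate_ezr_to_acd(ezr_train, acd_train)
    # acd_hat = predict(ezr_test)   # compare to biometric ACD (Bland-Altman)
    ```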

  6. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of this level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  7. Estimation of quantitative levels of diesel exhaust exposure and the health impact in the contemporary Australian mining industry

    NARCIS (Netherlands)

    Peters, Susan; de Klerk, Nicholas; Reid, Alison; Fritschi, Lin; Musk, Aw Bill; Vermeulen, Roel

    2017-01-01

    OBJECTIVES: To estimate quantitative levels of exposure to diesel exhaust expressed by elemental carbon (EC) in the contemporary mining industry and to describe the excess risk of lung cancer that may result from those levels. METHODS: EC exposure has been monitored in Western Australian miners

  8. Quantitative estimation of the extent of alkylation of DNA following treatment of mammalian cells with non-radioactive alkylating agents

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, R.D. (Univ. of Tennessee, Oak Ridge); Regan, J.D.

    1981-01-01

    Alkaline sucrose sedimentation has been used to quantitate phosphotriester formation following treatment of human cells with the monofunctional alkylating agents methyl and ethyl methanesulfonate. These persistent alkaline-labile lesions are not repaired under short-term culture conditions and thus serve as a useful and precise index of the total alkylation of the DNA. Estimates of alkylation by this procedure compare favorably with direct estimates obtained by use of labeled alkylating agents.
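
    For random scission, the frequency of alkali-labile lesions follows from the number-average molecular weights measured by alkaline sucrose sedimentation. A minimal sketch of that standard relation (assuming breakage is random; this is the generic formula, not necessarily the exact calculation used in the study):

    ```python
    def lesions_per_dalton(mn_treated: float, mn_control: float) -> float:
        """Alkali-labile lesion frequency (breaks per dalton of DNA) from
        number-average molecular weights, assuming random scission:
        breaks/Da = 1/Mn_treated - 1/Mn_control."""
        return 1.0 / mn_treated - 1.0 / mn_control
    ```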

  9. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment

    International Nuclear Information System (INIS)

    Wengert, G.J.; Helbich, T.H.; Woitek, R.; Kapetas, P.; Clauser, P.; Baltzer, P.A.; Vogl, W.D.; Weber, M.; Meyer-Baese, A.; Pinker, Katja

    2016-01-01

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. (orig.)

  10. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment

    Energy Technology Data Exchange (ETDEWEB)

    Wengert, G.J.; Helbich, T.H.; Woitek, R.; Kapetas, P.; Clauser, P.; Baltzer, P.A. [Medical University of Vienna/ Vienna General Hospital, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria); Vogl, W.D. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Wien (Austria); Weber, M. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Division of General and Pediatric Radiology, Wien (Austria); Meyer-Baese, A. [State University of Florida, Department of Scientific Computing in Medicine, Tallahassee, FL (United States); Pinker, Katja [Medical University of Vienna/ Vienna General Hospital, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria); State University of Florida, Department of Scientific Computing in Medicine, Tallahassee, FL (United States); Memorial Sloan-Kettering Cancer Center, Department of Radiology, Molecular Imaging and Therapy Services, New York City, NY (United States)

    2016-11-15

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. (orig.)
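
    The agreement statistic used in both copies of this record is Cohen's kappa, which is a one-liner with scikit-learn. For an ordinal scale such as BI-RADS FGT categories a weighted kappa is often preferred, which the sketch exposes as an option (the data layout is an assumption).

    ```python
    from sklearn.metrics import cohen_kappa_score

    def fgt_agreement(reader_a, reader_b, weighted: bool = False) -> float:
        """Inter-observer agreement between two readers' BI-RADS FGT
        categories (e.g. lists of 'a'..'d' for the same exams);
        weights='linear' treats the ordered categories as equally spaced."""
        return cohen_kappa_score(reader_a, reader_b,
                                 weights="linear" if weighted else None)
    ```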

  11. Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery

    Science.gov (United States)

    Woods, B. K.; Wei, L. H.; Connor, T. C.

    2014-12-01

    With the growth of natural hazard data available in near real-time, it is increasingly feasible to deliver estimates of the damage caused by natural disasters. These estimates can be used in a disaster management setting or by commercial entities to optimize the deployment of resources and/or the routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) hazard modules that provide quantitative data layers for each peril; 2) standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks; 3) peril-specific damage functions that compute damage metrics at the atomic geospatial block level; 4) standardized data aggregators, which map damage to user-specific geometries; 5) data dissemination modules, which provide the resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model and dynamically downscales it in conjunction with a land cover database, using a multiprocessing pool and a just-in-time compiler (JIT). The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.
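
    Steps 2-4 of the chain (exposure mapping at block level, a peril-specific damage function, aggregation to user geometries) can be sketched with pandas; the damage curve and all numbers below are purely hypothetical.

    ```python
    import pandas as pd

    def wind_damage_fraction(wind_ms: float) -> float:
        """Illustrative damage function: fraction of exposed value damaged,
        ramping smoothly between 25 and 70 m/s (a hypothetical curve)."""
        return min(max((wind_ms - 25.0) / 45.0, 0.0), 1.0) ** 2

    # Atomic geospatial blocks with a hazard value and an exposed value;
    # 'region' maps each block to a user-specific geometry.
    blocks = pd.DataFrame({
        "region": ["A", "A", "B"],
        "wind_ms": [30.0, 55.0, 20.0],
        "exposure_usd": [1e6, 2e6, 5e5],
    })
    blocks["damage_usd"] = (blocks["wind_ms"].map(wind_damage_fraction)
                            * blocks["exposure_usd"])
    by_region = blocks.groupby("region")["damage_usd"].sum()  # aggregation
    ```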

  12. A Quantitative Property-Property Relationship for Estimating Packaging-Food Partition Coefficients of Organic Compounds

    DEFF Research Database (Denmark)

    Huang, L.; Ernstoff, Alexi; Xu, H.

    2017-01-01

    Organic chemicals encapsulated in beverage and food packaging can migrate to the food and lead to human exposures via ingestion. The packaging-food (Kpf) partition coefficient is a key parameter to estimate the chemical migration from packaging materials. Previous studies have simply set Kpf to 1...... or 1000, or provided separate linear correlations for several discrete values of ethanol equivalencies of food simulants (EtOH-eq). The aim of the present study is to develop a single quantitative property-property relationship (QPPR) valid for different chemical-packaging combinations and for water...... because only two packaging types are included. This preliminary QPPR demonstrates that the Kpf for various chemicalpackaging-food combinations can be estimated by a single linear correlation. Based on more than 1000 collected Kpf in 15 materials, we will present extensive results for other packaging types...

  13. Infrared thermography quantitative image processing

    Science.gov (United States)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique with the ability to provide a map of the temperature distribution over an object's surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system, where the temperature distribution of the body's surface can contribute to the diagnosis and follow-up of certain disorders. Although the thermographic image can give a fairly good visual estimation of distribution homogeneity and of temperature pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterising temperature. Certain approaches use the temperature of enantiomorphic anatomical points, or parameters extracted from a region of interest (ROI). A number of indices have been developed by researchers to that end. In this study a quantitative approach to thermographic image processing is attempted, based on extracting different indices for symmetric ROIs on thermograms of the lower back area of scoliotic patients. The indices are based on first-order statistical parameters describing the temperature distribution. Analysis and comparison of these indices allow the temperature distribution pattern of the back trunk to be evaluated against that expected in subjects who are healthy with regard to spinal problems.

  14. The effects of dominance, regular inbreeding and sampling design on Q(ST), an estimator of population differentiation for quantitative traits.

    Science.gov (United States)

    Goudet, Jérôme; Büchi, Lucie

    2006-02-01

    To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its determinism is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) under homogenizing selection. Here, we examine the effects of dominance, regular inbreeding and sampling design on this comparison. We compare sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene actions.
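
    For reference, the statistic under discussion, in its standard form for an outbred population with additive gene action, is:

    ```python
    def qst(v_between: float, v_within: float) -> float:
        """Q_ST under additive determinism: Q_ST = V_B / (V_B + 2*V_W),
        where V_B is the between-population and V_W the within-population
        additive genetic variance of the trait."""
        return v_between / (v_between + 2.0 * v_within)

    # With dominance or regular inbreeding the additive assumption breaks
    # down, which is exactly the complication the paper investigates.
    ```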

  15. ANALYSIS AND QUANTITATIVE ASSESSMENT FOR RESULTS OF EDUCATIONAL PROGRAMS APPLICATION BY MEANS OF DIAGNOSTIC TESTS

    Directory of Open Access Journals (Sweden)

    E. L. Kon

    2015-07-01

    Subject of Research. The relevance of the problem of creating, controlling and estimating the results of competence-oriented educational programs is formulated and substantiated. Elements and components of competences, assembled into modules, course units and parts of an educational program, are defined as the objects of control. Specific tasks of proficiency examination for competences and their components are stated, and the subject matter of the paper is formulated. Methods of Research. Adapted statements and methods from the technical sciences are applied to the solution of control tasks and to the decoding and estimation of education results. An approach to the quantitative estimation of testing results using an additive integrated differential criterion is offered. Main Results. Statements are formulated and proved that define the conditions for certain and uncertain (indeterminate) decision-making about proficiency in the discipline components controlled by a test, according to the test results. The probabilistic characteristics of both decision-making variants are estimated. The application of deterministic and fuzzy-logic mathematical methods is proposed for decreasing decision-making indeterminacy, and a direction of further research is selected: the development of methods and algorithms for decoding the results of sets of diagnostic tests. Practical Relevance. It is shown that the proposed approach to the quantitative estimation of testing results makes it possible to automate the procedure of forming and analysing education results specified in the competence format.

  16. A revival of the autoregressive distributed lag model in estimating energy demand relationships

    Energy Technology Data Exchange (ETDEWEB)

    Bentzen, J.; Engsted, T.

    1999-07-01

    The findings in the recent energy economics literature that energy economic variables are non-stationary have led to an implicit or explicit dismissal of the standard autoregressive distributed lag (ARDL) model in estimating energy demand relationships. However, Pesaran and Shin (1997) show that the ARDL model remains valid when the underlying variables are non-stationary, provided the variables are co-integrated. In this paper we use the ARDL approach to estimate a demand relationship for Danish residential energy consumption, and the ARDL estimates are compared to the estimates obtained using co-integration techniques and error-correction models (ECMs). It turns out that both quantitatively and qualitatively, the ARDL approach and the co-integration/ECM approach give very similar results. (au)
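
    A sketch of such an ARDL fit using statsmodels (the ARDL class requires statsmodels 0.13 or later); the series names and lag orders are placeholders, not the paper's specification.

    ```python
    from statsmodels.tsa.ardl import ARDL

    def fit_energy_ardl(energy, drivers, p: int = 1, q: int = 1):
        """ARDL(p, q) of residential energy consumption on its drivers
        (e.g. income and price series on a common yearly index)."""
        return ARDL(energy, lags=p, exog=drivers, order=q).fit()

    # res = fit_energy_ardl(energy, drivers); res.summary() gives estimates
    # that can be compared with a co-integration/ECM analysis.
    ```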

  17. Quantifying the Extent of Emphysema: Factors Associated with Radiologists' Estimations and Quantitative Indices of Emphysema Severity Using the ECLIPSE Cohort

    NARCIS (Netherlands)

    Gietema, Hester A.; Mueller, Nestor L.; Fauerbach, Paola V. Nasute; Sharma, Sanjay; Edwards, Lisa D.; Camp, Pat G.; Coxson, Harvey O.

    Rationale and Objectives: This study investigated what factors radiologists take into account when estimating emphysema severity and assessed quantitative computed tomography (CT) measurements of low attenuation areas. Materials and Methods: CT scans and spirometry were obtained on 1519 chronic

  18. [Progress in stable isotope labeled quantitative proteomics methods].

    Science.gov (United States)

    Zhou, Yuan; Shan, Yichu; Zhang, Lihua; Zhang, Yukui

    2013-06-01

    Quantitative proteomics is an important research field in the post-genomics era. There are two strategies for proteome quantification: label-free methods and stable isotope labeling methods, the latter having become the most important strategy for quantitative proteomics at present. In the past few years, a number of quantitative methods have been developed, supporting rapid progress in biological research. In this work, we discuss the progress of stable isotope labeling methods for quantitative proteomics, including relative and absolute quantification, and then give our opinions on the outlook for proteome quantification methods.

  19. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized, and 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)

  1. IMPACT OF THE “GIVING CIGARETTES IS GIVING HARM” CAMPAIGN ON KNOWLEDGE AND ATTITUDES OF CHINESE SMOKERS

    Science.gov (United States)

    Huang, Li-Ling; Thrasher, James F.; Jiang, Yuan; Li, Qiang; Fong, Geoffrey T.; Chang, Yvette; Walsemann, Katrina M.; Friedman, Daniela B.

    2015-01-01

    Objective: To date there is limited published evidence on the efficacy of tobacco control mass media campaigns in China. This study aimed to evaluate the impact of a mass media campaign, “Giving Cigarettes is Giving Harm” (GCGH), on Chinese smokers’ knowledge of smoking-related harms and attitudes toward cigarette gifts. Methods: Population-based, representative data were analyzed from a longitudinal cohort of 3,709 adult smokers who participated in the International Tobacco Control China Survey conducted in six Chinese cities before and after the campaign. Logistic regression models were estimated to examine associations between campaign exposure and attitudes about cigarettes as gifts measured post-campaign. Poisson regression models were estimated to assess the effects of campaign exposure on post-campaign knowledge, adjusting for pre-campaign knowledge. Findings: Fourteen percent (n=335) of participants recalled the campaign within the cities where the GCGH campaign was implemented. Participants in the intervention cities who recalled the campaign were more likely to disagree that cigarettes are good gifts (71% vs. 58%, p < 0.001) and had greater campaign-targeted knowledge than those who did not recall the campaign (Mean = 1.97 vs. 1.62, p < 0.001). Increases in campaign-targeted knowledge were similar in both cities, perhaps due to a secular trend, low campaign recall, or contamination issues. Conclusions: These findings suggest that the GCGH campaign increased knowledge of smoking harms, which could promote downstream cessation. Findings provide evidence to support future campaign development to effectively fight the tobacco epidemic in China. PMID:24813427
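
    A hedged sketch of the modelling strategy the abstract describes, using statsmodels on synthetic data; all variable names and codings are illustrative assumptions, not the survey's actual variables:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 3709
      df = pd.DataFrame({
          "recall": rng.integers(0, 2, n),         # hypothetical campaign-recall indicator
          "pre_know": rng.integers(0, 4, n),       # hypothetical pre-campaign knowledge score
          "disagree_gift": rng.integers(0, 2, n),  # hypothetical post-campaign attitude
          "post_know": rng.poisson(1.5, n),        # hypothetical post-campaign knowledge count
      })

      # attitudes post-campaign: logistic regression on campaign exposure
      logit = sm.GLM(df["disagree_gift"], sm.add_constant(df[["recall"]]),
                     family=sm.families.Binomial()).fit()

      # knowledge post-campaign: Poisson regression adjusting for pre-campaign knowledge
      pois = sm.GLM(df["post_know"], sm.add_constant(df[["recall", "pre_know"]]),
                    family=sm.families.Poisson()).fit()
      print(logit.summary())
      print(pois.summary())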

  2. Unit rupture work as a criterion for quantitative estimation of hardenability in steel

    International Nuclear Information System (INIS)

    Kramarov, M.A.; Orlov, E.D.; Rybakov, A.B.

    1980-01-01

    The resistance to fracture of structural steel is highly sensitive to the degree of hardenability attained during hardening; this sensitivity can be used to obtain a quantitative estimate of the latter. A criterion κ is proposed, namely the ratio of the unit rupture work in the case of the incomplete hardenability under investigation, a_T(ih), to the analogous value obtained in the case of complete hardenability, A_T(ch), at a testing temperature corresponding to the critical temperature T_100M. The high sensitivity of the criterion to the structure of the hardened steel is confirmed by experimental investigation of 40Kh, 38KhNM and 38KhNMFA steels after isothermal holding at different temperatures, corresponding to the formation of various products of austenite decomposition

  3. The giving standard: conditional cooperation in the case of charitable giving

    NARCIS (Netherlands)

    P. Wiepking (Pamala); M. Heijnen (Merijn)

    2011-01-01

    In this study, we make a first attempt to investigate the mechanisms of conditional cooperation in giving outside experiments, using retrospective survey data on charitable giving (the Giving in the Netherlands Panel Study 2005 (GINPS05, 2005; N = 1474)). Our results show that in the case

  4. Quantitative hard x-ray phase contrast imaging of micropipes in SiC

    International Nuclear Information System (INIS)

    Kohn, V. G.; Argunova, T. S.; Je, J. H.

    2013-01-01

    Peculiarities of quantitative hard x-ray phase contrast imaging of micropipes in SiC are discussed. The micropipe is modelled as a hollow cylinder with an elliptical cross section. The major and minor diameters can be recovered by a least-squares fitting procedure that compares the experimental data, i.e. the intensity profile across the micropipe axis, with profiles calculated from phase contrast theory. It is shown that a single projection image does not provide enough information to determine the elliptical cross section completely if the orientation of the micropipe is unknown. Another problem is the limited accuracy in estimating the diameters, partly because of the use of pink synchrotron radiation, which is necessary because a monochromatic beam does not provide sufficient intensity to reveal the weak contrast from a very small object. The general problems of accuracy in estimating the two diameters using the least-squares procedure are discussed. Two experimental examples are considered to demonstrate small as well as modest accuracies in estimating the diameters

  5. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    Science.gov (United States)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

    The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustaceans. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands, and in Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and the Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.
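
    The Chao1 estimator cited above has a simple closed form based on the counts of singleton and doubleton OTUs; a minimal sketch (bias-corrected variant) with a hypothetical abundance vector:

      import numpy as np

      def chao1(abundances):
          """Bias-corrected Chao1 richness estimate from OTU abundance counts."""
          a = np.asarray(abundances)
          s_obs = np.count_nonzero(a)   # observed OTUs
          f1 = np.sum(a == 1)           # singletons
          f2 = np.sum(a == 2)           # doubletons
          return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

      # hypothetical abundance vector: many rare OTUs, a few common ones
      counts = [1] * 60 + [2] * 20 + [5, 8, 13, 40]
      print(chao1(counts))   # estimate well above the 84 observed OTUs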

  6. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Science.gov (United States)

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  7. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Directory of Open Access Journals (Sweden)

    Noah Zaitlen

    2013-05-01

    Full Text Available Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.
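
    The variance-components model underlying such kinship-based estimates can be stated compactly (standard quantitative-genetics notation, not reproduced from the paper): the phenotypic covariance between individuals i and j is modelled through their kinship coefficient, and narrow-sense heritability is the additive fraction of phenotypic variance,

      \[
      \operatorname{Cov}(y_i, y_j) = 2\Phi_{ij}\,\sigma_A^2 + \delta_{ij}\,\sigma_E^2,
      \qquad
      h^2 = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_E^2},
      \]

      where \(\Phi_{ij}\) is the kinship coefficient and \(\delta_{ij}\) the Kronecker delta; shared environment and the SNP-explained component enter as additional variance terms with their own relationship matrices.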

  8. Myocardial blood flow estimates from dynamic contrast-enhanced magnetic resonance imaging: three quantitative methods

    Science.gov (United States)

    Borrazzo, Cristian; Galea, Nicola; Pacilio, Massimiliano; Altabella, Luisa; Preziosi, Enrico; Carnì, Marco; Ciolina, Federica; Vullo, Francesco; Francone, Marco; Catalano, Carlo; Carbone, Iacopo

    2018-02-01

    Dynamic contrast-enhanced cardiovascular magnetic resonance imaging can be used to quantitatively assess the myocardial blood flow (MBF), recovering the tissue impulse response function for the transit of a gadolinium bolus through the myocardium. Several deconvolution techniques are available, using various models for the impulse response. The method of choice may influence the results, producing differences that have not been deeply investigated yet. Three methods for quantifying myocardial perfusion have been compared: Fermi function modelling (FFM), the Tofts model (TM) and the gamma function model (GF), the latter traditionally used in brain perfusion MRI. Thirty human subjects were studied at rest as well as under cold pressor test stress (submerging hands in ice-cold water), and a single gadolinium bolus of 0.1 ± 0.05 mmol kg⁻¹ was injected. Perfusion estimate differences between the methods were analysed by paired comparisons with Student's t-test, linear regression analysis and Bland-Altman plots, as well as by two-way ANOVA, considering the MBF values of all patients grouped according to two categories: calculation method and rest/stress conditions. Perfusion estimates obtained by the various methods in both rest and stress conditions were not significantly different, and were in good agreement with the literature. The results obtained during the first-pass transit time (20 s) yielded p-values in the range 0.20-0.28 for Student's t-test, linear regression slopes between 0.98 and 1.03, and R values between 0.92 and 1.01. From the Bland-Altman plots, the paired comparisons yielded a bias and 95% CI (expressed in ml/min/g) for FFM versus TM of -0.01 (-0.20, 0.17) at rest or 0.02 (-0.49, 0.52) under stress, for FFM versus GF of -0.05 (-0.29, 0.20) at rest or -0.07 (-0.55, 0.41) under stress, and for TM versus GF of -0.03 (-0.30, 0.24) at rest or -0.09 (-0.43, 0.26) under stress. With the
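
    A sketch of one of the three compared approaches, Fermi-function deconvolution, on synthetic curves: the tissue curve is modelled as the arterial input convolved with a Fermi impulse response whose amplitude at t = 0 estimates flow. All signal shapes and parameter values are illustrative assumptions:

      import numpy as np
      from scipy.optimize import curve_fit

      dt = 0.5                                   # s, sampling interval
      t = np.arange(0, 60, dt)
      aif = 5.0 * np.exp(-(t - 8.0) ** 2 / 8.0)  # hypothetical arterial input function

      def fermi(t, F, tau, k):
          # Fermi impulse response; R(0) ~ F approximates flow after scaling
          return F / (np.exp((t - tau) / k) + 1.0)

      def tissue_model(t, F, tau, k):
          # tissue curve = AIF convolved with the impulse response
          return np.convolve(aif, fermi(t, F, tau, k))[: t.size] * dt

      truth = (1.2, 4.0, 3.0)
      tissue = tissue_model(t, *truth) + np.random.default_rng(2).normal(0, 0.02, t.size)

      (F, tau, k), _ = curve_fit(tissue_model, t, tissue, p0=(1.0, 2.0, 2.0))
      print("recovered flow-like amplitude:", F)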

  9. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent K-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
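
    At its core, image-domain material decomposition is a small per-voxel linear inversion: the measured attenuation in each energy bin is modelled as a weighted sum of basis-material attenuations, and the calibrated basis matrix is inverted for the concentrations. A minimal unregularized sketch (the study itself uses a maximum a posteriori estimator; plain least squares is shown for brevity, with hypothetical numbers):

      import numpy as np

      # hypothetical calibrated basis matrix: rows = 5 energy bins, columns =
      # attenuation per unit concentration of (gadolinium, calcium, water)
      M = np.array([[0.90, 0.50, 0.20],
                    [0.75, 0.45, 0.19],
                    [1.40, 0.40, 0.18],   # bin just above the Gd K-edge
                    [1.10, 0.35, 0.17],
                    [0.80, 0.30, 0.16]])

      true_conc = np.array([30.0, 5.0, 1.0])   # hypothetical voxel composition
      y = M @ true_conc + np.random.default_rng(3).normal(0, 0.05, 5)

      conc, *_ = np.linalg.lstsq(M, y, rcond=None)
      print("estimated (Gd, Ca, water):", conc)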

  10. Quantitative Characterisation of Surface Texture

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Lonardo, P.M.; Trumpold, H.

    2000-01-01

    This paper reviews the different methods used to give a quantitative characterisation of surface texture. The paper contains a review of conventional 2D as well as 3D roughness parameters, with particular emphasis on recent international standards and developments. It presents new texture...

  11. Estimating skin sensitization potency from a single dose LLNA.

    Science.gov (United States)

    Roberts, David W

    2015-04-01

    Skin sensitization is an important aspect of safety assessment. The mouse local lymph node assay (LLNA) developed in the 1990s is an in vivo test used for skin sensitization hazard identification and characterization. More recently a reduced version of the LLNA (rLLNA) has been developed as a means of identifying, but not quantifying, sensitization hazard. The work presented here is aimed at enabling rLLNA data to be used to give quantitative potency information that can be used, inter alia, in modeling and read-across approaches to non-animal based potency estimation. A probit function has been derived enabling estimation of EC3 from a single dose. This has led to development of a modified version of the rLLNA, whereby as a general principle the SI value at 10%, or at a lower concentration if 10% is not testable, is used to calculate the EC3. This version of the rLLNA has been evaluated against a selection of chemicals for which full LLNA data are available, and has been shown to give EC3 values in good agreement with those derived from the full LLNA. Copyright © 2015 Elsevier Inc. All rights reserved.
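
    The published probit function is not reproduced here; as a toy illustration of the idea of extrapolating EC3 (the concentration giving a stimulation index of 3) from a single measured dose, the sketch below assumes a log-linear dose-response with a fixed slope, which is a stand-in for the paper's derived function:

      import numpy as np

      def ec3_from_single_dose(conc_pct, si, slope=2.0):
          """Toy single-dose EC3 estimate.

          Assumes the stimulation index rises linearly with log10(concentration)
          at an assumed fixed slope (SI units per decade); this stands in for
          the published probit function, which is not reproduced here.
          """
          return conc_pct * 10 ** ((3.0 - si) / slope)

      # hypothetical reading: SI = 5.1 measured at the 10% test concentration
      print(ec3_from_single_dose(10.0, 5.1))   # EC3 estimate below 10%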

  12. Linear solvation energy relationships: "rule of thumb" for estimation of variable values

    Science.gov (United States)

    Hickey, James P.; Passino-Reader, Dora R.

    1991-01-01

    For the linear solvation energy relationship (LSER), values are listed for each of the variables (V_i/100, π*, β_m, α_m) for fundamental organic structures and functional groups. We give guidelines for quickly estimating LSER variable values for a vast array of possible organic compounds such as those found in the environment. The difficulty of generating these variables has greatly discouraged the application of this quantitative structure-activity relationship (QSAR) method. This paper presents the first compilation of molecular functional group values together with a utilitarian set of LSER variable estimation rules. The availability of these variable values and rules should facilitate widespread application of LSER for hazard evaluation of environmental contaminants.
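
    For reference, the canonical LSER regression to which these variables belong (the standard Kamlet-Taft formulation, stated here from general knowledge rather than quoted from the paper) expresses a solute property SP as

      \[
      \log SP = c + m\,\frac{V_i}{100} + s\,\pi^* + b\,\beta_m + a\,\alpha_m,
      \]

      where \(V_i\) is the intrinsic molar volume (the cavity term), \(\pi^*\) the dipolarity/polarizability, \(\beta_m\) and \(\alpha_m\) the hydrogen-bond acceptor basicity and donor acidity, and \(c, m, s, b, a\) are coefficients fitted for each property.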

  13. Performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Roy, M.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.

    2013-01-01

    Highlights: ► Rapid analysis of heavy water samples, with precise temperature control. ► Entire composition range covered. ► Variations in both mole% and wt.% of D₂O in the heavy water sample studied. ► Standard errors of calibration and prediction were estimated. - Abstract: The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of heavy water (D₂O) in a simulated water sample. The feasibility of refractometry as an analytical technique for rapid and non-invasive determination of D₂O concentration in water samples has been amply demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuations on refractive index measurement. The method exhibits a reasonable analytical response over the purity range of 0–100% D₂O. An accuracy of better than ±1% in the measurement of the isotopic purity of heavy water could be achieved over the entire range

  14. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...

  15. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K

    2001-06-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO₂-UO₂) fuel pellets of varying composition. Four gamma energies were selected, two each from the uranium and thorium series, and the time of counting has been optimised. This technique can be used for rapid estimation of the U/Th percentage in a large number of mixed fuel pellets from a production campaign.

  16. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K.

    2001-01-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO₂-UO₂) fuel pellets of varying composition. Four gamma energies were selected, two each from the uranium and thorium series, and the time of counting has been optimised. This technique can be used for rapid estimation of the U/Th percentage in a large number of mixed fuel pellets from a production campaign

  17. Analytical performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.; Roy, M.; Tyagi, A.K.

    2011-01-01

    The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of D₂O (heavy water) in a simulated water sample. The viability of refractometry as an analytical technique for rapid and non-invasive determination of D₂O concentration in water samples has been demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuations on refractive index measurement. Calibration with this technique exhibited a reasonable analytical response over a wide range (1-100%) of D₂O concentration. (author)
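
    A minimal sketch of the calibration step implied above: fit a straight line of refractive index against D₂O fraction from standards, then invert it for an unknown sample. All numbers are hypothetical, though the index difference between H₂O and D₂O is indeed only of order 10⁻³ over the full range, which is why precise temperature control matters:

      import numpy as np

      # hypothetical calibration standards: D2O volume % vs measured refractive index
      frac = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
      n_meas = 1.3330 - 4.7e-5 * frac + np.random.default_rng(4).normal(0, 2e-6, 6)

      slope, intercept = np.polyfit(frac, n_meas, 1)

      def d2o_percent(n_sample):
          # invert the linear calibration
          return (n_sample - intercept) / slope

      print(d2o_percent(1.3306))   # estimated D2O % for an unknown sample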

  18. SU-F-I-33: Estimating Radiation Dose in Abdominal Fat Quantitative CT

    Energy Technology Data Exchange (ETDEWEB)

    Li, X; Yang, K; Liu, B [Massachusetts General Hospital, Boston, MA (United States)

    2016-06-15

    Purpose: To compare the size-specific dose estimate (SSDE) in abdominal fat quantitative CT with another dose estimate, D_size,L, that also takes scan length into account. Methods: This study complied with the requirements of the Health Insurance Portability and Accountability Act. At our institution, abdominal fat CT is performed with scan length = 1 cm and CTDI_vol = 4.66 mGy (referenced to the body CTDI phantom). A previously developed CT simulation program was used to simulate single-rotation axial scans of 6-55 cm diameter water cylinders, and the dose integral of the longitudinal dose profile over the central 1 cm length was used to predict the dose at the center of the one-cm scan range. SSDE and D_size,L were assessed for 182 consecutive abdominal fat CT examinations with mean water-equivalent diameter (WED) of 27.8 cm ± 6.0 (range, 17.9 - 42.2 cm). Patient age ranged from 18 to 75 years, and weight ranged from 39 to 163 kg. Results: Mean SSDE was 6.37 mGy ± 1.33 (range, 3.67 - 8.95 mGy); mean D_size,L was 2.99 mGy ± 0.85 (range, 1.48 - 4.88 mGy); and the mean D_size,L/SSDE ratio was 0.46 ± 0.04 (range, 0.40 - 0.55). Conclusion: The conversion factors for the size-specific dose estimate in AAPM Report No. 204 were generated using 15 - 30 cm scan lengths. One needs to be cautious in applying SSDE to short CT scans. For abdominal fat CT, SSDE was 80-150% higher than the dose for a 1 cm scan length.
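
    The SSDE arithmetic referred to above is a one-line conversion: CTDI_vol multiplied by an exponential function of water-equivalent diameter. The sketch below uses the constants commonly quoted from AAPM Report 204 for the 32 cm body phantom; treat them as assumptions and verify against the report before any use:

      import numpy as np

      def ssde_mGy(ctdi_vol_mGy, wed_cm):
          """SSDE via an exponential conversion factor f = a * exp(-b * WED).

          Constants as commonly quoted from AAPM Report 204 for the 32 cm
          body phantom; verify against the report before relying on them.
          """
          a, b = 3.704369, 0.03671937
          return ctdi_vol_mGy * a * np.exp(-b * wed_cm)

      # the study's mean WED and CTDIvol give ~6.2 mGy, near the reported mean SSDE of 6.37 mGy
      print(ssde_mGy(4.66, 27.8))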

  19. The APEX Quantitative Proteomics Tool: Generating protein quantitation estimates from LC-MS/MS proteomics results

    Directory of Open Access Journals (Sweden)

    Saeed Alexander I

    2008-12-01

    Full Text Available Abstract Background Mass spectrometry (MS) based label-free protein quantitation has mainly focused on analysis of ion peak heights and peptide spectral counts. Most analyses of tandem mass spectrometry (MS/MS) data begin with an enzymatic digestion of a complex protein mixture to generate smaller peptides that can be separated and identified by an MS/MS instrument. Peptide spectral counting techniques attempt to quantify protein abundance by counting the number of detected tryptic peptides and their corresponding MS spectra. However, spectral counting is confounded by the fact that peptide physicochemical properties severely affect MS detection, resulting in each peptide having a different detection probability. Lu et al. (2007) described a modified spectral counting technique, Absolute Protein Expression (APEX), which improves on basic spectral counting methods by including a correction factor for each protein (called the Oi value) that accounts for variable peptide detection by MS techniques. The technique uses machine learning classification to derive peptide detection probabilities that are used to predict the number of tryptic peptides expected to be detected for one molecule of a particular protein (Oi). This predicted spectral count is compared to the protein's observed MS total spectral count during APEX computation of protein abundances. Results The APEX Quantitative Proteomics Tool, introduced here, is a free open source Java application that supports the APEX protein quantitation technique. The APEX tool uses data from standard tandem mass spectrometry proteomics experiments and provides computational support for APEX protein abundance quantitation through a set of graphical user interfaces that partition the parameter controls for the various processing tasks. The tool also provides a Z-score analysis for identification of significant differential protein expression, a utility to assess APEX classifier performance via cross validation, and a
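
    The abundance computation that the tool supports reduces to a normalized, Oi-corrected spectral count; a minimal sketch following the formula of Lu et al. (2007), with hypothetical counts and Oi values:

      import numpy as np

      def apex_abundances(spectral_counts, oi, c_total=1e6):
          """APEX abundance: observed counts corrected by expected detectability
          (Oi), normalized across proteins and scaled to c_total molecules."""
          corrected = np.asarray(spectral_counts, float) / np.asarray(oi, float)
          return c_total * corrected / corrected.sum()

      # hypothetical data for three proteins
      print(apex_abundances([120, 40, 40], [6.0, 1.0, 4.0]))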

  20. PCA-based groupwise image registration for quantitative MRI

    NARCIS (Netherlands)

    Huizinga, W.; Poot, D. H. J.; Guyader, J.-M.; Klaassen, R.; Coolen, B. F.; van Kranenburg, M.; van Geuns, R. J. M.; Uitterdijk, A.; Polfliet, M.; Vandemeulebroucke, J.; Leemans, A.; Niessen, W. J.; Klein, S.

    2016-01-01

    Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different

  1. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    Science.gov (United States)

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on an FPGA, to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain an estimate three times more accurate than those obtained from the current and vibration signals used individually.

  2. Quantitative estimation of hemorrhage in chronic subdural hematoma using the 51Cr erythrocyte labeling method

    International Nuclear Information System (INIS)

    Ito, H.; Yamamoto, S.; Saito, K.; Ikeda, K.; Hisada, K.

    1987-01-01

    Red cell survival studies using an infusion of chromium-51-labeled erythrocytes were performed to quantitatively estimate hemorrhage in the chronic subdural hematoma cavity of 50 patients. The amount of hemorrhage was determined during craniotomy. Between 6 and 24 hours after infusion of the labeled red cells, hemorrhage accounted for a mean of 6.7% of the hematoma content, indicating continuous or intermittent hemorrhage into the cavity. The clinical state of the patients and the density of the chronic subdural hematoma on computerized tomography scans were related to the amount of hemorrhage. Chronic subdural hematomas with a greater amount of hemorrhage frequently consisted of clots rather than fluid

  3. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities

    Science.gov (United States)

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7–30.6), 0.6 (0.3–0.9), and 4.7 (2.1–7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  4. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    International Nuclear Information System (INIS)

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-01-01

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods, both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of ¹¹¹In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, the authors created a 3D NURBS-based cardiac-torso (NCAT) phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of the
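
    Once organ activities have been estimated at each time point, the residence time is essentially the integral of the time-activity curve divided by the administered activity. A minimal sketch with a hypothetical five-point series and a single-exponential tail extrapolation (a common, but here assumed, choice):

      import numpy as np

      t_h = np.array([1.0, 24.0, 72.0, 120.0, 168.0])     # h, imaging time points
      a_mbq = np.array([180.0, 150.0, 95.0, 60.0, 38.0])  # MBq, organ activity estimates
      a0_mbq = 1100.0                                     # MBq, administered activity

      # trapezoidal integral over the measured portion of the time-activity curve
      auc = np.sum(0.5 * (a_mbq[1:] + a_mbq[:-1]) * np.diff(t_h))

      # extrapolate beyond the last scan assuming single-exponential decay,
      # with the rate taken from the last two points
      lam = np.log(a_mbq[-2] / a_mbq[-1]) / (t_h[-1] - t_h[-2])
      auc += a_mbq[-1] / lam

      print("residence time (h):", auc / a0_mbq)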

  5. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    Science.gov (United States)

    Eto, Shuzo; Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi; Tanaka, Masayoshi Y.

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm² within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously.
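
    Estimating the apparent diffusion coefficient as described typically assumes the erfc solution of Fick's second law for a constant surface concentration; a sketch that fits this solution to a hypothetical depth profile of relative chlorine intensity:

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      t = 2.0 * 365.0 * 24.0 * 3600.0     # s, assumed two-year exposure time

      def chloride_profile(x_mm, c_s, d_app):
          # C(x, t) = Cs * erfc(x / (2 sqrt(D t))), with x in mm and D in mm^2/s
          return c_s * erfc(x_mm / (2.0 * np.sqrt(d_app * t)))

      x = np.linspace(0.0, 40.0, 21)      # mm, depth below the exposed surface
      y = (chloride_profile(x, 1.0, 5e-7)
           + np.random.default_rng(5).normal(0, 0.02, x.size))  # synthetic intensities

      (c_s, d_app), _ = curve_fit(chloride_profile, x, y, p0=(1.0, 1e-7))
      print("apparent diffusion coefficient (mm^2/s):", d_app)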

  6. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    Energy Technology Data Exchange (ETDEWEB)

    Stenz, Ronald; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kuligowski, Robert J.

    2016-02-01

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Service Radar-1988 Doppler (WSR-88D) volume scans as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain-cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.

  7. Quantitative analysis of fission products by γ spectrography

    International Nuclear Information System (INIS)

    Malet, G.

    1962-01-01

    The activity of the fission products present in treated solutions of irradiated fuels is given as a function of the cooling time and of the irradiation time. The variation of the ratio (¹⁴⁴Ce + ¹⁴⁴Pr activity)/(¹³⁷Cs activity) as a function of these same parameters is also given. From these results a method is deduced giving the 'age' of the solution analyzed. By γ-scintillation spectrography it was possible to estimate the following elements individually: ¹⁴¹Ce, ¹⁴⁴Ce + ¹⁴⁴Pr, ¹⁰³Ru, ¹⁰⁶Ru + ¹⁰⁶Rh, ¹³⁷Cs, ⁹⁵Zr + ⁹⁵Nb. Yield curves are given for the case of a single emitter. Of the various existing methods, that of least squares was used for the quantitative analysis of the afore-mentioned fission products. The accuracy attained varies from 3 to 10%. (author) [fr]

  8. The effect of volume-of-interest misregistration on quantitative planar activity and dose estimation

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B

    2010-01-01

    In targeted radionuclide therapy (TRT), dose estimation is essential for treatment planning and tumor dose response studies. Dose estimates are typically based on a time series of whole-body conjugate view planar or SPECT scans of the patient acquired after administration of a planning dose. Quantifying the activity in the organs from these studies is an essential part of dose estimation. The quantitative planar (QPlanar) processing method involves accurate compensation for image degrading factors and correction for organ and background overlap via the combination of computational models of the image formation process and 3D volumes of interest defining the organs to be quantified. When the organ VOIs are accurately defined, the method intrinsically compensates for attenuation, scatter and partial volume effects, as well as overlap with other organs and the background. However, alignment between the 3D organ volumes of interest (VOIs) used in QPlanar processing and the true organ projections in the planar images is required. The aim of this research was to study the effects of VOI misregistration on the accuracy and precision of organ activity estimates obtained using the QPlanar method. In this work, we modeled the degree of residual misregistration that would be expected after an automated registration procedure by randomly misaligning 3D SPECT/CT images, from which the VOI information was derived, and planar images. Mutual information-based image registration was used to align the realistically simulated 3D SPECT images with the 2D planar images. The residual image misregistration was used to simulate realistic levels of misregistration and allow investigation of the effects of misregistration on the accuracy and precision of the QPlanar method. We observed that accurate registration is especially important for small organs or ones with low activity concentrations compared to neighboring organs. In addition, residual misregistration gave rise to a loss of precision

  9. A uniform quantitative stiff stability estimate for BDF schemes

    Directory of Open Access Journals (Sweden)

    Winfried Auzinger

    2006-01-01

    Full Text Available The concepts of stability regions, A- and A(α)-stability - albeit based on scalar models - turned out to be essential for the identification of implicit methods suitable for the integration of stiff ODEs. However, for multistep methods, knowledge of the stability region provides no information on the quantitative stability behavior of the scheme. In this paper we fill this gap for the important class of Backward Differentiation Formulas (BDF). Quantitative stability bounds are derived which are uniformly valid in the stability region of the method. Our analysis is based on a study of the separation of the characteristic roots and a special similarity decomposition of the associated companion matrix.
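
    For concreteness, the BDF schemes referred to here discretize \(\dot{y} = f(t, y)\) by differentiating a polynomial through past values; the two-step member BDF2, in its standard textbook form (not quoted from the paper), reads

      \[
      \tfrac{3}{2}\,y_{n+1} - 2\,y_n + \tfrac{1}{2}\,y_{n-1} = h\,f(t_{n+1},\, y_{n+1}).
      \]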

  10. Investigation of Weather Radar Quantitative Precipitation Estimation Methodologies in Complex Orography

    Directory of Open Access Journals (Sweden)

    Mario Montopoli

    2017-02-01

    Full Text Available Near surface quantitative precipitation estimation (QPE from weather radar measurements is an important task for feeding hydrological models, limiting the impact of severe rain events at the ground as well as aiding validation studies of satellite-based rain products. To date, several works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurement of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization radar variables not only to ensure a good level of data quality but also as a direct input to rain estimation equations. One of the most important limiting factors in radar QPE accuracy is the vertical variability of particle size distribution, which affects all the acquired radar variables as well as estimated rain rates at different levels. This is particularly impactful in mountainous areas, where the sampled altitudes are likely several hundred meters above the surface. In this work, we analyze the impact of the vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to the extrapolation strategies that use the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered. In that case, all the radar variables used in the rain estimation process should be consistently extrapolated at the surface to try and maintain the correlations among them. To avoid facing such a complexity, especially with a view to operational implementation, we propose looking at the features of the vertical profile of rain (VPR, i.e., after performing the rain estimation. This procedure allows characterization of a single variable (i.e., rain when dealing with

  11. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

    Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)

  12. Novel whole brain segmentation and volume estimation using quantitative MRI

    International Nuclear Information System (INIS)

    West, J.; Warntjes, J.B.M.; Lundberg, P.

    2012-01-01

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R₁, the transverse relaxation rate R₂ and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R₁, R₂ and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R₁-R₂-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. (orig.)

  13. Novel whole brain segmentation and volume estimation using quantitative MRI

    Energy Technology Data Exchange (ETDEWEB)

    West, J. [Linkoeping University, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Warntjes, J.B.M. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Linkoeping University and Department of Clinical Physiology UHL, County Council of Oestergoetland, Clinical Physiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Lundberg, P. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); Linkoeping University and Department of Radiation Physics UHL, County Council of Oestergoetland, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University and Department of Radiology UHL, County Council of Oestergoetland, Radiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden)

    2012-05-15

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R₁, the transverse relaxation rate R₂ and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R₁, R₂ and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R₁-R₂-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. (orig.)
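
    A sketch of the lookup idea common to both versions of this record: tabulate (R₁, R₂, PD) for known tissue mixtures, then assign partial volumes to a measured voxel by nearest neighbour in that space. The pure-tissue values below are rough literature-style assumptions, and the linear mixing used here is a simplification of the Bloch-simulation grid the authors describe:

      import numpy as np
      from scipy.spatial import cKDTree

      # assumed pure-tissue (R1 [1/s], R2 [1/s], PD [%]) values
      names = ["WM", "GM", "CSF"]
      pure = np.array([[1.70, 13.0, 64.0],
                       [1.00, 11.0, 86.0],
                       [0.24, 1.0, 100.0]])

      # lookup grid of tissue fractions summing to one (21 steps per axis)
      fracs = np.array([(a, b, 1.0 - a - b)
                        for a in np.linspace(0, 1, 21)
                        for b in np.linspace(0, 1, 21) if a + b <= 1.0])
      grid = fracs @ pure                    # linear mixing assumption

      scale = grid.std(axis=0)               # normalize axes before distance lookup
      tree = cKDTree(grid / scale)

      voxel = np.array([1.35, 12.0, 75.0])   # measured (R1, R2, PD) for one voxel
      _, idx = tree.query(voxel / scale)
      print(dict(zip(names, np.round(fracs[idx], 2))))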

  14. Improving Radar Quantitative Precipitation Estimation over Complex Terrain in the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chen, H.; Chandrasekar, V.

    2017-12-01

    A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecasting (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both the space and time domains. In addition, polarization diversity shows great potential to characterize precipitation microphysics through identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main considerations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockages. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-filling X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison between the dual-polarization radar-derived rainfall products and various operational products including the NSSL's Multi-Radar/Multi-Sensor (MRMS) system. Quantitative evaluation of various rainfall products is achieved

  15. Parameter estimation using the genetic algorithm and its impact on quantitative precipitation forecast

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2006-12-01

    Full Text Available In this study, optimal parameter estimation is performed for both physical and computational parameters in a mesoscale meteorological model, and their impacts on quantitative precipitation forecasting (QPF) are assessed for a heavy rainfall case that occurred on the Korean Peninsula in June 2005. Experiments are carried out using the PSU/NCAR MM5 model and the genetic algorithm (GA) for two parameters: the reduction rate of the convective available potential energy in the Kain-Fritsch (KF) scheme for cumulus parameterization, and the Asselin filter parameter for numerical stability. The fitness function is defined based on a QPF skill score. It turns out that each optimized parameter significantly improves the QPF skill. Such improvement is maximized when the two optimized parameters are used simultaneously. Our results indicate that optimization of computational parameters as well as physical parameters, and their adequate application, is essential in improving model performance.
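
    A minimal GA loop of the kind described, optimizing two bounded parameters against a fitness function; the quadratic surrogate below stands in for the real fitness, which would require running the forecast model and scoring its QPF:

      import numpy as np

      rng = np.random.default_rng(6)
      lo = np.array([0.0, 0.0])   # assumed lower bounds: CAPE reduction rate, Asselin parameter
      hi = np.array([1.0, 0.5])   # assumed upper bounds

      def fitness(p):
          # surrogate skill score peaking at (0.45, 0.15); the real fitness would
          # run the forecast model and score its precipitation forecast
          return -((p[0] - 0.45) ** 2 + 4.0 * (p[1] - 0.15) ** 2)

      pop = rng.uniform(lo, hi, size=(30, 2))
      for gen in range(40):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)[-10:]]            # truncation selection
          pairs = rng.integers(0, 10, size=(30, 2))
          alpha = rng.uniform(size=(30, 1))
          children = (alpha * parents[pairs[:, 0]]
                      + (1 - alpha) * parents[pairs[:, 1]])  # blend crossover
          children += rng.normal(0.0, 0.02, children.shape)  # Gaussian mutation
          pop = np.clip(children, lo, hi)

      print("optimized parameters:", pop[np.argmax([fitness(p) for p in pop])])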

  16. Evaluation of the activity inventory of irradiated nuclear fuel and its radioactive waste

    International Nuclear Information System (INIS)

    Rodriguez Gual, Maritza

    1998-01-01

    This work aims to give a quantitative evaluation of the activity inventory of nuclear fuel with 3.6% enrichment at a burnup of 33,000 MWd/tU, as proposed for the Juragua Nuclear Power Plant. The ORIGEN2 calculation code is used. The results obtained are presented and compared with other calculations carried out for VVER-440 type reactors

  17. Systematic feasibility analysis of a quantitative elasticity estimation for breast anatomy using supine/prone patient postures.

    Science.gov (United States)

    Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P

    2016-03-01

    Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast-simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. The authors
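
    A compact sketch of the model-guided inverse loop described above: propose an elasticity map, run the forward deformation model, compare with the observed deformation, and accept or reject by fast simulated annealing. The forward model here is a deliberately trivial placeholder for the GPU linear-elastic solver:

      import numpy as np

      rng = np.random.default_rng(7)
      n_regions = 8                        # coarse elasticity regions

      def forward_displacement(E_kpa):
          # placeholder for the GPU linear-elastic solver: softer regions deform more
          return 10.0 / E_kpa

      E_true = rng.uniform(1.0, 20.0, n_regions)
      observed = forward_displacement(E_true)   # stands in for the measured deformation

      E = np.full(n_regions, 10.0)              # initial guess, kPa
      err = np.linalg.norm(forward_displacement(E) - observed)
      for k in range(5000):
          T = 5.0 / (1.0 + k)                   # fast-annealing temperature schedule
          cand = np.clip(E + rng.normal(0.0, 0.5, n_regions), 0.5, 30.0)
          cand_err = np.linalg.norm(forward_displacement(cand) - observed)
          if cand_err < err or rng.random() < np.exp(-(cand_err - err) / T):
              E, err = cand, cand_err

      print("max displacement error:", np.abs(forward_displacement(E) - observed).max())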

  18. Give Me Strength.

    Institute of Scientific and Technical Information of China (English)

    维拉

    1996-01-01

    Mort had an absolutely terrible day at the office. Everything that could go wrong did go wrong. As he walked home he could be heard muttering strange words to himself: “Oh, give me strength, give me strength.” Mort isn’t asking for the kind of strength that builds strong muscles; he’s asking for the courage or ability to

  19. Optimisation of information influences on problems of consequences of Chernobyl accident and quantitative criteria for estimation of information actions

    International Nuclear Information System (INIS)

    Sobaleu, A.

    2004-01-01

    The consequences of the Chernobyl NPP accident remain very important for Belarus. About 2 million Belarusians live in districts polluted by Chernobyl radionuclides. Modern approaches to the post-Chernobyl problems in Belarus assume more active use of information and educational actions to build a new radiological culture. This would make it possible to reduce internal radiation doses without spending large amounts of money and other resources. Experience of information work with the population affected by Chernobyl from 1986 to 2004 has shown that information and educational influences do not always reach their final aim: application of the received knowledge on radiation safety in practice and a change in lifestyle. Taking limited funds and facilities into account, information work should be optimized. The optimization can be achieved on the basis of quantitative estimates of the effectiveness of information actions. Two parameters can be used for these quantitative estimates: 1) the increase in the knowledge of the population and experts on radiation safety, calculated by a new method based on applied information theory (the Mathematical Theory of Communication of Claude E. Shannon), and 2) the reduction of the internal radiation dose, calculated on the basis of measurements with a human irradiation counter (HIC) before and after an information or educational influence. (author)

  20. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed quantify how perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients, and they quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of a nuclear power plant for demonstration. (Author)

  1. Adaptive estimation of a time-varying phase with coherent states: Smoothing can give an unbounded improvement over filtering

    Science.gov (United States)

    Laverick, Kiarn T.; Wiseman, Howard M.; Dinani, Hossein T.; Berry, Dominic W.

    2018-04-01

    The problem of measuring a time-varying phase, even when the statistics of the variation is known, is considerably harder than that of measuring a constant phase. In particular, the usual bounds on accuracy, such as the 1/(4n̄) standard quantum limit with coherent states, do not apply. Here, by restricting to coherent states, we are able to analytically obtain the achievable accuracy, the equivalent of the standard quantum limit, for a wide class of phase variation. In particular, we consider the case where the phase has Gaussian statistics and a power-law spectrum equal to κ^(p-1)/|ω|^p for large ω, for some p > 1. For coherent states with mean photon flux N, we give the quantum Cramér-Rao bound on the mean-square phase error as [p sin(π/p)]^(-1) (4N/κ)^(-(p-1)/p). Next, we consider whether the bound can be achieved by an adaptive homodyne measurement in the limit N/κ ≫ 1, which allows the photocurrent to be linearized. Applying the optimal filtering for the resultant linear Gaussian system, we find the same scaling with N, but with a prefactor larger by a factor of p. By contrast, if we employ optimal smoothing we can exactly obtain the quantum Cramér-Rao bound. That is, contrary to previously considered (p = 2) cases of phase estimation, here the improvement offered by smoothing over filtering is not limited to a factor of 2 but rather is a factor of p, which can be unbounded. We also study numerically the performance of these estimators for an adaptive measurement in the limit where N/κ is not large and find a more complicated picture.
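
    The bound quoted above is easy to evaluate numerically. A minimal sketch (the values of N and κ are arbitrary examples):

        import numpy as np

        def qcrb_mse(N, kappa, p):
            # Quantum Cramer-Rao bound on the mean-square phase error,
            # [p*sin(pi/p)]**(-1) * (4*N/kappa)**(-(p-1)/p), as stated above.
            return (4.0 * N / kappa) ** (-(p - 1.0) / p) / (p * np.sin(np.pi / p))

        for p in (2.0, 3.0, 4.0):
            # Optimal filtering sits a factor of p above this bound;
            # optimal smoothing attains it exactly.
            print(p, qcrb_mse(N=1e6, kappa=1.0, p=p))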

  2. Distribution and Quantitative Estimates of Variant Creutzfeldt-Jakob Disease Prions in Tissues of Clinical and Asymptomatic Patients.

    Science.gov (United States)

    Douet, Jean Y; Lacroux, Caroline; Aron, Naima; Head, Mark W; Lugan, Séverine; Tillier, Cécile; Huor, Alvina; Cassard, Hervé; Arnold, Mark; Beringue, Vincent; Ironside, James W; Andréoletti, Olivier

    2017-06-01

    In the United Kingdom, ≈1 of 2,000 persons could be infected with variant Creutzfeldt-Jakob disease (vCJD). Therefore, risk of transmission of vCJD by medical procedures remains a major concern for public health authorities. In this study, we used in vitro amplification of prions by protein misfolding cyclic amplification (PMCA) to estimate the distribution and level of the vCJD agent in 21 tissues from 4 patients who died of clinical vCJD and from 1 asymptomatic person with vCJD. PMCA identified major levels of vCJD prions in a range of tissues, including liver, salivary gland, kidney, lung, and bone marrow. Bioassays confirmed that the quantitative estimates of vCJD prion accumulation provided by PMCA are indicative of vCJD infectivity levels in tissues. The findings provide critical data for the design of measures to minimize the risk for iatrogenic transmission of vCJD.

  3. Validity of spherical quantitative refractometry: application to laser-produced plasmas

    International Nuclear Information System (INIS)

    Benattar, R.; Popovics, C.

    1983-01-01

    We report an experimental laser technique of quantitative Schlieren imaging of spherical plasmas combined with streak camera recording. We show that quantitative refractometry applies for small values of refraction angles, i.e., when the law giving the refraction angle versus the impact parameter of rays passing through the plasma is a linearly decreasing function

  4. A quantitative framework for estimating water resources in India

    Digital Repository Service at National Institute of Oceanography (India)

    Shankar, D.; Kotamraju, V.; Shetye, S.R

    of information on the variables associated with hydrology, and second, the absence of an easily accessible quantitative framework to put these variables in perspective. In this paper, we discuss a framework that has been assembled to address both these issues...

  5. Quantitative neutron radiography using neutron absorbing honeycomb

    International Nuclear Information System (INIS)

    Tamaki, Masayoshi; Oda, Masahiro; Takahashi, Kenji; Ohkubo, Kohei; Tasaka, Kanji; Tsuruno, Akira; Matsubayashi, Masahito.

    1993-01-01

    This investigation concerns quantitative neutron radiography and computed tomography using a neutron-absorbing honeycomb collimator. By setting the honeycomb collimator between the object and the imaging system, neutrons scattered in the object were absorbed by the honeycomb material and eliminated before reaching the imaging system, while neutrons transmitted through the object without interaction could reach the imaging system. The image formed by purely transmitted neutrons gives quantitative information. Two honeycombs, coated with boron nitride and gadolinium oxide respectively, were prepared and evaluated for quantitative application. The relation between the neutron total cross section and the attenuation coefficient confirmed that they were in fairly good agreement. Application to quantitative computed tomography was also successfully conducted. The new neutron radiography method using the neutron-absorbing honeycomb collimator to eliminate scattered neutrons remarkably improved the quantitative accuracy of neutron radiography and computed tomography. (author)

  6. Stereological estimation of nuclear volume and other quantitative histopathological parameters in the prognostic evaluation of supraglottic laryngeal squamous cell carcinoma

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bennedbaek, O; Pilgaard, J

    1989-01-01

    The aim of this study was to investigate various approaches to the grading of malignancy in pre-treatment biopsies from patients with supraglottic laryngeal squamous cell carcinoma. The prospects of objective malignancy grading based on stereological estimation of the volume-weighted mean nuclear...... observers of the latter was poor in the material which consisted of 35 biopsy specimens. Unbiased estimates of nuclear Vv were on the average 385 microns3 (CV = 0.44), with more than 90% of the associated variance attributable to differences in nuclear Vv among individual lesions. Nuclear Vv was positively....... None of the investigated categorical and quantitative parameters (cutoff points = means) reached the level of significance with respect to prognostic value. However, nuclear Vv showed the best information concerning survival (2p = 0.08), and this estimator offers optimal features for objective...

  7. A quantitative lubricant test for deep drawing

    DEFF Research Database (Denmark)

    Olsson, David Dam; Bay, Niels; Andreasen, Jan L.

    2010-01-01

    A tribological test for deep drawing has been developed by which the performance of lubricants may be evaluated quantitatively measuring the maximum backstroke force on the punch owing to friction between tool and workpiece surface. The forming force is found not to give useful information...

  8. A test for Improvement of high resolution Quantitative Precipitation Estimation for localized heavy precipitation events

    Science.gov (United States)

    Lee, Jung-Hoon; Roh, Joon-Woo; Park, Jeong-Gyun

    2017-04-01

    Accurate estimation of precipitation is one of the most difficult and significant tasks in weather diagnostics and forecasting. On the Korean Peninsula, heavy precipitation is caused by various physical mechanisms, affected by short-wave troughs, quasi-stationary moisture convergence zones among varying air masses, and direct or indirect effects of tropical cyclones. In addition, various geographical and topographical factors make the temporal and spatial distribution of precipitation very complicated. In particular, localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances. Even with weather radar data of high temporal and spatial resolution, accurate estimation of rain rate from radar reflectivity is difficult. The Z-R relationship (Marshall and Palmer 1948) has been the representative approach. In addition, several methods such as support vector machines (SVM), neural networks, fuzzy logic, and kriging have been utilized to improve the accuracy of rain rates. These methods produce different quantitative precipitation estimates (QPE), and their accuracy differs across heavy precipitation cases. In this study, in order to improve the accuracy of QPE for localized heavy precipitation, an ensemble method combining different Z-R relationships and various calibration techniques was tested. This QPE ensemble method was developed on the concept of utilizing the respective advantages of the precipitation calibration methods. The ensemble members were produced from combinations of different Z-R coefficients and calibration methods.
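
    For reference, the classical Z-R conversion mentioned above takes only a few lines. The Marshall-Palmer coefficients a = 200 and b = 1.6 are the textbook values; an ensemble of the kind described would combine members computed with different (a, b) pairs and calibration methods.

        import numpy as np

        def rain_rate(dbz, a=200.0, b=1.6):
            # Invert the Z-R power law Z = a * R**b for rain rate R in mm/h.
            z = 10.0 ** (np.asarray(dbz) / 10.0)   # reflectivity factor, mm^6/m^3
            return (z / a) ** (1.0 / b)

        for dbz in (20, 35, 50):
            print(dbz, "dBZ ->", round(float(rain_rate(dbz)), 2), "mm/h")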

  9. Role-modeling and conversations about giving in the socialization of adolescent charitable giving and volunteering.

    Science.gov (United States)

    Ottoni-Wilhelm, Mark; Estell, David B; Perdue, Neil H

    2014-01-01

    This study investigated the relationship between the monetary giving and volunteering behavior of adolescents and the role-modeling and conversations about giving provided by their parents. The participants are a large nationally-representative sample of 12-18 year-olds from the Panel Study of Income Dynamics' Child Development Supplement (n = 1244). Adolescents reported whether they gave money and whether they volunteered. In a separate interview parents reported whether they talked to their adolescent about giving. In a third interview, parents reported whether they gave money and volunteered. The results show that both role-modeling and conversations about giving are strongly related to adolescents' giving and volunteering. Knowing that both role-modeling and conversation are strongly related to adolescents' giving and volunteering suggests an often overlooked way for practitioners and policy-makers to nurture giving and volunteering among adults: start earlier, during adolescence, by guiding parents in their role-modeling of, and conversations about, charitable giving and volunteering. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  10. Attitudes and Perceptions about Private Philanthropic Giving to Arizona Community Colleges and Universities: Implications for Practice

    Science.gov (United States)

    Martinez, George Andrew

    2009-01-01

    Wide disparity exists in philanthropic giving to public, two-year community colleges as compared to public, four-year universities. Recent estimates indicate that 0.5 to 5% of all private philanthropic giving to U.S. higher education annually goes to public, two-year community colleges, with the remainder going to public and private four-year…

  11. Quantitative phase analysis in industrial research

    International Nuclear Information System (INIS)

    Ahmad Monshi

    1996-01-01

    X-Ray Diffraction (XRD) is the only technique able to identify phases; all the other analytical techniques give information about the elements. Quantitative phase analysis of minerals and industrial products is logically the next step after a qualitative examination and is of great importance in industrial research. Since the application of XRD in industry, early in this century, workers have been trying to develop quantitative XRD methods. In this paper some of the important methods are briefly discussed and partly compared. These methods are Internal Standard, Known Additions, Double Dilution, External Standard, Direct Comparison, Diffraction Absorption and Ratio of Slopes

  12. Quantitative standard-less XRF analysis

    International Nuclear Information System (INIS)

    Ulitzka, S.

    2002-01-01

    Full text: For most analytical tasks in the mining and associated industries matrix-matched calibrations are used for the monitoring of ore grades and process control. In general, such calibrations are product specific (iron ore, bauxite, alumina, mineral sands, cement etc.) and apply to a relatively narrow concentration range but give the best precision and accuracy for those materials. A wide range of CRMs is available and for less common materials synthetic standards can be made up from 'pure' chemicals. At times, analysis of materials with varying matrices (powders, scales, fly ash, alloys, polymers, liquors, etc.) and diverse physical shapes (non-flat, metal drillings, thin layers on substrates etc.) is required that could also contain elements which are not part of a specific calibration. A qualitative analysis can provide information about the presence of certain elements and the relative intensities of element peaks in a scan can give a rough idea about their concentrations. More often however, quantitative values are required. The paper will look into the basics of quantitative standardless analysis and show results for some well-defined CRMs. Copyright (2002) Australian X-ray Analytical Association Inc

  13. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and estimation of the nondisplaceable ligand concentration, are also reviewed briefly

  14. Estimation of muscle fatigue by ratio of mean frequency to average rectified value from surface electromyography.

    Science.gov (United States)

    Fernando, Jeffry Bonar; Yoshioka, Mototaka; Ozawa, Jun

    2016-08-01

    A new method to estimate muscle fatigue quantitatively from surface electromyography (EMG) is proposed. The ratio of mean frequency (MNF) to average rectified value (ARV) is used as the index of muscle fatigue, and muscle fatigue is detected when MNF/ARV falls below a pre-determined or pre-calculated baseline. MNF/ARV gives a larger distinction between fatigued and non-fatigued muscle. Experimental results show the effectiveness of the method in estimating muscle fatigue more accurately than conventional methods. An early evaluation based on the initial value of MNF/ARV and the subjective time at which subjects start feeling fatigue also indicates the possibility of calculating the baseline from the initial value of MNF/ARV.
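
    A minimal sketch of the proposed index, assuming Welch's method for the power spectrum and a synthetic signal in place of a real EMG recording; the 0.9 baseline factor is an arbitrary illustration, not the authors' calibration.

        import numpy as np
        from scipy.signal import welch

        def mnf_arv_ratio(emg, fs):
            # Mean frequency (spectral centroid) over average rectified value.
            f, pxx = welch(emg, fs=fs, nperseg=min(1024, len(emg)))
            mnf = np.sum(f * pxx) / np.sum(pxx)   # Hz
            arv = np.mean(np.abs(emg))            # average rectified value
            return mnf / arv

        fs = 1000.0
        emg = np.random.default_rng(1).normal(size=5000)  # stand-in recording
        baseline = 0.9 * mnf_arv_ratio(emg, fs)           # hypothetical baseline
        fatigued = mnf_arv_ratio(emg, fs) < baseline      # fatigue flag
        print(fatigued)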

  15. Quantitative estimation of myocardial thickness by the wall thickness map with Tl-201 myocardial SPECT and its clinical use

    International Nuclear Information System (INIS)

    Sekiai, Yasuhiro; Sawai, Michihiko; Murayama, Susumu

    1988-01-01

    To estimate the wall thickness of left ventricular myocardium objectively and quantitatively, we adopted the device of the wall thickness map (WTM) with Tl-201 myocardial SPECT. To validate the measurement of left ventricular wall thickness with SPECT, fundamental studies were carried out with phantom models, and clinical studies were performed in 10 cases comparing the results from SPECT with those from echocardiography. To draw the WTM, left ventricular wall thickness was measured using the cut-off method from SPECT images obtained at 5.6 mm intervals: short-axis images for the base and middle of the left ventricle and vertical and horizontal long-axis images for the apical region. Wall thickness was defined from the number of pixels above the cut-off level. The fundamental studies disclosed that Tl-201 myocardial SPECT cannot evaluate thicknesses of less than 10 mm but can discriminate wall thicknesses of 10 mm, 15 mm, and 20 mm. Echocardiographic results supported the validity of the WTM, showing a good linear correlation (r = 0.96) between the two methods for measuring left ventricular wall thickness. We conclude that the WTM applied in this report may be useful for objective and quantitative estimation of myocardial hypertrophy. (author)

  16. Quantitative analysis of coupler tuning

    International Nuclear Information System (INIS)

    Zheng Shuxin; Cui Yupeng; Chen Huaibi; Xiao Liling

    2001-01-01

    On the basis of a coupling-cavity chain equivalent-circuit model, the author derives the equation relating coupler frequency deviation Δf and coupling coefficient β, instead of only giving the adjustment direction in the process of matching the coupler. According to this equation, automatic measurement and quantitative display are realized on a measuring system. This contributes to the industrialization of traveling-wave accelerators for large container inspection systems

  17. Modified Weighted Kaplan-Meier Estimator

    Directory of Open Access Journals (Sweden)

    Mohammad Shafiq

    2007-01-01

    Full Text Available In many medical studies the majority of study subjects do not reach the event of interest during the study period. In such situations survival probabilities can be estimated for censored observations by the Kaplan-Meier estimator. However, in case of heavy censoring these estimates are biased and overestimate the survival probabilities. For heavy censoring a new method was proposed (Bahrawar Jan, 2005) to estimate the survival probabilities by weighting the censored observations by the non-censoring rate. But the main defect of this weighted method is that it gives zero weight to the last censored observation. To overcome this difficulty a new weight is proposed which also gives a non-zero weight to the last censored observation.
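
    For context, a bare-bones sketch of the standard (unweighted) product-limit estimator that the proposal modifies; the reweighting of censored observations described above is not reproduced here.

        import numpy as np

        def kaplan_meier(times, events):
            # S(t) = product over event times t_i <= t of (1 - d_i / n_i),
            # where n_i is the number still at risk just before t_i.
            order = np.argsort(times)
            times = np.asarray(times, dtype=float)[order]
            events = np.asarray(events, dtype=bool)[order]
            n, s, curve = len(times), 1.0, []
            for i, (t, e) in enumerate(zip(times, events)):
                if e:               # an event; censored points leave S unchanged
                    s *= 1.0 - 1.0 / (n - i)
                curve.append((t, s))
            return curve

        print(kaplan_meier([2, 3, 5, 8, 11], [1, 0, 1, 0, 1]))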

  18. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    Science.gov (United States)

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%) than that measured using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%) than those from the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  19. Giving presentations

    CERN Document Server

    Ellis, Mark

    1997-01-01

    This is part of a series of books, which gives training in key business communication skills. Emphasis is placed on building awareness of language appropriateness and fluency in typical business interactions. This new edition is in full colour.

  20. Evaluating LMA and CLAMP: Using information criteria to choose a model for estimating elevation

    Science.gov (United States)

    Miller, I.; Green, W.; Zaitchik, B.; Brandon, M.; Hickey, L.

    2005-12-01

    The morphology of leaves and the composition of the flora respond strongly to the moisture and temperature of their environment. Elevation and latitude correlate, at first order, with these atmospheric parameters. An obvious modern example of this relationship between leaf morphology and environment is the tree line, where boreal forests give way to arctic (high latitude) or alpine (high elevation) tundra. Several quantitative methods, all of which rely on uniformitarianism, have been developed to estimate paleoelevation using fossil leaf morphology. These include 1) univariate leaf-margin analysis (LMA), which estimates mean annual temperature (MAT) from the positive linear correlation between MAT and P, the proportion of entire (smooth) to non-entire (toothed) margined woody dicot angiosperm leaves within a flora, and 2) the Climate Leaf Analysis Multivariate Program (CLAMP), which uses Canonical Correspondence Analysis (CCA) to estimate MAT, moist enthalpy, and other atmospheric parameters using 31 explanatory leaf characters from woody dicot angiosperms. Given a difference in leaf-estimated MAT or moist enthalpy between contemporaneous, synlatitudinal fossil floras, one at sea level and the other at an unknown paleoelevation, paleoelevation may be estimated. These methods have been widely applied to orogenic settings, concentrated particularly in the western US. We introduce the use of information criteria to compare different models for estimating elevation and show that the additional complexity of the CLAMP analytical methodology does not necessarily improve on the elevation estimates produced by simpler regression models. In addition, we discuss the signal-to-noise ratio in the data, give confidence intervals for detecting elevations, and address the problem of spatial autocorrelation and irregular sampling in the data.

  1. The Effect of Media on Charitable Giving and Volunteering: Evidence from the "Give Five" Campaign

    Science.gov (United States)

    Yoruk, Baris K.

    2012-01-01

    Fundraising campaigns advertised via mass media are common. To what extent such campaigns affect charitable behavior is mostly unknown, however. Using giving and volunteering surveys conducted biennially from 1988 to 1996, I investigate the effect of a national fundraising campaign, "Give Five," on charitable giving and volunteering patterns. The…

  2. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    Science.gov (United States)

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomograms (SPECT) without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back-projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). During quantitative estimation using ACZ, the mean CBF values of 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation

  3. Quantitative digital radiography with two dimensional flat panels

    International Nuclear Information System (INIS)

    Dinten, J.M.; Robert-Coutant, C.; Darboux, M.

    2003-01-01

    Purpose: The attenuation law relates radiographic images to irradiated object thickness and chemical composition. Film radiography exploits this property qualitatively for diagnosis. Digital radiographic flat panels offer the large dynamic range, reproducibility and linearity that open the way to quantification. We present, through two applications (mammography and bone densitometry), an approach to extract quantitative information from digital 2D radiographs. Material and method: The main difficulty for quantification is X-ray scatter, which is superimposed on the acquisition data. Because of multiple scattering and its dependence on the 3D geometry, it cannot be described directly by an exact analytical model. We have therefore developed an approach for its estimation and subtraction from medical radiographs, based on approximations and derivations of analytical models of scatter formation in human tissues. Results: In digital mammography, the objective is to build a map of the glandular tissue thickness. Its separation from fat tissue is based on two equations: one for the compression height and one for the attenuation. The attenuation equation requires X-ray scatter correction. In bone densitometry, physicians look for quantitative bone mineral density. Today, clinical DEXA systems use collimated single or linear detectors to eliminate scatter; this scanning technology yields poor image quality. By applying our scatter correction approach, we have developed a bone densitometer using a digital flat panel (Lexxos, DMS). It provides accurate and reproducible measurements while offering radiological image quality. Conclusion: These applications show how information processing, and especially X-ray scatter processing, enables quantitative information to be extracted from digital radiographs. This approach, combined with computer-aided diagnosis algorithms or reconstruction algorithms, gives access to information useful for diagnosis. (author)
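
    A schematic version of the two-equation separation described for mammography, assuming scatter has already been subtracted; the attenuation coefficients below are hypothetical placeholders for the calibrated values a real system would use.

        import numpy as np

        MU_GLAND, MU_FAT = 0.80, 0.45   # hypothetical coefficients, 1/cm

        def glandular_thickness(i, i0, height_cm):
            # Solve  t_g + t_f = H  and  i = i0 * exp(-(mu_g*t_g + mu_f*t_f))
            # (scatter-corrected attenuation) for the glandular thickness t_g.
            total = -np.log(i / i0)
            t_g = (total - MU_FAT * height_cm) / (MU_GLAND - MU_FAT)
            return float(np.clip(t_g, 0.0, height_cm))

        print(glandular_thickness(i=0.05, i0=1.0, height_cm=5.0))  # ~2.1 cm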

  4. Quantitative autoradiography - a method of radioactivity measurement

    International Nuclear Information System (INIS)

    Treutler, H.C.; Freyer, K.

    1988-01-01

    In recent years autoradiography has been developed into a quantitative method of radioactivity measurement. Operating techniques of quantitative autoradiography are demonstrated using special standard objects. The influences of irradiation quality, of backscattering in sample and detector materials, and of the sensitivity and fading of the detectors are considered. Furthermore, questions of the quantitative evaluation of autoradiograms are dealt with, and measuring errors are discussed. Finally, some practical uses of quantitative autoradiography are demonstrated by means of the estimation of activity distributions in radioactive foil samples. (author)

  5. Mapping the imaginary of charitable giving

    DEFF Research Database (Denmark)

    Bajde, Domen

    2012-01-01

    The meaningfulness of charitable giving is largely owed to the imaginary conceptions that underpin this form of giving. Building on Taylor's notion of "social imaginary" and Godelier's work on "gift imaginary," we theorize the imaginary of charitable giving. Through a combination of qualitative m...across relatively stable assemblages of conceptions of poverty, donors, end-recipients and charitable giving. These assemblages are suggested to form a multifaceted imaginary that is both cultural (shared) and personal (individually performed).

  6. Spin tunneling in magnetic molecules: Quantitative estimates for Fe8 clusters

    Science.gov (United States)

    Galetti, D.; Silva, Evandro C.

    2007-12-01

    Spin tunneling in the particular case of the magnetic molecular cluster octanuclear iron(III), Fe8, is treated by an effective Hamiltonian that allows for an angle-based description of the process. The presence of an external magnetic field along the easy axis is also taken into account in this description. Analytic expressions for the energy levels and barriers are obtained from a harmonic approximation of the potential function, which gives results in good agreement with experiment. The energy splittings due to spin tunneling are treated in an adapted WKB approach, and it is shown that the present description can give results to a reliable degree of accuracy.

  7. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade.

    Science.gov (United States)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-22

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.

  9. Quantitative tools for addressing hospital readmissions

    Directory of Open Access Journals (Sweden)

    Lagoe Ronald J

    2012-11-01

    Full Text Available Abstract Background Increased interest in health care cost containment is focusing attention on reduction of hospital readmissions. Major payors have already developed financial penalties for providers that generate excess readmissions. This subject has benefitted from the development of resources such as the Potentially Preventable Readmissions software, which has encouraged hospitals to renew efforts to improve these outcomes. The aim of this study was to describe quantitative tools such as definitions, risk estimation, and tracking of patients for reducing hospital readmissions. Findings This study employed the Potentially Preventable Readmissions software to develop quantitative tools for addressing hospital readmissions. These tools included two definitions of readmissions that support identification and management of patients, analytical approaches for estimating the risk of readmission for individual patients by age, discharge status of the initial admission, and severity of illness, and patient-specific spreadsheets for tracking target populations and for evaluating the impact of interventions. Conclusions The study demonstrated that quantitative tools including definitions of readmissions, estimation of the risk of readmission, and patient-specific spreadsheets could contribute to the improvement of patient outcomes in hospitals.

  10. Identification and quantitative grade estimation of Uranium mineralization based on gross-count gamma ray log at Lemajung sector West Kalimantan

    International Nuclear Information System (INIS)

    Adi Gunawan Muhammad

    2014-01-01

    Lemajung sector is one of the uranium potential sectors in the Kalan area, West Kalimantan. Uranium mineralization is found in metasiltstone and schistose metapelite rock, with a general east-west direction of mineralization tilted ±70° to the north, parallel with the schistosity pattern (S1). A drilling evaluation was implemented in 2013 in the Lemajung sector at hole R-05 (LEML-40) with gross-count gamma ray logging. The purpose of this activity is to determine the uranium mineralization grade quantitatively in the rocks and to determine the geological conditions surrounding the drilling area. The methodology involves determining the value of the k-factor, geological mapping around the drill hole, and determination of the thickness and grade of uranium mineralization with the gross-count gamma ray log. Quantitative grade estimation using the gross-count gamma ray log shows that the highest %eU3O8 in hole R-05 (LEML-40) reaches 0.7493 (≈6354 ppm eU), found at the depth interval from 30.1 to 34.96 m. Uranium mineralization is present as fracture filling (veins) or tectonic breccia matrix filling in metasiltstone, with thickness from 0.10 to 2.40 m, associated with sulphide (pyrite) and characterized by a high U/Th ratio. (author)
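
    A schematic sketch of gross-count grade estimation, in which the average grade over an anomaly is taken as a calibration k-factor times the area under the count-rate curve divided by the anomaly thickness; the count rates and k-factor below are made up for illustration and are not values from the paper.

        import numpy as np

        def grade_eU3O8(counts_per_s, depths_m, k_factor):
            # Trapezoidal area under the count-rate curve (counts/s * m),
            # divided by the anomaly thickness and scaled by the calibration
            # k-factor, gives the average grade in % eU3O8.
            area = float(np.sum(0.5 * (counts_per_s[1:] + counts_per_s[:-1])
                                * np.diff(depths_m)))
            thickness = depths_m[-1] - depths_m[0]
            return k_factor * area / thickness

        depths = np.arange(30.1, 35.0, 0.1)               # anomaly interval, m
        cps = np.full(depths.size, 4000.0)                # made-up count-rate plateau
        print(grade_eU3O8(cps, depths, k_factor=1.9e-4))  # ~0.76 % eU3O8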

  11. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography.

    Science.gov (United States)

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar

    2009-08-25

    Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. There are to date only sparse data available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95 respectively) without any significant bias (-0.5 ± 3.7% and -0.2 ± 2.9% respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF both using 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was however not statistically significant.

  12. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but many questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results, which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
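
    As a concrete illustration of the specification issue, the sketch below builds one possible design matrix for a single AB phase contrast, with columns for intercept, baseline trend, level change and trend change; the simulated data and column choices are illustrative assumptions, not the article's full set of matrices.

        import numpy as np

        t = np.arange(12.0)                  # measurement occasions
        phase = (t >= 6).astype(float)       # 0 = baseline (A), 1 = treatment (B)
        X = np.column_stack([np.ones_like(t),      # intercept
                             t,                    # baseline trend
                             phase,                # immediate level change
                             phase * (t - 6.0)])   # change in trend after onset

        rng = np.random.default_rng(2)
        y = 2.0 + 0.1 * t + 3.0 * phase + rng.normal(0.0, 0.5, t.size)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("estimated level change:", round(beta[2], 2))  # effect size of interest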

  13. An overview of quantitative approaches in Gestalt perception.

    Science.gov (United States)

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)

    Science.gov (United States)

    Silvestri, Giulia

    2010-05-01

    Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered to be one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are widely distributed also in the fossil record, and their role is supposed to be analogously important in determining the flourishing vs demise of coral bioconstructions. Nevertheless, many aspects concerning the environmental factors controlling the incidence of bioerosion, shifts in the composition of macroboring communities and the estimation of bioerosion rates in different contexts are still poorly documented and understood. This study presents an attempt to quantify bioerosion rates on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy) deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features, such as fragmentation and abrasion, suggest relatively quiet waters where coral remains were deposited almost in situ. Thus taphonomic signatures occurring on corals can be reliably used to reconstruct the environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within the coral facies and consists of a limited suite of traces, mostly referred to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of bioerosion rate along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist of the elaboration of high resolution scanned thin sections through software for image analysis (Photoshop CS3) and point

  15. The Practical Realities of Giving Back

    Directory of Open Access Journals (Sweden)

    Ashton Bree Wesner

    2014-07-01

    Full Text Available In this thematic section, authors consider practical ways of giving back to the communities in which they conduct research. Each author discusses their evolving thoughts on how to give back in these practical ways. Some of these authors discuss giving back by giving money, food, rides, parties, and water bottles. In other cases, authors discuss giving back by creating jobs in the short or long term, grant writing, advocacy, and education. Story-telling is also a theme that many of the authors in this section discuss. For some authors, non-material forms of giving back are critical—simply maintaining social ties to the communities in which they worked, or sharing humor. The authors consider the utility of their attempts at giving back, and in some cases present their personal philosophy or guidelines on the subject.

  16. Quantitative hyperbolicity estimates in one-dimensional dynamics

    International Nuclear Information System (INIS)

    Day, S; Kokubu, H; Pilarczyk, P; Luzzatto, S; Mischaikow, K; Oka, H

    2008-01-01

    We develop a rigorous computational method for estimating the Lyapunov exponents in uniformly expanding regions of the phase space for one-dimensional maps. Our method uses rigorous numerics and graph algorithms to provide results that are mathematically meaningful and can be achieved in an efficient way
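
    For orientation, the quantity being estimated is the Lyapunov exponent λ = lim (1/n) Σ log|f'(x_k)|. The sketch below computes a naive floating-point estimate for the logistic map; the paper replaces such non-rigorous iteration with validated interval arithmetic and graph algorithms.

        import math

        def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1_000):
            # Average log|f'(x)| along an orbit of f(x) = r*x*(1-x),
            # where f'(x) = r*(1 - 2*x).
            x = x0
            for _ in range(burn):
                x = r * x * (1.0 - x)
            acc = 0.0
            for _ in range(n):
                acc += math.log(abs(r * (1.0 - 2.0 * x)))
                x = r * x * (1.0 - x)
            return acc / n

        print(lyapunov_logistic())   # ~ log 2 = 0.693 for r = 4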

  17. Quantitative cardiac computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Thelen, M.; Dueber, C.; Wolff, P.; Erbel, R.; Hoffmann, T.

    1985-06-01

    The scope and limitations of quantitative cardiac CT have been evaluated in a series of experimental and clinical studies. The left ventricular muscle mass was estimated by computed tomography in 19 dogs (using volumetric methods, measurements in two axes and planes, and a reference volume). There was good correlation with anatomical findings. The end-diastolic volume of the left ventricle was estimated in 22 patients with cardiomyopathies; using angiography as a reference, CT led to systematic underestimation. It is also shown that ECG-triggered magnetic resonance tomography results in improved visualisation and may be expected to improve measurements of cardiac morphology.

  18. Inequality and redistribution behavior in a give-or-take game

    Science.gov (United States)

    Bechtel, Michael M.; Scheve, Kenneth F.

    2018-01-01

    Political polarization and extremism are widely thought to be driven by the surge in economic inequality in many countries around the world. Understanding why inequality persists depends on knowing the causal effect of inequality on individual behavior. We study how inequality affects redistribution behavior in a randomized “give-or-take” experiment that created equality, advantageous inequality, or disadvantageous inequality between two individuals before offering one of them the opportunity to either take from or give to the other. We estimate the causal effect of inequality in representative samples of German and American citizens (n = 4,966) and establish two main findings. First, individuals imperfectly equalize payoffs: On average, respondents transfer 12% of the available endowments to realize more equal wealth distributions. This means that respondents tolerate a considerable degree of inequality even in a setting in which there are no costs to redistribution. Second, redistribution behavior in response to disadvantageous and advantageous inequality is largely asymmetric: Individuals who take from those who are richer do not also tend to give to those who are poorer, and individuals who give to those who are poorer do not tend to take from those who are richer. These behavioral redistribution types correlate in meaningful ways with support for heavy taxes on the rich and the provision of welfare benefits for the poor. Consequently, it seems difficult to construct a majority coalition willing to back the type of government interventions needed to counter rising inequality. PMID:29555734

  19. Allometric Models Based on Bayesian Frameworks Give Better Estimates of Aboveground Biomass in the Miombo Woodlands

    Directory of Open Access Journals (Sweden)

    Shem Kuyah

    2016-02-01

    Full Text Available The miombo woodland is the most extensive dry forest in the world, with the potential to store substantial amounts of biomass carbon. Efforts to obtain accurate estimates of carbon stocks in the miombo woodlands are limited by a general lack of biomass estimation models (BEMs). This study aimed to evaluate the accuracy of the most commonly employed allometric models for estimating aboveground biomass (AGB) in miombo woodlands, and to develop new models that enable more accurate estimation of biomass in the miombo woodlands. A generalizable mixed-species allometric model was developed from 88 trees belonging to 33 species ranging in diameter at breast height (DBH) from 5 to 105 cm using Bayesian estimation. A power law model with DBH alone performed better than both a polynomial model with DBH and the square of DBH, and models including height and crown area as additional variables along with DBH. The accuracy of estimates from published models varied across different sites and trees of different diameter classes, and was lower than that of estimates from our model. The model developed in this study can be used to establish conservative carbon stocks required to determine avoided emissions in performance-based payment schemes, for example in afforestation and reforestation activities.
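
    A minimal sketch of fitting the power-law form AGB = a * DBH**b by ordinary log-log least squares on synthetic data; the study itself used Bayesian estimation, and the coefficients below are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(3)
        dbh = rng.uniform(5.0, 105.0, 88)                      # cm, mirroring n = 88
        agb = 0.12 * dbh ** 2.5 * rng.lognormal(0.0, 0.3, 88)  # synthetic biomass, kg

        # log(AGB) = log(a) + b * log(DBH) is linear, so ordinary least
        # squares in log space recovers the power-law parameters.
        b, log_a = np.polyfit(np.log(dbh), np.log(agb), 1)
        print("a =", round(float(np.exp(log_a)), 3), "b =", round(float(b), 3))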

  20. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1992-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  1. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1993-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  2. Methodologies for quantitative systems pharmacology (QSP) models : Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, Hp; Agoram, B.; Davies, M.R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, Ph.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  4. Identification of Case Content with Quantitative Network Analysis

    DEFF Research Database (Denmark)

    Christensen, Martin Lolle; Olsen, Henrik Palmer; Tarissan, Fabian

    2016-01-01

    the relevant articles. In order to enhance information retrieval about case content, without relying on manual labor and subjective judgment, we propose in this paper a quantitative method that gives a better indication of case content in terms of which articles a given case is more closely associated with...

  5. Authorization of the personnel of the Isotope Centre for the performance of functions related to safety and radiological protection

    International Nuclear Information System (INIS)

    Perez Pijuan, S.; Hernandez Alvarez, R.; Peres Reyes, Y.; Venegas Bernal, M.C.

    1998-01-01

    The approach used in a centre producing labelled compounds and radiopharmaceuticals for the authorization of the support, operation and supervision personnel is described. The criteria used to define the positions relevant to the safety of the installation are presented. The training programs are described, designed starting from the identification of the specific competencies for each duty station and with particular emphasis on the development of practical skills. The Automated System for the Administration of Training Programs (GESAT) is used for the administration and evaluation of the training programs

  6. Radar-derived quantitative precipitation estimation in complex terrain over the eastern Tibetan Plateau

    Science.gov (United States)

    Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin

    2018-05-01

    Quantitative precipitation estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability in precipitation microphysics. This paper develops two radar QPE schemes, based respectively on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms, using observations from 11 Doppler weather radars and 3264 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of score comparison using validation gauge measurements as references. It is also found that the SCIT-based approach can effectively mitigate the local error of radar QPE and represent the precipitation spatiotemporal variability better than the RT-based scheme.
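
    Gauge-based verification of the kind used here reduces to comparing radar estimates with collocated gauge amounts. The sketch below computes two simple scores, mean bias and RMSE, on made-up values; the paper's exact score set is not reproduced.

        import numpy as np

        def bias_rmse(radar_mm, gauge_mm):
            # Mean bias and root-mean-square error of radar QPE against gauges.
            r, g = np.asarray(radar_mm, float), np.asarray(gauge_mm, float)
            return float(np.mean(r - g)), float(np.sqrt(np.mean((r - g) ** 2)))

        print(bias_rmse([12.1, 5.0, 30.2], [10.0, 6.2, 28.0]))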

  7. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    Science.gov (United States)

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability of quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimation of microbial counts, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) was established. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, appropriate for traditional plate count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
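
    Where components can be treated as independent, relative uncertainties are commonly combined in quadrature. A minimal sketch with three hypothetical components (the values are not from the paper):

        import math

        def combined_relative_uncertainty(components):
            # u_c = sqrt(sum of u_i**2) for independent relative components.
            return math.sqrt(sum(u * u for u in components))

        # e.g. microorganism type, product effect, reading/interpretation error
        print(combined_relative_uncertainty([0.25, 0.15, 0.18]))  # ~0.34, i.e. 34%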

  8. Estimation of quantitative levels of diesel exhaust exposure and the health impact in the contemporary Australian mining industry.

    Science.gov (United States)

    Peters, Susan; de Klerk, Nicholas; Reid, Alison; Fritschi, Lin; Musk, Aw Bill; Vermeulen, Roel

    2017-03-01

    To estimate quantitative levels of exposure to diesel exhaust, expressed by elemental carbon (EC), in the contemporary mining industry and to describe the excess risk of lung cancer that may result from those levels. EC exposure has been monitored in Western Australian miners since 2003. Mixed-effects models were used to estimate EC levels for five surface and five underground occupation groups (as a fixed effect) and specific jobs within each group (as a random effect). Further fixed effects included sampling year and duration, and mineral mined. On the basis of published risk functions, we estimated excess lifetime risk of lung cancer mortality for several employment scenarios. Personal EC measurements (n=8614) were available for 146 different jobs at 124 mine sites. The mean estimated EC exposure level for surface occupations in 2011 was 14 µg/m3 for 12-hour shifts. Levels for underground occupation groups ranged from 18 to 44 µg/m3. Underground diesel loader operators had the most highly exposed specific job: 59 µg/m3. A lifetime career (45 years) as a surface worker or underground miner, at exposure levels as estimated for 2011 (14 and 44 µg/m3 EC), was associated with 5.5 and 38 extra lung cancer deaths per 1000 males, respectively. EC exposure levels in the contemporary Australian mining industry are still substantial, particularly for underground workers. The estimated excess numbers of lung cancer deaths associated with these exposures support the need for implementation of stringent occupational exposure limits for diesel exhaust. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  9. Giving what one should: explanations for the knowledge-behavior gap for altruistic giving.

    Science.gov (United States)

    Blake, Peter R

    2018-04-01

    Several studies have shown that children struggle to give what they believe they should: the so-called knowledge-behavior gap. Over a dozen recent Dictator Game studies find that, although young children believe that they should give half of a set of resources to a peer, they typically give less and often keep all of the resources for themselves. This article reviews recent evidence for five potential explanations for the gap and how children close it with age: self-regulation, social distance, theory of mind, moral knowledge, and social learning. I conclude that self-regulation, social distance, and social learning show the most promising evidence for understanding the mechanisms that can close the gap. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography

    Directory of Open Access Journals (Sweden)

    Manouras Aristomenis

    2009-08-01

    Full Text Available Abstract Background Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. To date there are only sparse data comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Methods Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and the TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. Results There was an excellent correlation between eyeballing EF by 2D and TP versus 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 ± 3.7% and -0.2 ± 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP, and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Conclusion Visual estimation of LVEF both by 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was however not statistically significant.
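
    The agreement statistics quoted here (correlation plus bias ± SD of the paired differences) are straightforward to compute; a minimal sketch with made-up paired EF values (not study data):

    import numpy as np

    def agreement(visual_ef, rt3de_ef):
        """Pearson correlation and bias (mean difference +/- SD) for
        paired EF estimates, both in percent."""
        visual_ef = np.asarray(visual_ef, dtype=float)
        rt3de_ef = np.asarray(rt3de_ef, dtype=float)
        r = np.corrcoef(visual_ef, rt3de_ef)[0, 1]
        diff = visual_ef - rt3de_ef
        return r, diff.mean(), diff.std(ddof=1)

    # Made-up paired measurements, not study data:
    r, bias, sd = agreement([55, 40, 62, 30, 48], [56, 41, 61, 32, 47])
    print(f"r = {r:.2f}, bias = {bias:.1f} +/- {sd:.1f} %")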

  11. The Relative Performance of High Resolution Quantitative Precipitation Estimates in the Russian River Basin

    Science.gov (United States)

    Bytheway, J. L.; Biswas, S.; Cifelli, R.; Hughes, M.

    2017-12-01

    The Russian River carves a 110-mile path through Mendocino and Sonoma counties in western California, providing water for thousands of residents and acres of agriculture as well as a home for several species of endangered fish. The Russian River basin receives almost all of its precipitation during the October through March wet season, and the systems bringing this precipitation are often impacted by atmospheric river events as well as the complex topography of the region. This study will examine the performance of several high-resolution (hourly) precipitation products and forecasts over the 2015-2016 and 2016-2017 wet seasons. Comparisons of event-total rainfall as well as hourly rainfall will be performed using 1) rain gauges operated by the National Oceanic and Atmospheric Administration (NOAA) Physical Sciences Division (PSD), 2) products from the Multi-Radar/Multi-Sensor (MRMS) QPE dataset, and 3) quantitative precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model at 1, 3, 6, and 12 hour lead times. Further attention will be given to cases or locations exhibiting large disparities between the estimates.

  12. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  13. Simulation evaluation of quantitative myocardial perfusion assessment from cardiac CT

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-03-01

    Contrast enhancement on cardiac CT provides valuable information about myocardial perfusion, and methods have been proposed to assess perfusion with static and dynamic acquisitions. There is a lack of knowledge and consensus on the appropriate approach to ensure (1) sufficient diagnostic accuracy for clinical decisions and (2) low radiation doses for patient safety. This work developed a thorough dynamic CT simulation and several accepted blood flow estimation techniques to evaluate the performance of perfusion assessment across a range of acquisition and estimation scenarios. Cardiac CT acquisitions were simulated for a range of flow states (flow = 0.5, 1, 2, 3 ml/g/min; cardiac output = 3, 5, 8 L/min). CT acquisitions were simulated with a validated CT simulator incorporating polyenergetic data acquisition and realistic x-ray flux levels for dynamic acquisitions with a range of scenarios including 1, 2, and 3 sec sampling for 30 sec with 25, 70, and 140 mAs. Images were generated using conventional image reconstruction with additional image-based beam hardening correction to account for iodine content. Time attenuation curves were extracted for multiple regions around the myocardium and used to estimate flow. In total, 2,700 independent realizations of dynamic sequences were generated, and multiple MBF estimation methods were applied to each of these. Evaluation of quantitative kinetic modeling yielded blood flow estimates with a root mean square error (RMSE) of ~0.6 ml/g/min averaged across multiple scenarios. Semi-quantitative modeling and qualitative static imaging resulted in significantly more error (RMSE = ~1.2 ml/g/min for each). For quantitative methods, dose reduction through reduced temporal sampling or reduced tube current had comparable impact on MBF estimate fidelity. On average, half-dose acquisitions increased the RMSE of estimates by only 18%, suggesting that substantial dose reductions can be employed in the context of quantitative myocardial perfusion assessment.

  14. Secondary dentine as a sole parameter for age estimation: Comparison and reliability of qualitative and quantitative methods among North Western adult Indians

    Directory of Open Access Journals (Sweden)

    Jasbir Arora

    2016-06-01

    Full Text Available The ability of teeth to withstand most environmental insults makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared, and the amount of secondary dentine formed was scored qualitatively according to Gustafson's 0–3 scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Out of 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici's method. The absolute mean error of age was calculated for both methodologies. Results clearly showed that, in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect on age estimation by Gustafson's method. No significant gender differences were noted in the absolute mean error of age by either method, which suggests that there is no need to separate data on the basis of gender.

  15. How to Safely Give Ibuprofen

    Science.gov (United States)

    ... of ibuprofen are available in similar forms. How to Give: When giving ibuprofen, refer to the following dosage ...

  16. The Limits to Giving Back

    Directory of Open Access Journals (Sweden)

    Jade S. Sasser

    2014-07-01

    Full Text Available In this thematic section, authors consider the limitations on giving back that they faced in field research, or saw others face. For some authors, their attempts at giving back were severely limited by the scope of their projects or by their understanding of local cultures or histories. For others, very specific circumstances and historical interventions of foreigners in certain places can limit how and to what extent a researcher is able to have a reciprocal relationship with the participating community. Some authors, by virtue of their lesser positions of power relative to those they were studying, simply decided not to give back to those communities. In each article it becomes apparent that how and in what ways people give back is unique (and limited), shaped both by personal values and by the contexts in which the research is done.

  17. Quantitative estimates of coral reef substrate and species type derived objectively from photographic images taken at twenty-eight sites in the Hawaiian islands, 2002-2004 (NODC Accession 0002313)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of CRAMP surveys taken in 2002-2004 and includes quantitative estimates of substrate and species type. From the data percent coverage of a...

  18. Development of a quantitative lubricant test for deep drawing

    DEFF Research Database (Denmark)

    Olsson, David Dam; Bay, Niels; Andreasen, Jan Lasson

    2004-01-01

    A tribological test for deep drawing has been developed by which the performance of lubricants may be evaluated quantitatively by measuring the maximum backstroke force on the punch due to sliding friction between the tool and workpiece surfaces. The forming force is found not to give useful information...

  19. Quantitative radiography

    International Nuclear Information System (INIS)

    Brase, J.M.; Martz, H.E.; Waltjen, K.E.; Hurd, R.L.; Wieting, M.G.

    1986-01-01

    Radiographic techniques have been used in nondestructive evaluation primarily to develop qualitative information (i.e., defect detection). This project applies and extends the techniques developed in medical x-ray imaging, particularly computed tomography (CT), to develop quantitative information (both spatial dimensions and material quantities) on the three-dimensional (3D) structure of solids. Accomplishments in FY 86 include (1) improvements in experimental equipment - an improved microfocus system that will give 20-μm resolution and has potential for increased imaging speed, and (2) development of a simple new technique for displaying 3D images so as to clearly show the structure of the object. Image reconstruction and data analysis for a series of synchrotron CT experiments conducted by LLNL's Chemistry Department have begun.

  20. Quantitative estimation of viable myocardium in the infarcted zone by infarct-redistribution map from images of exercise thallium-201 emission computed tomography

    International Nuclear Information System (INIS)

    Sekiai, Yasuhiro

    1988-01-01

    To evaluate, quantitatively, the viable myocardium in the infarcted zone, we invented the infarct-redistribution map, which is produced from exercise thallium-201 emission computed tomography images obtained in 10 healthy subjects and 20 patients with myocardial infarction. The map displays a left ventricle in which the infarcted areas with and without redistribution, the redistribution area without infarction, and the normally perfused area are shown separately on the same screen. In this display, the nonredistribution infarct lesion appears surrounded by the redistribution area. Indices of infarct and redistribution extent (defect score, % defect, redistribution ratio (RR), and redistribution index (RI)) were derived from the map and used for quantitative analysis of the redistribution area and as the basis for comparative discussion of regional wall motion of the left ventricle. The quantitative indices of defect score, % defect, RR, and RI were consistent with the visual assessment of planar images in detecting the extent of redistribution. Furthermore, defect score and % defect had an inverse linear relationship with % shortening (r = -0.573; p < 0.05 and r = -0.536; p < 0.05, respectively), and RI had a good linear relationship with % shortening (r = 0.669; p < 0.01). We conclude that the infarct-redistribution map accurately reflects myocardial viability and therefore may be useful for quantitative estimation of viable myocardium in the infarcted zone. (author)

  1. Multiparametric Quantitative Ultrasound Imaging in Assessment of Chronic Kidney Disease.

    Science.gov (United States)

    Gao, Jing; Perlman, Alan; Kalache, Safa; Berman, Nathaniel; Seshan, Surya; Salvatore, Steven; Smith, Lindsey; Wehrli, Natasha; Waldron, Levi; Kodali, Hanish; Chevalier, James

    2017-11-01

    To evaluate the value of multiparametric quantitative ultrasound imaging in assessing chronic kidney disease (CKD) using kidney biopsy pathologic findings as reference standards. We prospectively measured multiparametric quantitative ultrasound markers with grayscale, spectral Doppler, and acoustic radiation force impulse imaging in 25 patients with CKD before kidney biopsy and 10 healthy volunteers. Based on all pathologic (glomerulosclerosis, interstitial fibrosis/tubular atrophy, arteriosclerosis, and edema) scores, the patients with CKD were classified into mild and moderate to severe groups. Quantitative ultrasound parameters included kidney length, cortical thickness, pixel intensity, parenchymal shear wave velocity, intrarenal artery peak systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index. We tested differences in quantitative ultrasound parameters among mild CKD, moderate to severe CKD, and healthy controls using analysis of variance, analyzed correlations of quantitative ultrasound parameters with pathologic scores and the estimated glomerular filtration rate (GFR) using Pearson correlation coefficients, and examined the diagnostic performance of quantitative ultrasound parameters in determining moderate to severe CKD and an estimated GFR of less than 60 mL/min/1.73 m² using receiver operating characteristic curve analysis. There were significant differences in cortical thickness, pixel intensity, PSV, and EDV among the 3 groups (all significant). Among quantitative ultrasound parameters, the top areas under the receiver operating characteristic curves for PSV and EDV were 0.88 and 0.97, respectively, for determining pathologic moderate to severe CKD, and 0.76 and 0.86 for an estimated GFR of less than 60 mL/min/1.73 m². Moderate to good correlations were found for PSV, EDV, and pixel intensity with pathologic scores and estimated GFR. The PSV, EDV, and pixel intensity are valuable in determining moderate to severe CKD. The value of shear wave velocity in ...

  2. Satellite remote sensing for estimating leaf area index, FPAR and primary production. A literature review

    International Nuclear Information System (INIS)

    Boresjoe Bronge, Laine

    2004-03-01

    Land vegetation is a critical component of several biogeochemical cycles that have become the focus of concerted international research effort. Most ecosystem productivity models, carbon budget models, and global models of climate, hydrology and biogeochemistry require vegetation parameters to calculate land surface photosynthesis, evapotranspiration and net primary production. Therefore, accurate estimates of vegetation parameters are increasingly important in the carbon cycle, the energy balance and in environmental impact assessment studies. The possibility of quantitatively estimating vegetation parameters of importance in this context using satellite data has been explored by numerous papers dealing with the subject. This report gives a summary of the present status and applicability of satellite remote sensing for estimating vegetation productivity by using vegetation index for calculating leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR). Some possible approaches for use of satellite data for estimating LAI, FPAR and net primary production (NPP) on a local scale are suggested. Recommendations for continued work in the Forsmark and Oskarshamn investigation areas, where vegetation data and NDVI-images based on satellite data have been produced, are also given

  3. Satellite remote sensing for estimating leaf area index, FPAR and primary production. A literature review

    Energy Technology Data Exchange (ETDEWEB)

    Boresjoe Bronge, Laine [SwedPower AB, Stockholm (Sweden)

    2004-03-01

    Land vegetation is a critical component of several biogeochemical cycles that have become the focus of concerted international research effort. Most ecosystem productivity models, carbon budget models, and global models of climate, hydrology and biogeochemistry require vegetation parameters to calculate land surface photosynthesis, evapotranspiration and net primary production. Therefore, accurate estimates of vegetation parameters are increasingly important in the carbon cycle, the energy balance and in environmental impact assessment studies. The possibility of quantitatively estimating vegetation parameters of importance in this context using satellite data has been explored by numerous papers dealing with the subject. This report gives a summary of the present status and applicability of satellite remote sensing for estimating vegetation productivity by using vegetation index for calculating leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR). Some possible approaches for use of satellite data for estimating LAI, FPAR and net primary production (NPP) on a local scale are suggested. Recommendations for continued work in the Forsmark and Oskarshamn investigation areas, where vegetation data and NDVI-images based on satellite data have been produced, are also given.

  4. Whether and How Much to Give

    DEFF Research Database (Denmark)

    Petrovski, Erik

    This study evaluates whether factors known to foster charitable giving have a uniform influence on both (1) the decision to give and (2) the decision of how much to give. I establish that these two decisions are independent by dismissing the widely used Tobit model, which assumes a single decision...

  5. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    Science.gov (United States)

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak, in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, the importance of lag time in E. coli O157:H7 growth models for leafy greens, and validation of the importance of cross-contamination during the washing process.
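
    The Monte Carlo structure described here is easy to reproduce outside @RISK; a minimal Python sketch follows, using the abstract's starting level, prevalence, and growth bound, and with the storage-time distribution and the uniform growth draw as loudly illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                                  # simulated servings

    prevalence = 0.001                           # 0.1% of servings contaminated
    log_c0 = -1.0                                # initial level, log10 CFU/g
    serving_g = 85.0                             # assumed serving size

    contaminated = rng.random(n) < prevalence
    storage_days = rng.uniform(0, 5, n)          # illustrative storage times
    # Growth up to 1 log10 CFU/day under temperature abuse (the abstract's
    # upper bound), drawn uniformly here as a stand-in for a
    # temperature-dependent growth model.
    growth = rng.uniform(0, 1.0, n) * storage_days

    log_cfu_g = np.where(contaminated, log_c0 + growth, -np.inf)
    cells_per_serving = 10.0 ** log_cfu_g * serving_g

    print("mean cells/serving among contaminated servings:",
          cells_per_serving[contaminated].mean())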

  6. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2014-12-01

    Full Text Available The high sensitivity and advanced onboard calibration on the Visible Infrared Imaging Radiometer Suite (VIIRS Day/Night Band (DNB enables accurate measurements of low light radiances which leads to enhanced quantitative applications at night. The finer spatial resolution of DNB also allows users to examine social economic activities at urban scales. Given the growing interest in the use of the DNB data, there is a pressing need for better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low light calibration accuracy was previously estimated at a moderate 15% using extended sources while the long-term stability has yet to be characterized. There are also several science related questions to be answered, for example, how the Earth’s atmosphere and surface variability contribute to the stability of the DNB measured radiances; how to separate them from instrument calibration stability; whether or not SI (International System of Units traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; furthermore, whether or not such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be

  7. Quantitative evaluation of spatial scale of carrier trapping at grain boundary by GHz-microwave dielectric loss spectroscopy

    Science.gov (United States)

    Choi, W.; Tsutsui, Y.; Miyakai, T.; Sakurai, T.; Seki, S.

    2017-11-01

    Charge carrier mobility is an important primary parameter for electronic conductive materials, and the intrinsic limit of the mobility is hardly accessible by conventional direct-current evaluation methods. In the present study, the intra-grain hole mobility of pentacene thin films was estimated quantitatively using microwave-based dielectric loss spectroscopy (time-resolved microwave conductivity measurement) probing the alternating-current local motion of charge carriers. Metal-insulator-semiconductor devices were prepared with different insulating polymers or substrate temperatures upon vacuum deposition of the pentacene layer, affording four different grain-size conditions of the pentacene layers. Under the condition where the local motion was determined by interfacial traps at the pentacene grain boundaries (grain-grain interfaces), the observed hole mobilities were plotted against the grain sizes, giving an excellent correlation successfully fit by a parabolic function representative of the border length. Consequently, the intra-grain mobility and trap-release time of holes were estimated as 15 cm² V⁻¹ s⁻¹ and 9.4 ps, respectively.

  8. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on security mechanisms, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN controllers: POX, OpenDaylight, Floodlight, and Ryu. The test, which shows outcomes similar to traditional qualitative analysis, demonstrates that with our approach we can obtain specific security values for different controllers and present more accurate results.

  9. Collective animal behavior from Bayesian estimation and probability matching.

    Directory of Open Access Journals (Sweden)

    Alfonso Pérez-Escudero

    2011-11-01

    Full Text Available Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is based mainly on empirical fits to observations, with less emphasis on first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probabilistic matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform, taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model yields very simple rules of interaction in animal collectives that depend only on two types of reliability parameters: one that each animal assigns to the other animals, and another given by the quality of the non-social information. We test our model by deriving theoretically a rich set of collective decision patterns observed in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows better contact with other fields such as foraging, mate selection, neurobiology, and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.
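
    A minimal sketch of the two-stage decision rule (Bayesian update from social and non-social information, then probability matching); the per-animal reliability weight and the odds-update form are simplifying assumptions, not the paper's exact formulation:

    import numpy as np

    rng = np.random.default_rng(1)

    def choose_behavior(p_personal, n_a, n_b, reliability=0.8):
        """One animal choosing between behaviors A and B.
        p_personal : prior (non-social) probability that A is best
        n_a, n_b   : numbers of other animals seen performing A and B
        reliability: weight assigned to each observed animal (assumed)
        Stage 1: Bayesian update -- each observed animal multiplies the
        odds for its behavior by a fixed likelihood ratio.
        Stage 2: probability matching -- choose A with the posterior."""
        odds = p_personal / (1.0 - p_personal)
        lr = reliability / (1.0 - reliability)    # per-animal likelihood ratio
        odds *= lr ** (n_a - n_b)                 # social evidence
        p_a = odds / (1.0 + odds)
        return ("A" if rng.random() < p_a else "B"), p_a

    behavior, p_a = choose_behavior(p_personal=0.5, n_a=3, n_b=1)
    print(behavior, round(p_a, 3))                # p_a ~ 0.941 here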

  10. Quantitative estimation of the right ventricular overloading by thallium-201 myocardial scintigraphy

    International Nuclear Information System (INIS)

    Owada, Kenji; Machii, Kazuo; Tsukahara, Yasunori

    1982-01-01

    Thallium-201 myocardial scintigraphy was performed on 55 patients with various types of right ventricular overloading. The right ventricular (RV) free wall was visualized in 39 of the 55 patients (71%). The mean values of right ventricular systolic pressure (RVSP) and pulmonary artery mean pressure (PAMP) in the visualized cases (uptakers) were 54.6 ± 24.1 and 30.5 ± 15.3 mmHg, respectively. These values were significantly higher than those of the non-visualized cases (non-uptakers). There were 12 RVSP-"normotensive" uptakers and 15 PAMP-"normotensive" uptakers. The RV free wall images were classified into three types according to their morphological features: type I was seen predominantly in cases of RV pressure overloading, type II in RV volume overloading, and type III in combined ventricular overloading. RVSP in the type III group was significantly higher than in the other two groups. The ratio of radioactivity in the RV free wall to that in the interventricular septum (IVS), the RV/IVS uptake ratio, was calculated using left anterior oblique (LAO) view images. The RV/IVS uptake ratio correlated closely with RVSP and PAMP (r = 0.88 and 0.82, respectively). Within each group of RV free wall images, there were also close correlations between the RV/IVS uptake ratio and both RVSP and PAMP. Our results indicate that the RV/IVS uptake ratio can be used as a parameter for the semi-quantitative estimation of right ventricular overloading. (author)

  11. Giving behavior of millionaires.

    Science.gov (United States)

    Smeets, Paul; Bauer, Rob; Gneezy, Uri

    2015-08-25

    This paper studies conditions influencing the generosity of wealthy people. We conduct incentivized experiments with individuals who have at least €1 million in their bank account. The results show that millionaires are more generous toward low-income individuals in a giving situation when the other participant has no power, than in a strategic setting, where the other participant can punish unfair behavior. Moreover, the level of giving by millionaires is higher than in any other previous study. Our findings have important implications for charities and financial institutions that deal with wealthy individuals.

  12. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  13. Gender Differences In Giving Directions: A Case Study Of English Literature Students At Binus University

    Directory of Open Access Journals (Sweden)

    Tjoo Hong Sing

    2011-05-01

    Full Text Available Many researchers have reported differences in the way males and females give directions, especially in spatial tasks (cardinal directions, topography, mileage, buildings, right/left markers) (e.g., Lawton, 2001; Dabbs et al., 1998). This thesis investigates the differences between the genders in giving directions. The respondents were 25 female and 25 male fifth-semester Binus University students majoring in English Literature. The respondents described a route from Binus's Anggrek Campus to Senayan City. The study was conducted with qualitative and quantitative methods. From the data analysis, the writer discovered that gender affects the choice of key words in explaining directions: there were differences in the key words chosen by females and males. One difference is that women use more than twice as many spatial references as men do. In terms of verbal abilities, it was confirmed that females use longer explanations. However, for other aspects, such as serial orientation and maintenance words, the results were inconclusive.

  14. Whether and How Much to Give

    DEFF Research Database (Denmark)

    Petrovski, Erik

    2017-01-01

    Charitable giving involves two seemingly distinct decisions: whether to give and how much to give. However, many researchers methodologically assume that these decisions are one and the same. The present study supports the argument that this is an incorrect assumption that is likely to generate misleading conclusions, in part since the second decision is much more financial in nature than the first. The argument that charitable giving entails two distinct decisions is validated by empirically dismissing the prevailing Tobit model, which assumes a single decision, in favor of less restrictive two-stage approaches: Cragg's model and the Heckman model. Most importantly, it is shown that only by adopting a two-stage approach may it be uncovered that common determinants of charitable giving such as income and gender affect the two decisions at hand very differently. Data comes from a high-quality 2012 Danish...

  15. Round robin: Quantitative lateral resolution of PHI XPS microprobes Quantum 2000/Quantera SXM

    International Nuclear Information System (INIS)

    Scheithauer, Uwe; Kolb, Max; Kip, Gerard A.M.; Naburgh, Emile; Snijders, J.H.M.

    2016-01-01

    Highlights: • The quantitative lateral resolution of 7 PHI XPS microprobes has been estimated in a round robin. • An ellipsoidally shaped quartz crystal monochromatizes the Alkα radiation and refocuses it from the Al anode onto the sample surface. • The long tail contributions of the X-ray beam intensity distribution were measured using a new and innovative approach. • The quantitative lateral resolution has a significantly larger value than the nominal X-ray beam diameter. • The quantitative lateral resolution follows a trend in time: the newer the monochromator, the better the quantitative lateral resolution. - Abstract: The quantitative lateral resolution is a reliable measure of the quality of an XPS microprobe equipped with a focused X-ray beam. It describes the long tail contributions of the X-ray beam intensity distribution. Knowledge of these long tail contributions is essential when judging the origin of signals in XPS spectra recorded on small-sized features. In this round robin test, the quantitative lateral resolution of 7 PHI XPS microprobes has been estimated. As expected, the quantitative lateral resolution has significantly larger values than the nominal X-ray beam diameter. The estimated values of the quantitative lateral resolution follow a trend in time: the newer the monochromator of an XPS microprobe, the better the quantitative lateral resolution.

  16. Round robin: Quantitative lateral resolution of PHI XPS microprobes Quantum 2000/Quantera SXM

    Energy Technology Data Exchange (ETDEWEB)

    Scheithauer, Uwe, E-mail: scht.uhg@googlemail.com [82008 Unterhaching (Germany); Kolb, Max, E-mail: max.kolb@airbus.com [Airbus Group Innovations, TX2, 81663 Munich (Germany); Kip, Gerard A.M., E-mail: G.A.M.Kip@utwente.nl [Universiteit Twente, MESA+ Nanolab, Postbus 217, 7500AE Enschede (Netherlands); Naburgh, Emile, E-mail: e.p.naburgh@philips.com [Materials Analysis, Philips Innovation Services, High Tech Campus 11, 5656 AE Eindhoven (Netherlands); Snijders, J.H.M., E-mail: j.h.m.snijders@philips.com [Materials Analysis, Philips Innovation Services, High Tech Campus 11, 5656 AE Eindhoven (Netherlands)

    2016-07-15

    Highlights: • The quantitative lateral resolution of 7 PHI XPS microprobes has been estimated in a round robin. • An ellipsoidally shaped quartz crystal monochromatizes the Alkα radiation and refocuses it from the Al anode onto the sample surface. • The long tail contributions of the X-ray beam intensity distribution were measured using a new and innovative approach. • The quantitative lateral resolution has a significantly larger value than the nominal X-ray beam diameter. • The quantitative lateral resolution follows a trend in time: the newer the monochromator, the better the quantitative lateral resolution. - Abstract: The quantitative lateral resolution is a reliable measure of the quality of an XPS microprobe equipped with a focused X-ray beam. It describes the long tail contributions of the X-ray beam intensity distribution. Knowledge of these long tail contributions is essential when judging the origin of signals in XPS spectra recorded on small-sized features. In this round robin test, the quantitative lateral resolution of 7 PHI XPS microprobes has been estimated. As expected, the quantitative lateral resolution has significantly larger values than the nominal X-ray beam diameter. The estimated values of the quantitative lateral resolution follow a trend in time: the newer the monochromator of an XPS microprobe, the better the quantitative lateral resolution.

  17. An Experimental Study for Quantitative Estimation of Rebar Corrosion in Concrete Using Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Md Istiaque Hasan

    2016-01-01

    Full Text Available Corrosion of steel rebar in reinforced concrete is one of the most important durability issues in the service life of a structure. In this paper, an investigation is conducted to find the relationship between the amount of reinforced concrete corrosion and the GPR maximum positive amplitude. Accelerated corrosion was simulated in the lab by impressing direct current into steel rebars submerged in a 5% salt water solution. The amount of corrosion was varied among the rebars, with levels of mass loss ranging from 0% to 45%. The corroded rebars were then placed into three different oil emulsion tanks having different dielectric properties similar to concrete. The maximum amplitudes from the corroded bars were recorded. A linear relationship was observed between the maximum positive amplitudes and the amount of corrosion in terms of percentage loss of area. It is proposed that this relationship between GPR maximum amplitude and amount of corrosion can be used as the basis of an NDE technique for quantitative estimation of corrosion.
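
    A minimal sketch of how such a linear calibration could be applied (the paired values below are illustrative assumptions, not the paper's measurements):

    import numpy as np

    # Illustrative paired observations:
    mass_loss_pct = np.array([0, 10, 20, 30, 45])             # rebar mass loss, %
    max_amplitude = np.array([1.00, 0.91, 0.83, 0.74, 0.60])  # normalized amplitude

    # Least-squares line: amplitude = slope * mass_loss + intercept
    slope, intercept = np.polyfit(mass_loss_pct, max_amplitude, 1)

    # Invert the calibration to estimate corrosion from a new measurement
    def estimate_mass_loss(amplitude):
        return (amplitude - intercept) / slope

    print(f"{estimate_mass_loss(0.78):.1f} % mass loss")      # ~25% here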

  18. Giving Back, Moving Forward

    Directory of Open Access Journals (Sweden)

    Louise Fortmann

    2014-07-01

    Full Text Available While reflecting on her own experience with giving back in Zimbabwe, Fortmann considers how the idea of “giving back” sits at the intersection of feminist theory, participatory research, and the democratization of science. From feminist theory arises the question of how to reciprocate to those who have contributed to our research. The participatory research and democratization of science literature push us to recognize and consider the collaborative nature of our research. Fortmann concludes by identifying three categories of reciprocity in research: material, intellectual, and personal. Sharing must occur, regardless of the kind of research taking place.

  19. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    International Nuclear Information System (INIS)

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-01-01

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images from positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between the 2D projection images of MRI and cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET images, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. This method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young group. The mean value and the variance of the cerebral blood flow for the young were greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)

  20. Quantitative analysis of fission products by {gamma} spectrography; Analyse quantitative des produits de fission par spectrographie {gamma}

    Energy Technology Data Exchange (ETDEWEB)

    Malet, G

    1962-07-01

    The activity of the fission products present in treated solutions of irradiated fuels is given as a function of the cooling time and of the irradiation time. The variation of the ratio ({sup 144}Ce + {sup 144}Pr activity / {sup 137}Cs activity) as a function of these same parameters is also studied. From these results a method is deduced giving the 'age' of the solution analyzed. By {gamma}-scintillation spectrography it was possible to estimate the following elements individually: {sup 141}Ce, {sup 144}Ce + {sup 144}Pr, {sup 103}Ru, {sup 106}Ru + {sup 106}Rh, {sup 137}Cs, {sup 95}Zr + {sup 95}Nb. Yield curves are given for the case of a single emitter. Of the various existing methods, that of least squares was used for the quantitative analysis of the aforementioned fission products. The accuracy attained varies from 3 to 10%. (author)
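
    The 'age' method rests on the different decay rates of the two species; a minimal sketch using standard half-lives ({sup 144}Ce ≈ 284.9 d, {sup 137}Cs ≈ 30.07 y) and an assumed end-of-irradiation ratio r0 (which in practice depends on the irradiation history):

    import numpy as np

    T_CE144 = 284.9 / 365.25      # half-life of 144Ce in years
    T_CS137 = 30.07               # half-life of 137Cs in years
    LAM_CE = np.log(2) / T_CE144  # decay constants (1/yr)
    LAM_CS = np.log(2) / T_CS137

    def activity_ratio(t_years, r0):
        """(144Ce+144Pr)/137Cs activity ratio after t years of cooling;
        144Pr follows its 144Ce parent in secular equilibrium."""
        return r0 * np.exp(-(LAM_CE - LAM_CS) * t_years)

    def age_from_ratio(r, r0):
        """Invert the ratio to obtain the cooling time ('age') in years."""
        return np.log(r0 / r) / (LAM_CE - LAM_CS)

    print(age_from_ratio(r=1.0, r0=10.0))   # ~2.7 years for a 10x-decayed ratio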

  1. QUANTITATIVE ESTIMATION OF VOLUMETRIC ICE CONTENT IN FROZEN GROUND BY DIPOLE ELECTROMAGNETIC PROFILING METHOD

    Directory of Open Access Journals (Sweden)

    L. G. Neradovskiy

    2018-01-01

    Full Text Available Volumetric estimation of the ice content in frozen soils is known as one of the main problems in engineering geocryology and permafrost geophysics. A new way to use the known method of dipole electromagnetic profiling for quantitative estimation of the volumetric ice content in frozen soils is discussed. Investigation of the foundation of a railroad in Yakutia (i.e., in the permafrost zone) is used as an example of this new approach. Unlike the conventional way, in which permafrost is investigated through its resistivity and the construction of geo-electrical cross-sections, the new approach studies the dynamics of the attenuation process in the layer of the annual heat cycle in the field of a high-frequency vertical magnetic dipole. This task is simplified if, of all the characteristics of the polarization ellipse, only the most easily measured one, the vertical component of the dipole field, is measured. The collected measurements were used to analyze the computational errors of the average values of the volumetric ice content derived from the amplitude attenuation of the vertical component of the dipole field. Note that the volumetric ice content is very important for construction. It is shown that the relative error of computation of this characteristic of a frozen soil usually does not exceed 20% if the work is performed by the above procedure using the key-site methodology. This level of accuracy meets the requirements of design-and-survey work for quick, inexpensive, and environmentally friendly zoning of built-up remote and sparsely populated territories of the Russian permafrost zone according to the degree of ice content in the frozen foundations of engineering constructions.

  2. Estimating raw material equivalents on a macro-level: comparison of multi-regional input-output analysis and hybrid LCI-IO.

    Science.gov (United States)

    Schoer, Karl; Wood, Richard; Arto, Iñaki; Weinzettel, Jan

    2013-12-17

    The mass of material consumed by a population has become a useful proxy for measuring environmental pressure. The "raw material equivalents" (RME) metric of material consumption addresses the issue of including the full supply chain (including imports) when calculating national or product-level material impacts. The RME calculation suffers from data availability problems, however, as quantitative data on production practices along the full supply chain (in different regions) are required. Hence, the RME is currently estimated by three main approaches: (1) assuming domestic technology in foreign economies, (2) utilizing region-specific life-cycle inventories (in a hybrid framework), and (3) utilizing multi-regional input-output (MRIO) analysis to explicitly cover all regions of the supply chain. Since the first approach has been shown to give inaccurate results, this paper focuses on the benefits and costs of the latter two approaches. We analyze results from two key (MRIO and hybrid) projects modeling raw material equivalents, adjusting the models in a stepwise manner in order to quantify the effects of individual conceptual elements. We attempt to isolate the MRIO gap, which denotes the quantitative impact of calculating the RME of imports by an MRIO approach instead of the hybrid model, focusing on the RME of EU external trade imports. While the models give quantitatively similar results, differences become more pronounced when tracking more detailed material flows. We assess the advantages and disadvantages of the two approaches and look forward to ways to further harmonize data and approaches.
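
    At the core of the MRIO approach is the environmentally extended Leontief model, RME = e (I - A)^(-1) y; a minimal two-sector sketch with toy numbers (purely illustrative, not from either project):

    import numpy as np

    # Toy 2-sector environmentally extended input-output system
    A = np.array([[0.10, 0.30],      # inter-industry technical coefficients
                  [0.20, 0.05]])
    y = np.array([100.0, 50.0])      # final demand (e.g., M EUR)
    e = np.array([2.0, 0.5])         # direct extraction per unit output (t/M EUR)

    L = np.linalg.inv(np.eye(2) - A) # Leontief inverse (I - A)^-1
    x = L @ y                        # total output required by final demand
    rme = e @ L @ y                  # raw material equivalents embodied in y

    print(f"total output: {x}, RME: {rme:.1f} t")   # RME ~ 317.6 t here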

  3. Qualitative and quantitative cost estimation : a methodology analysis

    NARCIS (Netherlands)

    Aram, S.; Eastman, C.; Beetz, J.; Issa, R.; Flood, I.

    2014-01-01

    This paper reports on the first part of ongoing research with the goal of designing a framework and a knowledge-based system for 3D parametric model-based quantity take-off and cost estimation in the Architecture, Engineering and Construction (AEC) industry. The authors have studied and analyzed

  4. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    Science.gov (United States)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative depth analysis of subsurface anomalies with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and thus contribute a finite depth resolution. The spectral zooming provided by the chirp z-transform affords enhanced frequency resolution, which can further improve the depth resolution to axially explore the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide the closest possible estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
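
    A minimal sketch of the spectral zooming step, assuming SciPy ≥ 1.8 provides scipy.signal.czt; the synthetic signal and band edges are illustrative, not the manuscript's data:

    import numpy as np
    from scipy.signal import czt      # chirp z-transform, SciPy >= 1.8

    fs = 100.0                                # sampling rate, Hz
    t = np.arange(0, 30, 1 / fs)              # 30 s synthetic thermal response
    x = np.cos(2 * np.pi * 0.21 * t) + 0.5 * np.cos(2 * np.pi * 0.27 * t)

    # Zoom the spectrum into the 0.15-0.30 Hz band with m points:
    f1, f2, m = 0.15, 0.30, 512
    w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # ratio between zoom bins
    a = np.exp(2j * np.pi * f1 / fs)                 # start point on unit circle
    X = czt(x, m=m, w=w, a=a)

    freqs = f1 + (f2 - f1) * np.arange(m) / m
    # Peak near 0.21 Hz on a far denser grid than the FFT's fs/N bin spacing:
    print(freqs[np.argmax(np.abs(X))])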

  5. Reproducibility of CSF quantitative culture methods for estimating rate of clearance in cryptococcal meningitis.

    Science.gov (United States)

    Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R

    2016-05-01

    Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) the St. George's method, using a 100 mcl input volume of CSF with five 1:10 serial dilutions; (2) the AIDS Clinical Trials Group (ACTG) method, using 1000, 100, and 10 mcl input volumes and two 1:100 dilutions with 100 and 10 mcl input volume per dilution on seven agar plates; and (3) a 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between the St. George and ACTG methods (P = .09) but did differ between the St. George and 10 mcl loop methods; correlation between methods was high (r ≥ 0.88). For detecting sterility, the ACTG method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG method had occasional (~10%) difficulties in quantification due to colony clumping. For the CSF clearance rate, the St. George and ACTG methods did not differ overall (mean difference -0.05 ± 0.07 log10 CFU/ml/day; P = .14) on a group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies. © The Author 2016. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
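
    The underlying arithmetic of quantitative culture (back-calculating CFU/mL from a colony count and fitting the clearance slope used as the trial endpoint) can be sketched as follows; all numbers are illustrative:

    import numpy as np

    def cfu_per_ml(colonies, plated_volume_ml, dilution_factor):
        """Back-calculate CSF fungal burden from a plate count."""
        return colonies / plated_volume_ml * dilution_factor

    # e.g., 52 colonies from 0.1 mL of a 1:100 dilution
    print(cfu_per_ml(52, 0.1, 100))            # 52,000 CFU/mL

    def clearance_rate(days, log10_cfu):
        """Slope of log10 CFU/mL versus time (log10 CFU/mL/day)."""
        slope, _ = np.polyfit(days, log10_cfu, 1)
        return slope

    print(clearance_rate([0, 3, 7, 10, 14],
                         [5.7, 4.9, 3.6, 2.8, 1.9]))   # ~ -0.27 per day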

  6. Balance between qualitative and quantitative verification methods

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2012-01-01

    The amount of inspection effort for verification of declared nuclear material needs to be optimized in situations where both qualitative and quantitative measures are applied. Game theory was used to investigate the relation between detection probability and deterrence of diversion. Payoffs used in the theory were quantified for cases of conventional safeguards and integrated safeguards by using AHP, the Analytical Hierarchy Process. It then became possible to estimate the detection probability under integrated safeguards that has deterrence capability equivalent to that of the detection probability under conventional safeguards. In addition, the distribution of inspection effort between qualitative and quantitative measures was estimated. Although AHP involves some ambiguity in quantifying qualitative factors, its application to optimization in safeguards is useful for reconsidering detection probabilities under integrated safeguards. (author)

  7. Human Pose Estimation and Activity Recognition from Multi-View Videos

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Tran, Cuong; Trivedi, Mohan

    2012-01-01

    Applications include human–computer interaction (HCI), assisted living, gesture-based interactive games, intelligent driver assistance systems, movies, 3D TV and animation, physical therapy, autonomous mental development, smart environments, sport motion analysis, video surveillance, and video annotation. Next, we review and categorize recent approaches which have been proposed to comply with these requirements. We report a comparison of the most promising methods for multi-view human action recognition using two publicly available datasets: the INRIA Xmas Motion Acquisition Sequences (IXMAS) Multi-View Human Action Dataset and the i3DPost Multi-View Human Action and Interaction Dataset. To compare the proposed methods, we give a qualitative assessment of methods which cannot be compared quantitatively, and analyze some prominent 3D pose estimation techniques for applications where not only the performed action needs to be identified but a more...

  8. Quantitative estimation of land surface evapotranspiration in Taiwan based on MODIS data

    Directory of Open Access Journals (Sweden)

    Che-sheng Zhan

    2011-09-01

    Full Text Available Land surface evapotranspiration (ET) determines the local and regional water-heat balances. Accurate estimation of regional surface ET provides a scientific basis for the formulation and implementation of water conservation programs. This study set up a table of momentum roughness length and zero-plane displacement related to land cover, together with an empirical relationship between land surface temperature and air temperature, and a revised quantitative remote sensing ET model, the SEBS-Taiwan model, was developed. Based on Moderate Resolution Imaging Spectroradiometer (MODIS) data, SEBS-Taiwan was used to simulate and evaluate typical actual daily ET values in different seasons of 2002 and 2003 in Taiwan. SEBS-Taiwan generally performed well: the simulated daily ET values matched the observed values satisfactorily. The results indicate that the net regional solar radiation, evaporation ratio, and surface ET values for the whole area of Taiwan are larger in summer than in spring, and larger in autumn than in winter. The results also show that the regional average daily ET values of 2002 are slightly higher than those of 2003. Through analysis of the ET values from different types of land cover, we found that forest has the largest ET value, while water areas, bare land, and urban areas have the lowest ET values. Generally, the northern Taiwan area, including Ilan County, Nantou County, and Hualien County, has higher ET values, while other areas, such as Chiayi, Taichung, and Tainan, have lower ET values.

  9. GIVING AND RECEIVING CONSTRUCTIVE FEEDBACK

    Directory of Open Access Journals (Sweden)

    Ірина Олійник

    2015-05-01

    Full Text Available The article scrutinizes the notion of feedback applicable in classrooms where team teaching is provided. The experience of giving and receiving feedback has been a good practice in cooperation between a U.S. Peace Corps volunteer and a Ukrainian counterpart. Giving and receiving feedback is an effective means of classroom observation that provides better insight into the process of teaching a foreign language. The article discusses the stages of feedback and explicates the notion of sharing experience between two teachers working simultaneously in the same classroom. The guidelines for giving and receiving feedback have been provided as well as the most commonly used vocabulary items have been listed. It has been proved that mutual feedback leads to improving teaching methods and using various teaching styles and techniques.

  10. Disdrometer-based C-Band Radar Quantitative Precipitation Estimation (QPE) in a highly complex terrain region in tropical Colombia.

    Science.gov (United States)

    Sepúlveda, J.; Hoyos Ortiz, C. D.

    2017-12-01

    An adequate quantification of precipitation over land is critical for many societal applications including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. The use of rain gauges, a traditional and excellent method for estimating the volume of liquid water during a particular precipitation event, does not fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. On the other hand, the weather radar, an active remote sensing instrument, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-Band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared to traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimations using the multi-stage methodology are realistic
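
    The classical single-stage baseline that such studies adapt is a Z–R power law fitted to paired radar and disdrometer observations. The sketch below, on synthetic data rather than the Colombian network's, shows the fit and the inversion used for QPE.

    ```python
    # Hedged sketch of the classical Z = a * R**b power-law fit between radar
    # reflectivity and disdrometer rain rate. The data arrays are synthetic
    # placeholders, not the study's observations.
    import numpy as np

    rng = np.random.default_rng(0)
    rain_rate = rng.uniform(0.5, 50.0, 200)              # mm/h from disdrometers
    z_true = 200.0 * rain_rate ** 1.6                    # Marshall-Palmer-like truth
    z_obs = z_true * rng.lognormal(0.0, 0.1, 200)        # noisy radar Z (mm^6/m^3)

    # Linearize: log Z = log a + b log R, then ordinary least squares.
    b, log_a = np.polyfit(np.log(rain_rate), np.log(z_obs), 1)
    a = np.exp(log_a)
    print(f"fitted Z-R: Z = {a:.1f} * R^{b:.2f}")

    # Inverting for QPE: R = (Z / a)**(1/b)
    r_est = (z_obs / a) ** (1.0 / b)
    print(f"RMSE vs disdrometer R: {np.sqrt(np.mean((r_est - rain_rate)**2)):.2f} mm/h")
    ```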

  11. Quantitative estimation of cholinesterase-specific drug metabolism of carbamate inhibitors provided by the analysis of the area under the inhibition-time curve.

    Science.gov (United States)

    Zhou, Huimin; Xiao, Qiaoling; Tan, Wen; Zhan, Yiyi; Pistolozzi, Marco

    2017-09-10

    Several molecules containing carbamate groups are metabolized by cholinesterases. This metabolism includes a time-dependent catalytic step which temporary inhibits the enzymes. In this paper we demonstrate that the analysis of the area under the inhibition versus time curve (AUIC) can be used to obtain a quantitative estimation of the amount of carbamate metabolized by the enzyme. (R)-bambuterol monocarbamate and plasma butyrylcholinesterase were used as model carbamate-cholinesterase system. The inhibition of different concentrations of the enzyme was monitored for 5h upon incubation with different concentrations of carbamate and the resulting AUICs were analyzed. The amount of carbamate metabolized could be estimated with cholinesterases in a selected compartment in which the cholinesterase is confined (e.g. in vitro solutions, tissues or body fluids), either in vitro or in vivo. Copyright © 2017 Elsevier B.V. All rights reserved.
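
    A minimal sketch of the AUIC computation follows: the inhibition-time curve is integrated with the trapezoidal rule, and the metabolized amount is taken to scale with the area. The curve shape, enzyme amount, and turnover rate are invented placeholders, not the bambuterol/butyrylcholinesterase data.

    ```python
    # Minimal sketch of the AUIC idea: integrate the inhibition-versus-time
    # curve with the trapezoidal rule. All values below are made up.
    import numpy as np

    t = np.linspace(0.0, 5.0, 51)              # h, monitoring window as in the study
    inhibition = 0.8 * np.exp(-0.4 * t)        # fraction of enzyme inhibited (assumed)

    auic = np.trapz(inhibition, t)             # area under inhibition-time curve (h)
    print(f"AUIC = {auic:.3f} h")

    # Under the paper's premise, the metabolized carbamate scales with the AUIC
    # and the enzyme amount in the compartment (illustrative scaling only):
    enzyme_umol = 0.05                          # assumed enzyme content
    turnover_per_h = 2.0                        # hypothetical turnover rate
    print(f"metabolized ~ {turnover_per_h * enzyme_umol * auic:.3f} umol")
    ```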

  12. Sensitivity analysis of two models of radionuclide migration and transport in the geosphere

    International Nuclear Information System (INIS)

    Torres Berdeguez, M. B.; Gil Castillo, R.; Peralta Vidal, J.L.

    1998-01-01

    A sensitivity analysis was applied to two models: the first, a comprehensive model of the near field (source term); the second, a simple model of radionuclide migration and transport in the geosphere. The study was carried out by varying the parameter values one at a time and observing the resulting changes in the model output. The aim of the analysis is to determine the parameter that most influences the variation of the concentration. The statistical technique of regression was employed in the study; this method is used to analyze the dependence between a dependent variable and one or more independent variables
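
    A hedged sketch of this one-at-a-time variation with regression screening is given below; the transport model is a stand-in function with assumed parameter ranges, not the near-field or geosphere codes of the study.

    ```python
    # Sketch of parameter variation plus regression screening. The "model" is a
    # stand-in response function, not the study's transport codes.
    import numpy as np

    def model(k_d, velocity, dispersivity):
        # placeholder response: outlet concentration (arbitrary units)
        return np.exp(-k_d) * velocity / (1.0 + dispersivity)

    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([rng.uniform(0.1, 2.0, n),    # sorption coefficient k_d
                         rng.uniform(0.5, 5.0, n),    # pore velocity
                         rng.uniform(0.1, 1.0, n)])   # dispersivity
    y = model(X[:, 0], X[:, 1], X[:, 2])

    # Standardized regression coefficients rank parameter influence.
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    for name, b in zip(["k_d", "velocity", "dispersivity"], beta):
        print(f"{name}: standardized coefficient = {b:+.2f}")
    ```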

  13. How to Give a Good Talk?

    OpenAIRE

    Legout , Arnaud

    2013-01-01

    Why should you give great talks? How to make great slides? How to give a talk? How to make good presentations?; 3rd cycle; Warning: download the powerpoint version to get animations. Animated slides in the PDF version may look cluttered.

  14. Parameter determination for quantitative PIXE analysis using genetic algorithms

    International Nuclear Information System (INIS)

    Aspiazu, J.; Belmont-Moreno, E.

    1996-01-01

    For biological and environmental samples, the PIXE technique is particularly advantageous for elemental analysis, but quantitative analysis implies complex calculations that require knowledge of more than a dozen parameters. Using a genetic algorithm, the authors give here an account of the procedure to obtain the best values of the parameters necessary to fit the efficiency of an X-ray detector. The values of some variables involved in quantitative PIXE analysis were manipulated in a way similar to how genetic information is treated in a biological process. The authors ran the algorithm until it reproduced, within the confidence interval, the elemental concentrations corresponding to a reference material
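
    The sketch below shows a toy genetic algorithm of this kind, with selection and mutation only: a parameter vector is evolved until a stand-in efficiency model reproduces assumed reference values. It illustrates the approach, not the authors' code.

    ```python
    # Toy genetic algorithm in the spirit of the abstract: evolve a parameter
    # vector to minimize the misfit to assumed reference values. Two parameters
    # only; selection and mutation, no crossover.
    import numpy as np

    rng = np.random.default_rng(2)
    target = np.array([1.3, 0.7])                       # "true" efficiency parameters

    def fitness(p):
        return -np.sum((p - target) ** 2)               # higher is better

    pop = rng.uniform(0.0, 2.0, (40, 2))                # initial population
    for generation in range(100):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-20:]]         # selection: keep best half
        children = parents[rng.integers(0, 20, 20)].copy()
        children += rng.normal(0.0, 0.05, children.shape)  # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(p) for p in pop])]
    print(f"best parameters: {best.round(3)} (target {target})")
    ```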

  15. Quantitative uranium speciation with U M{sub 4,5}-edge HERFD absorption spectra

    Energy Technology Data Exchange (ETDEWEB)

    Kvashnina, Kristina O.; Rossberg, Andre [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Molecular Structures

    2017-06-01

    This report gives a brief description of the quantitative uranium speciation performed by iterative transformation factor analysis (ITFA) of High Energy Resolution X-ray Fluorescence Detection (HERFD) data collected at the M{sub 4,5} edge.

  16. Prognostic, quantitative histopathologic variables in lobular carcinoma of the breast

    DEFF Research Database (Denmark)

    Ladekarl, M; Sørensen, Flemming Brandt

    1993-01-01

    BACKGROUND: A retrospective investigation of 53 consecutively treated patients with operable lobular carcinoma of the breast, with a median follow-up of 6.6 years, was performed to examine the prognostic value of quantitative histopathologic parameters. METHODS: The measurements were performed...... in routinely processed histologic sections using a simple, unbiased technique for the estimation of the three-dimensional mean nuclear volume (vv(nuc)). In addition, quantitative estimates were obtained of the mitotic index (MI), the nuclear index (NI), the nuclear volume fraction (Vv(nuc/tis)), and the mean...... management of patients with breast cancer....

  17. MathPatch - Raising Retention and Performance in an Intro-geoscience Class by Raising Students' Quantitative Skills

    Science.gov (United States)

    Baer, E. M.; Whittington, C.; Burn, H.

    2008-12-01

    The geological sciences are fundamentally quantitative. However, the diversity of students' mathematical preparation and skills makes the successful use of quantitative concepts difficult in introductory level classes. At Highline Community College, we have implemented a one-credit co-requisite course to give students supplemental instruction for quantitative skills used in the course. The course, formally titled "Quantitative Geology," nicknamed "MathPatch," runs parallel to our introductory Physical Geology course. MathPatch teaches the quantitative skills required for the geology class right before they are needed. Thus, students learn only the skills they need and are given opportunities to apply them immediately. Topics include complex-graph reading, unit conversions, large numbers, scientific notation, scale and measurement, estimation, powers of 10, and other fundamental mathematical concepts used in basic geological concepts. Use of this course over the past 8 years has successfully accomplished the goals of increasing students' quantitative skills, success and retention. Students master the quantitative skills to a greater extent than before the course was implemented, and less time is spent covering basic quantitative skills in the classroom. Because the course supports the use of quantitative skills, the large number of faculty that teach Geology 101 are more comfortable in using quantitative analysis, and indeed see it as an expectation of the course at Highline. Also significant, retention in the geology course has increased substantially, from 75% to 85%. Although successful, challenges persist with requiring MathPatch as a supplementary course. One, we have seen enrollments decrease in Geology 101, which may be the result of adding this co-requisite. Students resist mandatory enrollment in the course, although they are not good at evaluating their own need for the course. The logistics utilizing MathPatch in an evening class with fewer and longer

  18. Motor unit number estimation in the quantitative assessment of severity and progression of motor unit loss in Hirayama disease.

    Science.gov (United States)

    Zheng, Chaojun; Zhu, Yu; Zhu, Dongqing; Lu, Feizhou; Xia, Xinlei; Jiang, Jianyuan; Ma, Xiaosheng

    2017-06-01

    To investigate motor unit number estimation (MUNE) as a method to quantitatively evaluate the severity and progression of motor unit loss in Hirayama disease (HD). Multipoint incremental MUNE was performed bilaterally on both abductor digiti minimi and abductor pollicis brevis muscles in 46 patients with HD and 32 controls, along with handgrip strength examination. MUNE was re-evaluated approximately 1 year after the initial examination in 17 patients with HD. The MUNE values were significantly lower in all the tested muscles in the HD group and correlated with disease duration; re-evaluation showed significant progression of motor unit loss in patients with HD within approximately 1 year, especially in those with disease duration under 4 years. A reduction in the functioning motor units was found in patients with HD compared with that in controls, even in the early asymptomatic stages. Moreover, the motor unit loss in HD progresses gradually as the disease advances. These results provide evidence for the application of MUNE in estimating the reduction of motor units in HD and confirm the validity of MUNE for tracking the progression of HD in a clinical setting. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
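
    The arithmetic behind incremental MUNE is simple: the maximal compound muscle action potential (CMAP) amplitude is divided by the mean single motor unit potential (SMUP) amplitude estimated from incremental responses. The sketch below uses invented amplitudes.

    ```python
    # Sketch of the multipoint incremental MUNE arithmetic: divide the maximal
    # CMAP amplitude by the mean amplitude of single motor unit potentials
    # (SMUPs) collected as all-or-none increments. Amplitudes are invented.
    import numpy as np

    cmap_max_mv = 9.8                                          # maximal CMAP (mV)
    increments_mv = np.array([0.21, 0.18, 0.25, 0.19, 0.23])   # SMUP increments (mV)

    mean_smup = increments_mv.mean()
    mune = cmap_max_mv / mean_smup
    print(f"mean SMUP = {mean_smup:.2f} mV, MUNE ~ {mune:.0f} units")
    ```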

  19. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose.

    Science.gov (United States)

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K; Abaidoo, Robert C; Dalsgaard, Anders; Hald, Tine

    2017-12-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10⁻⁵ virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count. In all scenarios of using different water sources, the application of the fecal indicator conversion ratio underestimated the norovirus disease burden, measured by Disability Adjusted Life Years (DALYs), when compared to results using the genome copies norovirus data. In some cases the difference was >2 orders of magnitude. All scenarios using genome copies met the 10⁻⁴ DALY per person per year benchmark for consumption of vegetables irrigated with wastewater, although these results are considered to be highly conservative risk estimates. The fecal indicator conversion ratio model of stream-water and drain-water sources of wastewater achieved the 10⁻⁶ DALY per person per year threshold, which tends to indicate an underestimation of health risk when compared to using genome copies for estimating the dose. Copyright © 2017 Elsevier B.V. All rights reserved.
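
    A hedged Monte Carlo sketch of the comparison follows: per-serving infection risk from measured genome copies versus from indicator counts scaled by the 1:10⁻⁵ ratio, propagated through an approximate beta-Poisson dose-response. All distributions and parameter values are illustrative assumptions, not the Ghana data.

    ```python
    # Hedged sketch: exposure dose from genome copies vs. dose inferred through
    # a fixed virus-to-indicator ratio, each pushed through an approximate
    # beta-Poisson dose-response. Every number here is an assumption.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    serving_g = 12.0                                     # g vegetables per serving (assumed)

    # measured norovirus genome copies per g (lognormal assumption)
    dose_gc = rng.lognormal(mean=0.5, sigma=1.0, size=n) * serving_g

    # indicator route: E. coli counts per g times the fixed 1e-5 conversion ratio
    ecoli_per_g = rng.lognormal(mean=6.0, sigma=1.0, size=n)
    dose_ratio = ecoli_per_g * 1e-5 * serving_g

    def p_infection(dose, alpha=0.04, beta=0.055):
        # approximate beta-Poisson dose-response (parameters assumed)
        return 1.0 - (1.0 + dose / beta) ** (-alpha)

    for label, dose in [("genome copies", dose_gc), ("indicator ratio", dose_ratio)]:
        print(f"{label:15s} mean P(inf) per serving = {p_infection(dose).mean():.3e}")
    ```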

  20. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wenchao Zhang

    2016-05-01

    Full Text Available The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS, for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.

  1. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Science.gov (United States)

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.

  2. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    International Nuclear Information System (INIS)

    Soliman, A; Hashemi, M; Safigholi, H; Tchistiakova, E; Song, W

    2016-01-01

    Purpose: To explore the feasibility of extracting the relative density from quantitative MRI measurements as well as to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI has the ability to separate water and fat signals, producing two separate images for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work aims to test this hypothesis at 1.5 T. Peanut oil was used as fat-representative, while agar was used as water-representative. Gadolinium Chloride III and Sodium Chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5T GE Optima 450W with the body coil using a multi-gradient echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T2* relaxation time. B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. MR-corrected fat signals from all vials were normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the phantom for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80%, and 90% fat percentages showed inhomogeneous distributions; however, their results were included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract the relative tissue density. Further in-vivo validation is required.
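
    A minimal sketch of the analysis chain implied here: compute a normalized fat fraction from separated water/fat signals, then regress CT HU on it. The vial values and the HU model are invented stand-ins for the phantom measurements.

    ```python
    # Sketch: fat signal fraction from separated water/fat magnitudes, then a
    # linear regression of CT Hounsfield units on the MR-derived fat values.
    # All numbers are invented stand-ins for the phantom vials.
    import numpy as np

    fat_pct = np.array([0, 3, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) / 100.0
    water = 1.0 - fat_pct                         # idealized separated signals
    fat = fat_pct
    fat_fraction = fat / (water + fat)            # normalized MR fat measure

    # hypothetical CT HU per vial: HU falls roughly linearly as fat replaces water
    rng = np.random.default_rng(4)
    hu = 40.0 - 140.0 * fat_pct + rng.normal(0, 3, fat_pct.size)

    slope, intercept = np.polyfit(fat_fraction, hu, 1)
    r2 = np.corrcoef(fat_fraction, hu)[0, 1] ** 2
    print(f"HU ~ {intercept:.1f} + {slope:.1f} * fat_fraction, R^2 = {r2:.2f}")
    ```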

  3. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Soliman, A; Hashemi, M; Safigholi, H [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); Tchistiakova, E [Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada); Song, W [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada)

    2016-06-15

    Purpose: To explore the feasibility of extracting the relative density from quantitative MRI measurements as well as to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI has the ability to separate water and fat signals, producing two separate images for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work aims to test this hypothesis at 1.5 T. Peanut oil was used as fat-representative, while agar was used as water-representative. Gadolinium Chloride III and Sodium Chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5T GE Optima 450W with the body coil using a multi-gradient echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T{sub 2}* relaxation time. B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. MR-corrected fat signals from all vials were normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the phantom for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R{sup 2} = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R{sup 2} = 0.99). Vials with 70%, 80%, and 90% fat percentages showed inhomogeneous distributions; however, their results were included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract the relative tissue density. Further in-vivo validation is required.

  4. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Directory of Open Access Journals (Sweden)

    Hans-Stefan Bauer

    2015-04-01

    Full Text Available Quantitative precipitation estimation and forecasting (QPE and QPF are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.

  5. Security Events and Vulnerability Data for Cybersecurity Risk Estimation.

    Science.gov (United States)

    Allodi, Luca; Massacci, Fabio

    2017-08-01

    Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
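
    The two-stage structure can be made concrete with a small simulation: an attacker holding a fixed number of weaponized exploits must first breach a perimeter host and then escalate locally. The per-vulnerability success probabilities below are assumptions, not the paper's fitted values.

    ```python
    # Minimal sketch of a two-stage attack probability model: breach an
    # Internet-facing host, then escalate via a local vulnerability. The
    # per-vulnerability exploit probabilities are invented.
    import numpy as np

    rng = np.random.default_rng(5)
    n_sim = 100_000
    weaponized = 5                               # attacker power: exploitable vulns

    p_perimeter = 0.02                           # per-vuln chance to breach edge host
    p_local = 0.05                               # per-vuln chance to escalate inside

    # A stage succeeds if at least one of the weaponized exploits lands
    # (min of k uniforms < p  <=>  at least one draw below p).
    breach = rng.random((n_sim, weaponized)).min(axis=1) < p_perimeter
    escalate = rng.random((n_sim, weaponized)).min(axis=1) < p_local
    p_attack = np.mean(breach & escalate)
    print(f"estimated P(two-stage compromise) = {p_attack:.4f}")
    ```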

  6. Agreement between clinical estimation and a new quantitative analysis by Photoshop software in fundus and angiographic image variables.

    Science.gov (United States)

    Ramezani, Alireza; Ahmadieh, Hamid; Azarmina, Mohsen; Soheilian, Masoud; Dehghan, Mohammad H; Mohebbi, Mohammad R

    2009-12-01

    To evaluate the validity of a new method for the quantitative analysis of fundus or angiographic images using Photoshop 7.0 (Adobe, USA) software by comparing it with clinical evaluation. Four hundred and eighteen fundus and angiographic images of diabetic patients were evaluated by three retina specialists and then by computation using Photoshop 7.0 software. Four variables were selected for comparison: amount of hard exudates (HE) on color pictures, amount of HE on red-free pictures, severity of leakage, and the size of the foveal avascular zone (FAZ). The coefficient of agreement (kappa) between the two methods for the amount of HE on color and red-free photographs was 85% (0.69) and 79% (0.59), respectively. The agreement for severity of leakage was 72% (0.46). For the evaluation of the FAZ size using the magic wand and magnetic lasso software tools, the agreement was 54% (0.09) and 89% (0.77), respectively. Agreement in the estimation of the FAZ size by the magnetic lasso tool was excellent and was almost as good in the quantification of HE on color and on red-free images. Considering the agreement of this new technique for the measurement of variables in fundus images using Photoshop software with the clinical evaluation, this method seems to have sufficient validity to be used for the quantitative analysis of HE, leakage, and FAZ size on the angiograms of diabetic patients.
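
    The agreement statistic used here is Cohen's kappa; a minimal sketch on a made-up two-rater confusion matrix follows.

    ```python
    # Sketch of Cohen's kappa for two raters (clinical grading vs. software
    # grading). The confusion matrix is a made-up example, not the study data.
    import numpy as np

    # rows: clinical grade, cols: software grade (3-level scale)
    confusion = np.array([[50,  8,  2],
                          [10, 60,  5],
                          [ 3,  7, 55]], dtype=float)

    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = (confusion.sum(0) * confusion.sum(1)).sum() / n**2
    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    print(f"observed agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
    ```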

  7. Quantitative maps of groundwater resources in Africa

    International Nuclear Information System (INIS)

    MacDonald, A M; Bonsor, H C; Dochartaigh, B É Ó; Taylor, R G

    2012-01-01

    In Africa, groundwater is the major source of drinking water and its use for irrigation is forecast to increase substantially to combat growing food insecurity. Despite this, there is little quantitative information on groundwater resources in Africa, and groundwater storage is consequently omitted from assessments of freshwater availability. Here we present the first quantitative continent-wide maps of aquifer storage and potential borehole yields in Africa based on an extensive review of available maps, publications and data. We estimate total groundwater storage in Africa to be 0.66 million km³ (0.36–1.75 million km³). Not all of this groundwater storage is available for abstraction, but the estimated volume is more than 100 times estimates of annual renewable freshwater resources in Africa. Groundwater resources are unevenly distributed: the largest groundwater volumes are found in the large sedimentary aquifers in the North African countries Libya, Algeria, Egypt and Sudan. Nevertheless, for many African countries appropriately sited and constructed boreholes can support handpump abstraction (yields of 0.1–0.3 l s⁻¹), and contain sufficient storage to sustain abstraction through inter-annual variations in recharge. The maps show further that the potential for higher yielding boreholes (>5 l s⁻¹) is much more limited. Therefore, strategies for increasing irrigation or supplying water to rapidly urbanizing cities that are predicated on the widespread drilling of high yielding boreholes are likely to be unsuccessful. As groundwater is the largest and most widely distributed store of freshwater in Africa, the quantitative maps are intended to lead to more realistic assessments of water security and water stress, and to promote a more quantitative approach to mapping of groundwater resources at national and regional level. (letter)

  8. Estimating directional epistasis

    Science.gov (United States)

    Le Rouzic, Arnaud

    2014-01-01

    Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828

  9. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Science.gov (United States)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
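
    A simplified sketch of the perturbation experiment follows: a toy spherical VOI on a synthetic activity volume is randomly perturbed, eroded, or dilated, and the resulting activity errors are compared. It perturbs the VOI radius rather than individual control points, and uses a uniform sphere rather than the NCAT phantom.

    ```python
    # Hedged sketch of VOI perturbation: jitter a spherical VOI boundary with
    # no, inward, or outward bias, then compare integrated activity. Geometry
    # is a toy 3D sphere, not the NCAT phantom.
    import numpy as np

    rng = np.random.default_rng(6)
    shape = (64, 64, 64)
    zz, yy, xx = np.indices(shape)
    center = np.array([32, 32, 32])
    r = np.sqrt(((np.stack([zz, yy, xx]) - center[:, None, None, None]) ** 2).sum(0))

    activity = np.where(r < 12, 1.0, 0.1)        # hot organ in warm background

    def voi_mask(radius):
        return r < radius

    true_counts = activity[voi_mask(12)].sum()
    for bias, dr in [("none", 0.0), ("erode", -1.0), ("dilate", +1.0)]:
        radius = 12 + dr + rng.uniform(-0.5, 0.5)  # random boundary perturbation
        est = activity[voi_mask(radius)].sum()
        print(f"{bias:7s}: relative error = {(est - true_counts) / true_counts:+.1%}")
    ```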

  10. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Energy Technology Data Exchange (ETDEWEB)

    He Bin [Division of Nuclear Medicine, Department of Radiology, New York Presbyterian Hospital-Weill Medical College of Cornell University, New York, NY 10021 (United States); Frey, Eric C, E-mail: bih2006@med.cornell.ed, E-mail: efrey1@jhmi.ed [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, MD 21287-0859 (United States)

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed {sup 111}In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations

  11. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    The paper discusses gamma-ray spectrum interpretation technology in nuclear logging. The principles of familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitations in determining the contents of radioactive elements on unsaturated ledges (where radioactive elements are distributed unevenly) are presented. On the basis of the intensity gamma-logging quantitative interpretation technology using the deconvolution method, a new quantitative interpretation method of separating radioactive elements is presented for interpreting gamma spectrum logging. This is a point-by-point spectrum striping deconvolution technology which can give the logging data a quantitative interpretation. (authors)

  12. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    International Nuclear Information System (INIS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J

    2008-01-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle
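
    A hedged sketch of the viscoelastic fitting step: shear wave speed dispersion data are fit to a Kelvin–Voigt model to recover frequency-independent shear modulus and viscosity. The dispersion data below are synthetic, and the density value is a common soft-tissue assumption.

    ```python
    # Sketch: fit a Kelvin-Voigt dispersion curve c(w) to shear wave speed data
    # to recover shear modulus mu and viscosity eta. Synthetic data stand in
    # for the sonoelastographic measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    RHO = 1000.0  # tissue density (kg/m^3), common assumption

    def voigt_speed(omega, mu, eta):
        # shear wave phase velocity in a Kelvin-Voigt material
        mag = np.sqrt(mu**2 + (omega * eta) ** 2)
        return np.sqrt(2.0 * mag**2 / (RHO * (mu + mag)))

    omega = 2 * np.pi * np.array([100.0, 150, 200, 250, 300, 350, 400])
    rng = np.random.default_rng(9)
    c_obs = voigt_speed(omega, 4e3, 2.0) * (1 + rng.normal(0, 0.02, omega.size))

    (mu_fit, eta_fit), _ = curve_fit(voigt_speed, omega, c_obs, p0=(1e3, 1.0))
    print(f"mu = {mu_fit/1e3:.2f} kPa, eta = {eta_fit:.2f} Pa*s")
    ```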

  13. Transmission-less attenuation estimation from time-of-flight PET histo-images using consistency equations

    Science.gov (United States)

    Li, Yusheng; Defrise, Michel; Metzler, Scott D.; Matej, Samuel

    2015-08-01

    In positron emission tomography (PET) imaging, attenuation correction with accurate attenuation estimation is crucial for quantitative patient studies. Recent research showed that the attenuation sinogram can be determined up to a scaling constant utilizing the time-of-flight information. The TOF-PET data can be naturally and efficiently stored in a histo-image without information loss, and the radioactive tracer distribution can be efficiently reconstructed using the DIRECT approaches. In this paper, we explore transmission-less attenuation estimation from TOF-PET histo-images. We first present the TOF-PET histo-image formation and the consistency equations in the histo-image parameterization, then we derive a least-squares solution for estimating the directional derivatives of the attenuation factors from the measured emission histo-images. Finally, we present a fast solver to estimate the attenuation factors from their directional derivatives using the discrete sine transform and fast Fourier transform while considering the boundary conditions. We find that the attenuation histo-images can be uniquely determined from the TOF-PET histo-images by considering boundary conditions. Since the estimate of the attenuation directional derivatives can be inaccurate for LORs tangent to the patient boundary, external sources, e.g. a ring or annulus source, might be needed to give an accurate estimate of the attenuation gradient for such LORs. The attenuation estimation from TOF-PET emission histo-images is demonstrated using simulated 2D TOF-PET data.

  14. Processing of next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data for the DuPage County streamflow simulation system

    Science.gov (United States)

    Bera, Maitreyee; Ortel, Terry W.

    2018-01-12

    The U.S. Geological Survey, in cooperation with DuPage County Stormwater Management Department, is testing a near real-time streamflow simulation system that assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek and West Branch DuPage River drainage basins in DuPage County, Illinois. As part of this effort, the U.S. Geological Survey maintains a database of hourly meteorological and hydrologic data for use in this near real-time streamflow simulation system. Among these data are next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data, which are retrieved from the North Central River Forecasting Center of the National Weather Service. The DuPage County streamflow simulation system uses these quantitative precipitation forecast data to create streamflow predictions for the two simulated drainage basins. This report discusses in detail how these data are processed for inclusion in the Watershed Data Management files used in the streamflow simulation system for the Salt Creek and West Branch DuPage River drainage basins.

  15. EVALUATION OF SERVICE QUALITY OF AIRWAY COMPANIES GIVING DOMESTIC SERVICES IN TURKEY WITH FUZZY SET APPROACH

    Directory of Open Access Journals (Sweden)

    H. Handan DEMIR

    2013-01-01

    Full Text Available Today, service quality has become a major phenomenon, as rising competition between companies requires meeting consumer demands in the best way possible. Airway transportation has been preferred more and more in recent years. Many qualitative and quantitative criteria are considered while evaluating service criteria in airway transportation. In this context, evaluation of service quality is a decision-making problem with many criteria. The purpose of this study is to evaluate the service quality of domestic airway companies in Turkey. This study uses the fuzzy TOPSIS method, one of the most widely used fuzzy MCDM methods, which extends multi-criteria decision-making to fuzzy environments, considers qualitative and quantitative criteria together, and gives the opportunity to make group decisions in fuzzy environments. As a result, an evaluation was made based on service quality criteria for the most preferred airway companies in Turkey, and these companies were ranked according to their levels of service quality.

  16. Rethinking the social and cultural dimensions of charitable giving

    DEFF Research Database (Denmark)

    Bajde, Domen

    2009-01-01

    -giving and focuses on charitable gifts as an emblem of postmodern gift-giving to distant others. Historical evidence and sociological theory on postmodern solidarity are combined to shed light on the fluid duality of contemporary giving and the importance of the imaginary in charitable giving. The outlined socially...... symbolic dimensions of charitable giving are critically examined in light of postmodern consumer culture and the recent social corporate responsibility trends. By openly engaging the proposed complexities of gift-giving, our vocabulary and understanding of postmodern giving can be revised so as to invite...

  17. QUANTITATIVE CONFOCAL LASER SCANNING MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Merete Krog Raarup

    2011-05-01

    Full Text Available This paper discusses recent advances in confocal laser scanning microscopy (CLSM) for imaging of 3D structure as well as quantitative characterization of biomolecular interactions and diffusion behaviour by means of one- and two-photon excitation. The use of CLSM for improved stereological length estimation in thick (up to 0.5 mm) tissue is proposed. The techniques of FRET (Fluorescence Resonance Energy Transfer), FLIM (Fluorescence Lifetime Imaging Microscopy), FCS (Fluorescence Correlation Spectroscopy) and FRAP (Fluorescence Recovery After Photobleaching) are introduced and their applicability for quantitative imaging of biomolecular (co-)localization and trafficking in live cells described. The advantage of two-photon versus one-photon excitation in relation to these techniques is discussed.

  18. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    Science.gov (United States)

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
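
    A minimal sketch of the correction idea, under assumed placeholder distributions rather than the paper's validated ones: sample sensitivity and false-positive fractions, apply the total-probability relation per draw, and read off the distribution of true concentrations.

    ```python
    # Hedged sketch: observed qPCR concentration mixes true signal with false
    # positives/negatives, so sample those probabilities and invert for the
    # true concentration distribution. All distributions are assumptions.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    observed = 10 ** rng.normal(4.0, 0.1, n)     # marker copies/100 ml, with precision error
    sensitivity = rng.beta(18, 2, n)             # P(detect | truly present)
    false_pos_frac = rng.beta(2, 38, n)          # fraction of signal that is spurious

    # Per draw: E[observed] = sensitivity * true + false-positive contribution,
    # crudely inverted here for the true concentration.
    true_conc = observed * (1.0 - false_pos_frac) / sensitivity

    lo, med, hi = np.percentile(true_conc, [2.5, 50, 97.5])
    print(f"true concentration: median {med:.3e}, 95% CI ({lo:.3e}, {hi:.3e})")
    ```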

  19. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  20. Realizing the quantitative potential of the radioisotope image

    International Nuclear Information System (INIS)

    Brown, N.J.G.; Britton, K.E.; Cruz, F.R.

    1977-01-01

    The sophistication and accuracy of a clinical strategy depends on the accuracy of the results of the tests used. When numerical values are given in the test report, powerful clinical strategies can be developed. The eye is well able to perceive structures in a high-quality grey-scale image. However, the degree of difference in density between two points cannot be estimated quantitatively by eye. This creates a problem particularly when there is only a small difference between the count-rate at a suspicious point or region and the count-rate to be expected there if the image were normal. To resolve this problem, methods of quantitation of the amplitude of a feature, defined as the difference between the observed and expected values at the region of the feature, have been developed. The eye can estimate the frequency of light entering it very accurately (perceived as colour). Thus, if count-rate data are transformed into colour in a systematic way, then information about relative count-rate can be perceived. A computer-driven, interactive colour display system is used in which the count-rate range of each colour is computed as a percentage of a reference count-rate value. This can be used to obtain quantitative estimates of the amplitude of an image feature. The application of two methods to normal and pathological data is described and the results discussed. (author)

  1. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
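
    The three semivariogram model shapes named above can be written down directly; the sketch below evaluates them at a few lags with arbitrary sill and range values.

    ```python
    # Sketch of the three semivariogram models named in the abstract
    # (spherical, exponential, truncated fractal/power). Sill and range
    # values are arbitrary illustrations.
    import numpy as np

    def spherical(h, sill=1.0, a=10.0):
        h = np.minimum(h, a)
        return sill * (1.5 * h / a - 0.5 * (h / a) ** 3)

    def exponential(h, sill=1.0, a=10.0):
        return sill * (1.0 - np.exp(-3.0 * h / a))       # practical-range convention

    def truncated_fractal(h, sill=1.0, a=10.0, hurst=0.25):
        return np.minimum(sill * (h / a) ** (2 * hurst), sill)

    lags = np.array([1.0, 5.0, 10.0, 20.0])
    for name, fn in [("spherical", spherical), ("exponential", exponential),
                     ("fractal", truncated_fractal)]:
        print(name, fn(lags).round(3))
    ```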

  2. A new estimation technique of sovereign default risk

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Soytaş

    2016-12-01

    Full Text Available Using the fixed-point theorem, sovereign default models are solved by numerical value function iteration and calibration methods, which, due to their computational constraints, greatly limit the models' quantitative performance and forgo their country-specific quantitative projection ability. By applying the Hotz-Miller estimation technique (Hotz and Miller, 1993), often used in the applied microeconometrics literature, to dynamic general equilibrium models of sovereign default, one can estimate the ex-ante default probability of economies, given the structural parameter values obtained from country-specific business-cycle statistics and the relevant literature. Thus, with this technique we offer an alternative solution method for dynamic general equilibrium models of sovereign default to improve upon their quantitative inference ability.

  3. Radiation risk estimation

    International Nuclear Information System (INIS)

    Schull, W.J.; Texas Univ., Houston, TX

    1992-01-01

    Estimation of the risk of cancer following exposure to ionizing radiation remains largely empirical, and models used to adduce risk incorporate few, if any, of the advances in molecular biology of the past decade or so. These facts compromise the estimation of risk where the epidemiological data are weakest, namely, at low doses and dose rates. Without a better understanding of the molecular and cellular events that ionizing radiation initiates or promotes, it seems unlikely that this situation will improve. Nor will the situation improve without further attention to the identification and quantitative estimation of the effects of those host and environmental factors that enhance or attenuate risk. (author)

  4. Quantitative Literacy at Michigan State University, 2: Connection to Financial Literacy

    Directory of Open Access Journals (Sweden)

    Dennis Gilliland

    2011-07-01

    Full Text Available The lack of capability of making financial decisions has been recently described for the adult United States population. A concerted effort to increase awareness of this crisis, to improve education in quantitative and financial literacy, and to simplify financial decision-making processes is critical to the solution. This paper describes a study that was undertaken to explore the relationship between quantitative literacy and financial literacy for entering college freshmen. In summer 2010, incoming freshmen to Michigan State University were assessed. Well-tested financial literacy items and validated quantitative literacy assessment instruments were administered to 531 subjects. Logistic regression models were used to assess the relationship between level of financial literacy and independent variables including quantitative literacy score, ACT mathematics score, and demographic variables including gender. The study establishes a strong positive association between quantitative literacy and financial literacy on top of the effects of the other independent variables. Adding one percent to the performance on a quantitative literacy assessment changes the odds for being at the highest level of financial literacy by a factor estimated to be 1.05. Gender is found to have a large, statistically significant effect as well with being female changing the odds by a factor estimated to be 0.49.
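
    The quoted effect sizes translate into logistic-regression coefficients via the log of the odds factor; the short worked example below back-calculates them and compounds the per-point effect, assuming the 1.05 factor applies per percentage point.

    ```python
    # Worked version of the reported effect sizes: in a logistic model the odds
    # multiplier for a one-unit change is exp(beta). Coefficients below are
    # back-calculated from the factors quoted in the abstract.
    import math

    beta_ql = math.log(1.05)      # +1% QL score -> odds factor 1.05
    beta_female = math.log(0.49)  # female -> odds factor 0.49

    # e.g. odds change for a student scoring 10 points higher on the QL assessment
    print(f"10-point QL gain multiplies the odds by {math.exp(10 * beta_ql):.2f}")
    print(f"betas: QL {beta_ql:.4f} per point, female {beta_female:.3f}")
    ```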

  5. Conscientious refusals and reason-giving.

    Science.gov (United States)

    Marsh, Jason

    2014-07-01

    Some philosophers have argued for what I call the reason-giving requirement for conscientious refusal in reproductive healthcare. According to this requirement, healthcare practitioners who conscientiously object to administering standard forms of treatment must have arguments to back up their conscience, arguments that are purely public in character. I argue that such a requirement, though attractive in some ways, faces an overlooked epistemic problem: it is either too easy or too difficult to satisfy in standard cases. I close by briefly considering whether a version of the reason-giving requirement can be salvaged despite this important difficulty. © 2013 John Wiley & Sons Ltd.

  6. Quantitative and qualitative coronary arteriography. 1

    International Nuclear Information System (INIS)

    Brown, B.G.; Simpson, Paul; Dodge, J.T. Jr; Bolson, E.L.; Dodge, H.T.

    1991-01-01

    The clinical objectives of arteriography are to obtain information that contributes to an understanding of the mechanisms of the clinical syndrome, provides prognostic information, facilitates therapeutic decisions, and guides invasive therapy. Quantitative and improved qualitative assessments of arterial disease provide us with a common descriptive language which has the potential to accomplish these objectives more effectively and thus to improve clinical outcome. In certain situations, this potential has been demonstrated. Clinical investigation using quantitative techniques has definitely contributed to our understanding of disease mechanisms and of atherosclerosis progression/regression. Routine quantitation of clinical images should permit more accurate and repeatable estimates of disease severity and promises to provide useful estimates of coronary flow reserve. But routine clinical QCA awaits more cost- and time-efficient methods and clear proof of a clinical advantage. Careful inspection of highly magnified, high-resolution arteriographic images reveals morphologic features related to the pathophysiology of the clinical syndrome and to the likelihood of future progression or regression of obstruction. Features that have been found useful include thrombus in its various forms, ulceration and irregularity, eccentricity, flexing and dissection. The description of such high-resolution features should be included among, rather than excluded from, the goals of image processing, since they contribute substantially to the understanding and treatment of the clinical syndrome. (author). 81 refs.; 8 figs.; 1 tab

  7. Quantitative Precipitation Estimation over Ocean Using Bayesian Approach from Microwave Observations during the Typhoon Season

    Directory of Open Access Journals (Sweden)

    Jen-Chi Hu

    2009-01-01

    Full Text Available We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements of rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM/Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that qualitatively, all methods retrieved similar horizontal distributions in terms of locations of eyes and rain bands of typhoons. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those retrieved from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated in models to improve forecasts and prevent potential damages in Taiwan during typhoon seasons.
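
    A minimal sketch of the validation statistics quoted above (correlation coefficient and RMS error of retrievals against gauge data); the rain-rate arrays are invented for illustration.

    ```python
    import numpy as np

    gauge = np.array([1.2, 5.0, 12.3, 30.1, 45.6])       # mm/hr, hypothetical
    retrieved = np.array([1.5, 4.2, 13.0, 28.4, 47.1])   # mm/hr, hypothetical

    r = np.corrcoef(gauge, retrieved)[0, 1]              # correlation coefficient
    rmse = np.sqrt(np.mean((retrieved - gauge) ** 2))    # root mean square error
    print(f"r = {r:.2f}, RMS = {rmse:.2f} mm/hr")
    ```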

  8. Development and testing of transfer functions for generating quantitative climatic estimates from Australian pollen data

    Science.gov (United States)

    Cook, Ellyn J.; van der Kaars, Sander

    2006-10-01

    We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania using the existing southeastern Australian pollen database, and for northern Australia using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indexes are the strongest. The validation of all functions means they can be applied with confidence to Quaternary pollen records from these three areas.
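
    The sketch below illustrates leave-one-out cross-validation of a transfer function on invented data; an ordinary least-squares regression stands in for the weighted-averaging partial least squares used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((40, 5))   # modern pollen taxa percentages (sites x taxa), invented
    y = X @ np.array([2., -1., 0.5, 3., 1.]) + rng.normal(0, 0.2, 40)  # e.g. annual rainfall

    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # leave site i out
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.concatenate([[1.0], X[i]]) @ beta

    rmsep = np.sqrt(np.mean((preds - y) ** 2))  # prediction error used to judge robustness
    print(rmsep)
    ```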

  9. Prospective longitudinal assessment of parotid gland function using dynamic quantitative pertechnate scintigraphy and estimation of dose–response relationship of parotid-sparing radiotherapy in head-neck cancers

    International Nuclear Information System (INIS)

    Gupta, Tejpal; Hotwani, Chandni; Kannan, Sadhana; Master, Zubin; Rangarajan, Venkatesh; Murthy, Vedang; Budrukkar, Ashwini; Ghosh-Laskar, Sarbani; Agarwal, Jai Prakash

    2015-01-01

    To estimate the dose–response relationship using dynamic quantitative 99mTc-pertechnate scintigraphy in head-neck cancer patients treated with parotid-sparing conformal radiotherapy. Dynamic quantitative pertechnate salivary scintigraphy was performed pre-treatment and subsequently periodically after definitive radiotherapy. Reduction in salivary function following radiotherapy was quantified by salivary excretion fraction (SEF) ratios. Dose–response curves were modeled using standardized methodology to calculate the tolerance dose 50 (TD50) for the parotid glands. Salivary gland function was significantly affected by radiotherapy, with a maximal decrease in SEF ratios at 3 months and moderate functional recovery over time. There was a significant inverse correlation between SEF ratios and mean parotid doses at 3 months (r = −0.589, p < 0.001); 12 months (r = −0.554, p < 0.001); 24 months (r = −0.371, p = 0.002); and 36 months (r = −0.350, p = 0.005) respectively. Using a post-treatment SEF ratio <45% as the scintigraphic criterion for severe salivary toxicity, the estimated TD50 value with its 95% confidence interval (95% CI) for the parotid gland was 35.1 Gy (23.6-42.6 Gy), 41.3 Gy (34.6-48.8 Gy), 55.9 Gy (47.4-70.0 Gy) and 64.3 Gy (55.8-70.0 Gy) at 3, 12, 24, and 36 months respectively. There is a consistent decline in parotid function even after conformal radiotherapy, with moderate recovery over time. Dynamic quantitative pertechnate scintigraphy is a simple, reproducible, and minimally invasive test of major salivary gland function. The online version of this article (doi:10.1186/s13014-015-0371-2) contains supplementary material, which is available to authorized users
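
    A hedged sketch of how a TD50 could be extracted from such data: a logistic dose-response curve fitted by least squares to per-patient severe-toxicity indicators. Doses and outcomes are invented, and the paper's standardized (likely maximum-likelihood) methodology may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def dose_response(d, td50, k):
        # Probability of severe toxicity as a logistic function of mean dose
        return 1.0 / (1.0 + np.exp(-k * (d - td50)))

    mean_dose = np.array([15, 20, 25, 30, 35, 40, 45, 50, 55, 60], dtype=float)  # Gy
    severe = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1], dtype=float)  # SEF ratio < 45%?

    (td50, k), _ = curve_fit(dose_response, mean_dose, severe, p0=[40.0, 0.2])
    print(f"TD50 ~ {td50:.1f} Gy")
    ```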

  10. [Gift giving and the ethics of the caregiver].

    Science.gov (United States)

    Grassin, Marc

    2014-12-01

    Modern societies establish relationships on a contract basis, but the caregiver relationship invariably involves the notion of a gift. Caring engages the giving / receiving / giving back circle of reciprocity. The caregiving relationship requires a gift ethic which gives meaning to the nurse/patient contract.

  11. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and on the POWER procedure of SAS software, and elaborated them with examples, which will benefit researchers in implementing the repetition principle.
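
    As a concrete instance of such a formula, the sketch below implements the standard normal-approximation sample-size calculation for a paired-design difference test of quantitative data; the numbers are illustrative and not taken from the article.

    ```python
    from scipy.stats import norm

    def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
        """Pairs needed to detect a mean difference `delta` given the SD of differences."""
        z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
        z_beta = norm.ppf(power)
        n = ((z_alpha + z_beta) * sd_diff / delta) ** 2
        return int(n) + 1                   # round up to the next whole pair

    print(paired_sample_size(delta=2.0, sd_diff=5.0))  # ~50 pairs with these inputs
    ```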

  12. A quantitative and dynamic model of the Arabidopsis flowering time gene regulatory network.

    Directory of Open Access Journals (Sweden)

    Felipe Leal Valentim

    Full Text Available Various environmental signals integrate into a network of floral regulatory genes, leading to the final decision on when to flower. Although a wealth of qualitative knowledge is available on how flowering time genes regulate each other, only a few studies have incorporated this knowledge into predictive models. Such models are invaluable as they enable us to investigate how various types of inputs are combined to give a quantitative readout. To investigate the effect of gene expression disturbances on flowering time, we developed a dynamic model for the regulation of flowering time in Arabidopsis thaliana. Model parameters were estimated based on expression time-courses for relevant genes, and a consistent set of flowering times for plants of various genetic backgrounds. Validation was performed by predicting changes in expression level in mutant backgrounds and comparing these predictions with independent expression data, and by comparison of predicted and experimental flowering times for several double mutants. Remarkably, the model predicts that a disturbance in a particular gene does not necessarily have the largest impact on directly connected genes. For example, the model predicts that a SUPPRESSOR OF OVEREXPRESSION OF CONSTANS (SOC1) mutation has a larger impact on APETALA1 (AP1), which is not directly regulated by SOC1, than on LEAFY (LFY), which is under direct control of SOC1. This was confirmed by expression data. Another model prediction involves the importance of cooperativity in the regulation of AP1 by LFY, a prediction supported by experimental evidence. Concluding, our model for flowering time gene regulation enables us to address how different quantitative inputs are combined into one quantitative output, flowering time.
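
    A toy sketch of a dynamic model of this kind: one regulator (an SOC1-like activator) driving a target gene (an AP1-like one) through a cooperative Hill term, integrated as ODEs. Gene names, parameters, and dynamics are invented and far simpler than the published model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def network(t, x, beta, K, h, d1, d2):
        A, B = x                                    # expression levels of the two genes
        dA = 1.0 - d1 * A                           # upstream regulator, constant input
        dB = beta * A**h / (K**h + A**h) - d2 * B   # cooperative (Hill) activation of target
        return [dA, dB]

    sol = solve_ivp(network, (0, 50), [0.0, 0.0],
                    args=(2.0, 0.5, 4, 0.2, 0.3))
    print(sol.y[:, -1])  # near-steady-state expression levels of the two genes
    ```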

  13. Phase estimation in optical interferometry

    CERN Document Server

    Rastogi, Pramod

    2014-01-01

    Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre
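
    As an example of the phase-stepping algorithms such a text covers, the classic four-step method recovers the wrapped phase from four frames shifted by pi/2; the intensities below are synthesized, not taken from the book.

    ```python
    import numpy as np

    phi_true = 1.234                  # unknown phase to recover (radians)
    A, B = 2.0, 1.0                   # background and modulation, invented
    steps = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
    I1, I2, I3, I4 = A + B * np.cos(phi_true + steps)

    # I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi)
    phi = np.arctan2(I4 - I2, I1 - I3)  # wrapped phase estimate in (-pi, pi]
    print(phi)                           # ~1.234
    ```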

  14. Statistical aspects of quantitative real-time PCR experiment design.

    Science.gov (United States)

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
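
    A hedged sketch of the idea behind such pilot-based designs (not the powerNest API): biological and technical variance components estimated from a pilot predict the variance of a group mean, and hence the prospective power; all numbers are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    def prospective_power(effect, var_bio, var_tech, n_bio, n_tech, alpha=0.05):
        """Two-group comparison; each group has n_bio subjects x n_tech technical replicates."""
        var_mean = (var_bio + var_tech / n_tech) / n_bio   # variance of one group mean
        se = np.sqrt(2 * var_mean)                         # SE of the group difference
        z_alpha = norm.ppf(1 - alpha / 2)
        return norm.cdf(effect / se - z_alpha)             # normal-approximation power

    print(prospective_power(effect=1.0, var_bio=0.8, var_tech=0.4, n_bio=6, n_tech=3))
    ```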

  15. (Micro)Financing to Give

    DEFF Research Database (Denmark)

    Bajde, Domen

    2013-01-01

    ...and workings of microfinance. We illustrate how market-like elements are productively and problematically deployed in philanthropic giving and address the need to consider a broader range of socio-material relations involved in the framing of transactions. A complex network of actors and (trans)actions needs...

  16. Quantitative testing of the methodology for genome size estimation in plants using flow cytometry: a case study of the Primulina genus

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2015-05-01

    Full Text Available Flow cytometry (FCM) is a commonly used method for estimating genome size in many organisms. The use of flow cytometry in plants is influenced by endogenous fluorescence inhibitors, which may cause an inaccurate estimation of genome size and thus falsify the relationship between genome size and phenotypic traits/ecological performance. Quantitative optimization of FCM methodology minimizes such errors, yet there are few studies detailing this methodology. We selected the genus Primulina, one of the most representative and diverse genera of the Old World Gesneriaceae, to evaluate the effect of methodology on determining genome size. Our results showed that buffer choice significantly affected genome size estimation in six out of the eight species examined and altered the 2C-value (DNA content) by as much as 21.4%. The staining duration and propidium iodide (PI) concentration slightly affected the 2C-value. Our experiments showed better histogram quality when the samples were stained for 40 minutes at a PI concentration of 100 µg ml-1. The quality of the estimates was not improved by one-day incubation in the dark at 4 °C or by centrifugation. Thus, our study determined an optimum protocol for genome size measurement in Primulina: LB01 buffer supplemented with 100 µg ml-1 PI and stained for 40 minutes. This protocol also demonstrated a high universality in other Gesneriaceae genera. We report the genome size of nine Gesneriaceae species for the first time. The results showed substantial genome size variation both within and among the species, with the 2C-value ranging between 1.62 and 2.71 pg. Our study highlights the necessity of optimizing the FCM methodology prior to obtaining reliable genome size estimates in a given taxon.
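
    A minimal sketch of the standard flow-cytometric genome-size calculation assumed throughout such studies: the sample 2C-value follows from the fluorescence-peak ratio to an internal standard of known genome size. Peak values below are invented.

    ```python
    def genome_size_2c(sample_peak, standard_peak, standard_2c_pg):
        """2C-value (pg) = sample peak mean / standard peak mean x standard 2C-value."""
        return sample_peak / standard_peak * standard_2c_pg

    pg = genome_size_2c(sample_peak=210.0, standard_peak=390.0, standard_2c_pg=3.25)
    # 1 pg of DNA corresponds to roughly 978 Mbp (a widely used conversion)
    print(f"2C ~ {pg:.2f} pg (~{pg * 978:.0f} Mbp)")
    ```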

  17. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  18. A quantitative framework for estimating risk of collision between marine mammals and boats

    Science.gov (United States)

    Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.

    2016-01-01

    Speed regulations of watercraft in protected areas are designed to reduce lethal collisions with wildlife but can have economic consequences. We present a quantitative framework for investigating the risk of deadly collisions between boats and wildlife.

  19. Implementation of the recommendations made by WAMAP to Guatemala in relation to radioactive waste management

    International Nuclear Information System (INIS)

    Gomez Ordonnez, P.

    1998-01-01

    The WAMAP mission visited Guatemala to assist the Dirección General de Energía, which had requested the visit. Nuclear activity in Guatemala is limited to research and radioisotope applications. During this visit, three important aspects requiring attention were identified: the establishment of a regulatory law on waste handling; an inventory of the radioactive waste that has been generated; and technical knowledge on the collection, immobilization and storage of the waste.

  20. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose

    DEFF Research Database (Denmark)

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K.

    2017-01-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10⁻⁵ virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio...

  1. Early Quantitative Assessment of Non-Functional Requirements

    NARCIS (Netherlands)

    Kassab, M.; Daneva, Maia; Ormandjieva, O.

    2007-01-01

    Non-functional requirements (NFRs) of software systems are a well known source of uncertainty in effort estimation. Yet, quantitatively approaching NFRs early in a project is hard. This paper makes a step towards reducing the impact of uncertainty due to NFRs. It offers a solution that incorporates

  2. On the Schauder estimates of solutions to parabolic equations

    International Nuclear Information System (INIS)

    Han Qing

    1998-01-01

    This paper gives a priori estimates on asymptotic polynomials of solutions to parabolic differential equations at arbitrary points. This leads to a pointwise version of the Schauder estimates. The result improves the classical Schauder estimates in that the estimates of solutions and their derivatives at one point depend on the coefficients and nonhomogeneous terms only at that particular point

  3. QUANTITATIVE ESTIMATION OF SOIL EROSION IN THE DRĂGAN RIVER WATERSHED WITH THE U.S.L.E. TYPE ROMSEM MODEL

    Directory of Open Access Journals (Sweden)

    Csaba HORVÁTH

    2008-05-01

    Full Text Available Quantitative estimation of soil erosion in the Drăgan river watershed with the U.S.L.E. type Romsem model. Sediment delivered from water erosion causes substantial waterway damages and water quality degradation. A number of factors such as drainage area size, basin slope, climate, land use/land cover may affect sediment delivery processes. The goal of this study is to define a computationally effective suitable soil erosion model in the Drăgan river watershed, for future sedimentation studies. A Geographic Information System (GIS) is used to determine the Universal Soil Loss Equation Model (U.S.L.E.) values of the studied water basin. The methods and approaches used in this study are expected to be applicable in future research and to watersheds in other regions.
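
    The U.S.L.E. underlying such models is a simple multiplicative factor product, A = R x K x LS x C x P, sketched below with invented factor values (not taken from the Drăgan watershed study).

    ```python
    def usle_soil_loss(R, K, LS, C, P):
        """Mean annual soil loss A (t/ha/yr) as the USLE factor product."""
        return R * K * LS * C * P

    # rainfall erosivity, soil erodibility, slope length-steepness, cover, support practice
    print(usle_soil_loss(R=90.0, K=0.30, LS=2.5, C=0.15, P=1.0))  # ~10.1 t/ha/yr
    ```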

  4. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation

    OpenAIRE

    Pillai S; Singhvi I

    2008-01-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of two component drug mixture of itopride hydrochloride and rabeprazole sodium from combined capsule dosage form have been developed. First developed method involves formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as two wavelengths. Second method is based on two wavelength calculation, wavelengths selected for estimation of itopride hydro...
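
    A sketch of the simultaneous-equation (Vierordt) step named above: mixture absorbances at the two wavelengths, with known absorptivities for each drug, form a 2x2 linear system in the two concentrations. All coefficients below are invented.

    ```python
    import numpy as np

    # rows: 265.2 nm and 290.8 nm; columns: itopride HCl, rabeprazole Na (hypothetical A 1%/1 cm)
    E = np.array([[310.0, 120.0],
                  [ 45.0, 405.0]])
    A = np.array([0.52, 0.61])      # measured mixture absorbances, invented

    c = np.linalg.solve(E, A)       # concentrations in g/100 ml
    print(c * 1000)                 # ~mg/100 ml for each component
    ```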

  5. MAGNETO-CONVECTION AND LITHIUM AGE ESTIMATES OF THE β PICTORIS MOVING GROUP

    International Nuclear Information System (INIS)

    Macdonald, J.; Mullan, D. J.

    2010-01-01

    Although the means of the ages of stars in young groups determined from Li depletion often agree with mean ages determined from Hertzsprung-Russell (H-R) diagram isochrones, there are often statistically significant differences in the ages of individual stars determined by the two methods. We find that inclusion of the effects of inhibition of convection due to the presence of magnetic fields leads to consistent ages for the individual stars. We illustrate how age consistency arises by applying our results to the β Pictoris moving group (BPMG). We find that, although magnetic inhibition of convection leads to increased ages from the H-R diagram isochrones for all stars, Li ages are decreased for fully convective M stars and increased for stars with radiative cores. Our consistent age determination for BPMG of 40 Myr is larger than previous determinations by a factor of about two. We have also considered models in which the mixing length ratio is adjusted to give consistent ages. We find that our magneto-convection models, which give quantitative estimates of magnetic field strength, provide a viable alternative to models in which the effects of magnetic fields (and other processes) are accounted for by reducing the mixing length ratio.

  6. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    ...quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low-frequency scanning for DCE-CT studies to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.

  7. Size-based estimation of the status of fish stocks: simulation analysis and comparison with age-based estimations

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders

    ...were investigated and our estimations were compared to the ICES advice. Only size-specific catch data were used, in order to emulate data-limited situations. The simulation analysis reveals that the status of the stock, i.e. F/Fmsy, is estimated more accurately than the fishing mortality F itself. Specific knowledge of the natural mortality improves the estimation more than having information about all other life history parameters. Our approach gives, at least qualitatively, an estimated stock status which is similar to the results of an age-based assessment. Since our approach only uses size...

  8. Validation of generic cost estimates for construction-related activities at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.

    1988-05-01

    This report represents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. This cost methodology was developed to support NRC analysts in determining generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed. It also discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach presented uses the "greenfield" or new plant construction installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each individual data point to give the best understanding possible so that the labor productivity factors, removal factors, etc., could judiciously be chosen. This study concludes that cost estimates that are typically within 40% of the actual values can be generated by prudently using the methodologies and cost factors investigated herein

  9. Noise level estimation in weakly nonlinear slowly time-varying systems

    International Nuclear Information System (INIS)

    Aerts, J R M; Dirckx, J J J; Lataire, J; Pintelon, R

    2008-01-01

    Recently, a method using multisine excitation was proposed for estimating the frequency response, the nonlinear distortions and the disturbing noise of weakly nonlinear time-invariant systems. This method has been demonstrated on the measurement of nonlinear distortions in the vibration of acoustically driven systems such as a latex membrane, which is a good example of a time-invariant system [1]. However, not all systems are perfectly time invariant, e.g. biomechanical systems. This time variation can be misinterpreted as an elevated noise floor, and the classical noise estimation method gives a wrong result. Two improved methods to retrieve the correct noise information from the measurements are presented. Both of them make use of multisine excitations. First, it is demonstrated that the improved methods give the same result as the classical noise estimation method when applied to a time-invariant system (high-quality microphone membrane). Next, it is demonstrated that the new methods clearly give an improved estimate of the noise level on time-varying systems. As an application example results for the vibration response of an eardrum are shown

  10. Qualitative and quantitative laser-induced breakdown spectroscopy of bronze objects

    International Nuclear Information System (INIS)

    Tankova, V; Blagoev, K; Grozeva, M; Malcheva, G; Penkova, P

    2016-01-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique for qualitative and quantitative elemental analysis of solids, liquids and gases. In this work, the method was applied for investigation of archaeological bronze objects. The analytical information obtained by LIBS was used for qualitative determination of the elements in the material used for manufacturing of the objects under study. Quantitative chemical analysis was also performed after generating calibration curves with standard samples of similar matrix composition. Quantitative estimation of the elemental concentration of the bulk of the samples was performed, together with investigation of the surface layer of the objects. The results of the quantitative analyses gave indications about the manufacturing process of the investigated objects. (paper)
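
    A sketch of the calibration-curve step described above: a linear fit of LIBS line intensity against known concentrations in matrix-matched standards, inverted to quantify an unknown. The element, intensities, and concentrations are invented.

    ```python
    import numpy as np

    conc = np.array([1.0, 2.0, 5.0, 8.0, 12.0])             # wt% Sn in bronze standards, invented
    intensity = np.array([150., 290., 730., 1180., 1745.])  # line intensities, invented

    slope, intercept = np.polyfit(conc, intensity, 1)       # linear calibration curve
    unknown_intensity = 960.0
    print((unknown_intensity - intercept) / slope)          # estimated wt% in the object
    ```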

  11. Quantitative Portfolio Optimization Techniques Applied to the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    André Alves Portela Santos

    2012-09-01

    Full Text Available In this paper we assess the out-of-sample performance of two alternative quantitative portfolio optimization techniques - mean-variance and minimum variance optimization - and compare their performance with respect to a naive 1/N (or equally-weighted) portfolio and also to the market portfolio given by the Ibovespa. We focus on short selling-constrained portfolios and consider alternative estimators for the covariance matrices: the sample covariance matrix, RiskMetrics, and three covariance estimators proposed by Ledoit and Wolf (2003), Ledoit and Wolf (2004a) and Ledoit and Wolf (2004b). Taking into account alternative portfolio re-balancing frequencies, we compute out-of-sample performance statistics which indicate that the quantitative approaches delivered improved results in terms of lower portfolio volatility and better risk-adjusted returns. Moreover, the use of more sophisticated estimators for the covariance matrix generated optimal portfolios with lower turnover over time.
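
    As an illustration, the unconstrained minimum-variance weights have the closed form w = S⁻¹1 / (1ᵀS⁻¹1); the covariance matrix below is invented, and the short-selling-constrained variant studied in the paper would require a quadratic-programming solver instead.

    ```python
    import numpy as np

    S = np.array([[0.040, 0.010, 0.008],
                  [0.010, 0.090, 0.012],
                  [0.008, 0.012, 0.060]])   # sample covariance of 3 assets, invented

    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)            # S^-1 1
    w /= ones @ w                           # normalize so the weights sum to 1
    print(w, w @ S @ w)                     # weights and resulting portfolio variance
    ```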

  12. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient-of-variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  13. The notion of gift-giving and organ donation.

    Science.gov (United States)

    Gerrand, Nicole

    1994-04-01

    The analogy between gift-giving and organ donation was first suggested at the beginning of the transplantation era, when policy makers and legislators were promoting voluntary organ donation as the preferred procurement procedure. It was believed that the practice of gift-giving had some features which were also thought to be necessary to ensure that an organ procurement procedure would be morally acceptable, namely voluntarism and altruism. Twenty-five years later, the analogy between gift-giving and organ donation is still being made in the literature and used in organ donation awareness campaigns. In this paper I want to challenge this analogy. By examining a range of circumstances in which gift-giving occurs, I argue that the significant differences between the various types of gift-giving and organ donation makes any analogy between the two very general and superficial, and I suggest that a more appropriate analogy can be found elsewhere.

  14. Thinkers and feelers: Emotion and giving.

    Science.gov (United States)

    Corcoran, Katie E

    2015-07-01

    Voluntary organizations, such as religious congregations, ask their members to contribute money as a part of membership and rely on these contributions for their survival. Yet often only a small cadre of members provides the majority of the contributions. Past research on congregational giving focuses on cognitive rational processes, generally neglecting the role of emotion. Extending Collins' (2004) interaction ritual theory, I predict that individuals who experience positive emotions during religious services will be more likely to give a higher proportion of their income to their congregation than those who do not. Moreover, I argue that this effect will be amplified in congregational contexts characterized by high aggregate levels of positive emotion, strictness, dense congregational networks, and expressive rituals. Using data from the 2001 U.S. Congregational Life Survey and multilevel modeling, I find support for several of these hypotheses. The findings suggest that both cognitive and emotional processes underlie congregational giving. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Quantitative high-resolution genomic analysis of single cancer cells.

    Science.gov (United States)

    Hannemann, Juliane; Meyer-Staeckling, Sönke; Kemming, Dirk; Alpers, Iris; Joosse, Simon A; Pospisil, Heike; Kurtz, Stefan; Görndt, Jennifer; Püschel, Klaus; Riethdorf, Sabine; Pantel, Klaus; Brandt, Burkhard

    2011-01-01

    During cancer progression, specific genomic aberrations arise that can determine the scope of the disease and can be used as predictive or prognostic markers. The detection of specific gene amplifications or deletions in single blood-borne or disseminated tumour cells that may give rise to the development of metastases is of great clinical interest but technically challenging. In this study, we present a method for quantitative high-resolution genomic analysis of single cells. Cells were isolated under permanent microscopic control followed by high-fidelity whole genome amplification and subsequent analyses by fine tiling array-CGH and qPCR. The assay was applied to single breast cancer cells to analyze the chromosomal region centred on the therapeutically relevant EGFR gene. This method allows precise quantitative analysis of copy number variations in single cell diagnostics.

  16. On estimation of the intensity function of a point process

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2010-01-01

    Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and

  17. Quantitative proteome profiling of normal human circulating microparticles

    DEFF Research Database (Denmark)

    Østergaard, Ole; Nielsen, Christoffer T; Iversen, Line V

    2012-01-01

    Circulating microparticles (MPs) are produced as part of normal physiology. Their numbers, origin, and composition change in pathology. Despite this, the normal MP proteome has not yet been characterized with standardized high-resolution methods. We here quantitatively profile the normal MP proteome using nano-LC-MS/MS on an LTQ-Orbitrap with optimized sample collection, preparation, and analysis of 12 different normal samples. Analytical and procedural variation were estimated in triply processed samples analyzed in triplicate from two different donors. Label-free quantitation was validated by the correlation of cytoskeletal protein intensities with MP numbers obtained by flow cytometry. Finally, the validity of using pooled samples was evaluated using overlap protein identification numbers and multivariate data analysis. Using conservative parameters, 536 different unique proteins were quantitated...

  18. Basic research for developing the quantitative neutron radiography

    International Nuclear Information System (INIS)

    Tamaki, Masayoshi; Ikeda, Yasushi; Ohkubo, Kohei; Tasaka, Kanji; Yoneda, Kenji; Fujine, Shigenori.

    1992-01-01

    This investigation concerns basic research and development on quantitative neutron radiography using a honeycomb collimator, which reduces the effect of neutrons scattered in the object. When hydrogenous materials such as metal hydrides, water and hydrocarbons are observed by neutron radiography, neutrons scattered from these objects degrade the quantitative accuracy of the radiographic image. In order to improve the quantitative accuracy, a honeycomb collimator, a honeycomb structure of neutron-absorbing material, was introduced into the conventional neutron radiography system. By setting the neutron-absorbing honeycomb collimator between the object and the imaging system, neutrons scattered in the object are absorbed by the honeycomb material and attenuated before reaching the imaging system, while neutrons transmitted through the sample without any interaction reach the imaging system and form the image of the sample. Because the image formed by purely transmitted neutrons reflects the intrinsic neutronic character of the sample, the image data give quantitative information. In the present experiment, an aluminum honeycomb coated with boron nitride was prepared and used to image standard stepwise samples for the evaluation of the quantitative grade of the newly proposed neutron radiography method. From the comparison between the macroscopic total cross section and the measured thermal-neutron attenuation coefficient for aluminum, copper and hydrocarbons, it was confirmed that they were fairly consistent with each other. It can be concluded that the newly proposed neutron radiography method, using the neutron-absorbing honeycomb collimator to eliminate scattered neutrons, remarkably improves the quantitative accuracy of the neutron radiography technique. (author)
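
    A sketch of the quantitative step this collimator enables: with scattered neutrons removed, transmission follows the Beer-Lambert law, so the attenuation coefficient can be estimated from the stepwise-sample intensities and compared with the macroscopic total cross section. The counts below are invented.

    ```python
    import numpy as np

    thickness = np.array([0.5, 1.0, 1.5, 2.0])                   # cm, aluminum steps
    I0 = 1000.0                                                  # incident counts
    I = np.array([950.0, 902.0, 857.0, 814.0])                   # transmitted counts

    mu = np.log(I0 / I) / thickness                              # effective attenuation (1/cm)
    print(mu.mean())   # compare with the macroscopic total cross section of the material
    ```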

  19. Quantitative analysis of water heavy by NMR spectroscopy

    International Nuclear Information System (INIS)

    Gomez Gil, V.

    1975-01-01

    Nuclear Magnetic Resonance has been applied to a wide variety of quantitative problems; a typical example is the determination of isotopic composition. In this paper two different analytical methods for the determination of water in deuterium oxide are described. The first employs acetonitrile as an internal standard compound; in the second, a calibration curve of signal integral versus amount of D2O is constructed. Both methods give results comparable to those of mass spectrometry or IR spectroscopy. (Author) 5 refs.
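
    A sketch of the internal-standard calculation implied by the first method: the proton-normalized integral ratio to a weighed amount of standard gives the amount of analyte. The numbers and proton counts below are illustrative, not from the paper.

    ```python
    def amount_from_internal_standard(I_analyte, n_H_analyte, I_std, n_H_std, mol_std):
        """Moles of analyte from NMR integrals I, proton counts n_H, and standard moles."""
        return (I_analyte / n_H_analyte) / (I_std / n_H_std) * mol_std

    # residual H2O (2 protons) against an acetonitrile standard (3 protons), invented values
    mol_h2o = amount_from_internal_standard(I_analyte=0.42, n_H_analyte=2,
                                            I_std=3.00, n_H_std=3, mol_std=1.0e-3)
    print(mol_h2o)  # moles of residual H2O in the D2O sample
    ```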

  20. Give blood at CERN

    CERN Multimedia

    SC Unit

    2008-01-01

    ACCIDENTS and ILLNESSES don’t take a break! DO SOMETHING AMAZING - GIVE BLOOD! IT’S IN ALL OUR INTERESTS. 30 July 2008 from 9.30 a.m. to 4 p.m. CERN RESTAURANT NOVAE First floor - Salle des Pas Perdus After you have given blood, you are invited to partake of refreshments kindly offered by NOVAE.

  1. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures, procedures for validation and measurement uncertainty (MU) determination as an important element of quality assurance in the food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, "General requirements for the competence of testing and calibration laboratories", which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; it has recently been widely introduced in food microbiology laboratories in Croatia. In addition to the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are measurement uncertainty (MU) procedures and the design of validation experiments. Those procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analyses are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions of both procedures are shown.

  2. State estimation for wave energy converters

    Energy Technology Data Exchange (ETDEWEB)

    Bacelli, Giorgio; Coe, Ryan Geoffrey

    2017-04-01

    This report gives a brief discussion and examples on the topic of state estimation for wave energy converters (WECs). These methods are intended for use to enable real-time closed loop control of WECs.
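
    As a hedged illustration (not taken from the report), a textbook linear Kalman filter of the kind commonly used for such state estimation; the two-state model, noise levels, and measurements below are invented.

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we measure position only
    Q = 1e-4 * np.eye(2)                    # process noise covariance
    R = np.array([[1e-2]])                  # measurement noise covariance

    x = np.zeros(2)                         # state estimate
    P = np.eye(2)                           # estimate covariance
    for z in [0.10, 0.22, 0.29, 0.41]:      # hypothetical position measurements
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)             # measurement update
        P = (np.eye(2) - K @ H) @ P
    print(x)  # estimated position and velocity
    ```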

  3. Quantitative precipitation estimation in complex orography using quasi-vertical profiles of dual polarization radar variables

    Science.gov (United States)

    Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca

    2017-04-01

    Weather radars are nowadays a unique tool for quantitatively estimating rain precipitation near the surface. This is an important task for a variety of applications, for example feeding hydrological models, mitigating the impact of severe storms at the ground by using radar information in modern warning tools, and aiding validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Measurement (GPM) mission products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many studies have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables, not only to ensure a good level of radar data quality but also as direct input to the rain estimation equations. One of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, at different levels, all the radar variables acquired as well as the rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundred meters above the surface. In this work, we analyze the impact of vertical variations of the rain precipitation profile on several dual polarization radar QPE algorithms when they are tested in a complex-orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of radar vertical profiles when dual polarization QPE algorithms are considered, because in that case all the radar variables used in the rain estimation process should be consistently extrapolated to the surface

  4. Estimation of spectral kurtosis

    Science.gov (United States)

    Sutawanir

    2017-03-01

    Rolling bearings are among the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, as these signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and it provides information additional to that given by the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd, and it falls within the class of model-free, plug-in estimators. Some numerical studies using simulations are discussed to support the methodology. The spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis extends it to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the Fourier-transform pair relating the autocovariance and the power spectrum. The Fourier transform is the main tool for estimation in the frequency domain, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
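
    A sketch of the spectrogram-based plug-in estimator discussed: at each frequency, SK(f) = E|X(t,f)|⁴ / (E|X(t,f)|²)² − 2, which is near zero for a stationary Gaussian signal and large in bands dominated by impulsive fault energy. The signal below is a Gaussian stand-in, not a simulated bearing response.

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 10_000
    t = np.arange(0, 1.0, 1 / fs)
    signal = np.random.default_rng(2).normal(size=t.size)   # stand-in vibration signal

    f, tt, X = stft(signal, fs=fs, nperseg=256)             # spectrogram (STFT)
    mag2 = np.abs(X) ** 2
    sk = np.mean(mag2 ** 2, axis=1) / np.mean(mag2, axis=1) ** 2 - 2
    print(f[np.argmax(sk)])   # frequency with the largest estimated SK
    ```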

  5. Why healthcare workers give prelacteal feeds.

    Science.gov (United States)

    Akuse, R M; Obinya, E A

    2002-08-01

    Because prelacteal feeds can adversely affect breastfeeding, UNICEF/WHO discourage their use unless medically indicated. The study was carried out to determine the proportion of healthcare workers who routinely give prelacteal feeds, and their reasons for doing so; further, to determine whether any differences exist between medically and non-medically trained healthcare workers in their administration of prelacteal feeds. Survey. Primary, secondary and tertiary health facilities in Kaduna township Nigeria. Of 1100 healthcare workers sampled, 747 (68%) responded. Of these 80% had received medical training, 20% had not. Use of a pretested validated questionnaire. Large proportions of both medical and non-medically trained healthcare workers stated they routinely give prelacteal feeds (doctors, 68.2%; nurses, 70.2%; and non-medical, 73.6%). However their reasons for doing so differed significantly (P=0.00001). Nurses gave mainly for perceived breast milk insufficiency, doctors for prevention of dehydration, hypoglycaemia and neonatal jaundice and non-medical staff to prepare the gastrointestinal tract for digestion and to quench thirst. Most healthcare workers (medical and non-medical) routinely and unnecessarily give prelacteal feeds. Therefore training and retraining programmes in lactation management are necessary and must include non-medical staff. These programmes, while emphasizing the danger of giving prelacteal feeds, must deal with the misconceptions of each group. Deliberate efforts have to be made to incorporate clinical training in breastfeeding in curricula of Schools of Medicine and Nursing.

  6. Stereological estimates of nuclear volume and other quantitative variables in supratentorial brain tumors. Practical technique and use in prognostic evaluation

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O

    1991-01-01

    The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean nuclear profile area than those of anaplastic astrocytomas (n = 13) (2p = 3.1·10⁻³ and 2p = 4.8·10⁻³, respectively). No differences were seen between the latter type of tumor and glioblastomas (n = 19). The nuclear index was of the same magnitude in all three tumor types, whereas the mitotic index

  7. A bottom-up approach in estimating the measurement uncertainty and other important considerations for quantitative analyses in drug testing for horses.

    Science.gov (United States)

    Leung, Gary N W; Ho, Emmie N M; Kwok, W Him; Leung, David K K; Tang, Francis P W; Wan, Terence S M; Wong, April S Y; Wong, Colton H F; Wong, Jenny K Y; Yu, Nola H

    2007-09-07

    Quantitative determination, particularly of threshold substances in biological samples, is much more demanding than qualitative identification. A proper assessment of any quantitative determination is the measurement uncertainty (MU) associated with the determined value. The International Standard ISO/IEC 17025, "General requirements for the competence of testing and calibration laboratories", has more prescriptive requirements on the MU than the document it superseded, ISO/IEC Guide 25. Under the 2005 or 1999 versions of the new standard, an estimation of the MU is mandatory for all quantitative determinations. To comply with the new requirement, a protocol was established in the authors' laboratory in 2001. The protocol has since evolved based on our practical experience, and a refined version was adopted in 2004. This paper describes our approach to establishing the MU, as well as some other important considerations, for the quantification of threshold substances in biological samples as applied in the area of doping control for horses. The testing of threshold substances can be viewed as a compliance test (or testing to a specified limit). As such, it should only be necessary to establish the MU at the threshold level. The steps in the "Bottom-Up" approach adopted by us are similar to those described in the EURACHEM/CITAC guide, "Quantifying Uncertainty in Analytical Measurement". They involve first specifying the measurand, including the relationship between the measurand and the input quantities upon which it depends. This is followed by identifying all applicable uncertainty contributions using a "cause and effect" diagram. The magnitude of each uncertainty component is then calculated and converted to a standard uncertainty. A recovery study is also conducted to determine whether the method bias is significant and whether a recovery (or correction) factor needs to be applied. All standard uncertainties with values greater than 30% of the largest one are then used to
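
    A sketch of the final combination step of such a bottom-up budget: component standard uncertainties are screened by the 30% rule from the text, combined in quadrature, and expanded with a coverage factor. The component values below are invented.

    ```python
    import numpy as np

    components = np.array([0.8, 0.35, 0.05, 0.6])    # standard uncertainties (e.g. ng/ml), invented
    largest = components.max()
    kept = components[components > 0.3 * largest]    # drop negligible contributors (30% rule)

    u_c = np.sqrt(np.sum(kept ** 2))                 # combined standard uncertainty (quadrature)
    U = 2 * u_c                                      # expanded uncertainty, k = 2 (~95% coverage)
    print(u_c, U)
    ```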

  8. Fast quantitative MRI as a nonlinear tomography problem

    NARCIS (Netherlands)

    Sbrizzi, Alessandro; Heide, Oscar van der; Cloos, Martijn; van der Toorn, A; Hoogduin, Hans; Luijten, Peter R; van den Berg, Cornelis A T

    2018-01-01

    Quantitative Magnetic Resonance Imaging (MRI) is based on a two-step approach: estimation of the magnetic moment distribution inside the body, followed by a voxel-by-voxel quantification of the human tissue properties. This splitting simplifies the computations but poses several constraints on the

  9. The quantitative genetics of phenotypic variation in animals

    NARCIS (Netherlands)

    Hill, W.G.; Mulder, H.A.; Zhang, X.S.

    2007-01-01

    Considerable attention has been paid to estimating genetic variability in quantitative traits and to how it is maintained and changed by selection in natural and domesticated populations, but rather little attention has been paid to how levels of environmental and phenotypic variance are influenced.

  10. Situational Factors of Influencing Drivers to Give Precedence to Jaywalking Pedestrians at Signalized Crosswalk

    Directory of Open Access Journals (Sweden)

    Xiaobei Jiang

    2011-12-01

    Full Text Available A large number of fatalities are caused by vehicle-pedestrian accidents. In a potential conflict between a vehicle and a jaywalking pedestrian, giving precedence to the pedestrian is a proper decision for the driver to take in order to avoid a collision. Field traffic data were collected by video recording and image processing at two signalized crosswalks. Vehicle speed performance in single vehicle-pedestrian encounters and platoon vehicle-pedestrian encounters was analyzed to understand driver behavior in the conflict process. A binary logit model was proposed to estimate the drivers' giving of precedence as influenced by situational factors, and the model was validated to predict the drivers' choices accurately. The vehicle speed, pedestrian speed, pedestrian lateral distance and the vehicle longitudinal distance to the conflict point were shown to affect the drivers' choices in platoon driving. The results should be helpful in the design of intelligent vehicles and pedestrian protection systems based on knowledge-based decision-making processes.

  11. A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator

    Directory of Open Access Journals (Sweden)

    Munir Ahmed

    2016-06-01

    Full Text Available In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance matrix estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduce a bias adjustment mechanism and give the modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism as proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use an adaptive HCCME rather than the conventional HCCME. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
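
    For reference, the sandwich form behind these estimators is sketched below as White's HC0, with squared OLS residuals on the diagonal (the bias-adjusted and adaptive variants discussed above modify this diagonal); the design matrix and response are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    X = np.column_stack([np.ones(n), rng.random(n)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5 + X[:, 1], n)  # heteroscedastic errors

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                                        # OLS residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    hc0 = XtX_inv @ X.T @ np.diag(e ** 2) @ X @ XtX_inv     # HC0 covariance of beta-hat
    print(np.sqrt(np.diag(hc0)))                            # heteroscedasticity-robust SEs
    ```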

  12. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

    Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in IR^d. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (specific connectivity of a population of sintered bronze particles) is given.

  13. Will Quantitative Proteomics Redefine Some of the Key Concepts in Skeletal Muscle Physiology?

    Science.gov (United States)

    Gizak, Agnieszka; Rakus, Dariusz

    2016-01-11

    Molecular and cellular biology methodology is traditionally based on the reasoning called "the mechanistic explanation". In practice, this means identifying and selecting correlations between biological processes which result from our manipulation of a biological system. In theory, a successful application of this approach requires precise knowledge about all parameters of a studied system. However, in practice, due to the systems' complexity, this requirement is rarely, if ever, accomplished. Typically, it is limited to quantitative or semi-quantitative measurements of selected parameters (e.g., concentrations of some metabolites), and a qualitative or semi-quantitative description of changes in expression/post-translational modifications within selected proteins. A quantitative proteomics approach offers the possibility of quantitative characterization of the entire proteome of a biological system, in the context of protein titers as well as their post-translational modifications. This enables not only more accurate testing of novel hypotheses but also provides tools that can be used to verify some of the most fundamental dogmas of modern biology. In this short review, we discuss some of the consequences of using quantitative proteomics to verify several key concepts in skeletal muscle physiology.

  14. Will Quantitative Proteomics Redefine Some of the Key Concepts in Skeletal Muscle Physiology?

    Directory of Open Access Journals (Sweden)

    Agnieszka Gizak

    2016-01-01

    Full Text Available Molecular and cellular biology methodology is traditionally based on the reasoning called “the mechanistic explanation”. In practice, this means identifying and selecting correlations between biological processes which result from our manipulation of a biological system. In theory, a successful application of this approach requires precise knowledge about all parameters of a studied system. However, in practice, due to the systems’ complexity, this requirement is rarely, if ever, accomplished. Typically, it is limited to quantitative or semi-quantitative measurements of selected parameters (e.g., concentrations of some metabolites), and a qualitative or semi-quantitative description of changes in expression/post-translational modifications within selected proteins. A quantitative proteomics approach offers the possibility of quantitative characterization of the entire proteome of a biological system, in the context of protein titers as well as their post-translational modifications. This enables not only more accurate testing of novel hypotheses but also provides tools that can be used to verify some of the most fundamental dogmas of modern biology. In this short review, we discuss some of the consequences of using quantitative proteomics to verify several key concepts in skeletal muscle physiology.

  15. Quantitative histopathological variables in in situ and invasive ductal and lobular carcinomas of the breast

    DEFF Research Database (Denmark)

    Ladekarl, M; Sørensen, Flemming Brandt

    1993-01-01

    This study was carried out to compare quantitative histopathological estimates obtained in normal breast epithelium (N = 15), lobular carcinoma in situ (N = 29), ductal carcinoma in situ (N = 24), invasive lobular carcinoma (N = 39), and invasive ductal carcinoma (N = 71) of the female breast. Using unbiased stereology, the three-dimensional mean nuclear size, vV(nuc), was estimated in routine histological sections, along with morphometric point-counting based estimates of the mean nuclear profile area, aH(nuc), and estimates of the nuclear density index, NI, the mitotic index, MI, … with those obtained in tumors of pure lobular carcinoma in situ (N = 7), only the difference in mean NI reached statistical significance (2p = 0.001). Several significant differences were found between means of quantitative histopathological estimates obtained in normal breast epithelium, pure in situ …

  16. Quantitative histopathological variables in in situ and invasive ductal and lobular carcinomas of the breast

    DEFF Research Database (Denmark)

    Ladekarl, M; Sørensen, Flemming Brandt

    1993-01-01

    This study was carried out to compare quantitative histopathological estimates obtained in normal breast epithelium (N = 15), lobular carcinoma in situ (N = 29), ductal carcinoma in situ (N = 24), invasive lobular carcinoma (N = 39), and invasive ductal carcinoma (N = 71) of the female breast. Using unbiased stereology, the three-dimensional mean nuclear size, vV(nuc), was estimated in routine histological sections, along with morphometric point-counting based estimates of the mean nuclear profile area, aH(nuc), and estimates of the nuclear density index, NI, the mitotic index, MI, … obtained in tumors of pure lobular carcinoma in situ (N = 7), only the difference in mean NI reached statistical significance (2p = 0.001). Several significant differences were found between means of quantitative histopathological estimates obtained in normal breast epithelium, pure in situ lesions …

  17. Probability Sampling - A Guideline for Quantitative Health Care ...

    African Journals Online (AJOL)

    A more direct definition is the method used for selecting a given ... description of the chosen population, the sampling procedure giving ... target population, precision, and stratification. The ... survey estimates, it is recommended that researchers first analyze a .... The optimum sample size has a relation to the type of planned ...

  18. Quantitative high-resolution genomic analysis of single cancer cells.

    Directory of Open Access Journals (Sweden)

    Juliane Hannemann

    Full Text Available During cancer progression, specific genomic aberrations arise that can determine the scope of the disease and can be used as predictive or prognostic markers. The detection of specific gene amplifications or deletions in single blood-borne or disseminated tumour cells that may give rise to the development of metastases is of great clinical interest but technically challenging. In this study, we present a method for quantitative high-resolution genomic analysis of single cells. Cells were isolated under permanent microscopic control followed by high-fidelity whole genome amplification and subsequent analyses by fine tiling array-CGH and qPCR. The assay was applied to single breast cancer cells to analyze the chromosomal region centred on the therapeutically relevant EGFR gene. This method allows precise quantitative analysis of copy number variations in single cell diagnostics.

  19. Multilayers quantitative X-ray fluorescence analysis applied to easel paintings.

    Science.gov (United States)

    de Viguerie, Laurence; Sole, V Armando; Walter, Philippe

    2009-12-01

    X-ray fluorescence spectrometry (XRF) allows a rapid and simple determination of the elemental composition of a material. As a non-destructive tool, it has been extensively used for analysis in art and archaeology since the early 1970s. Whereas it is commonly used for qualitative analysis, recent efforts have been made to develop quantitative treatment, even with portable systems. However, the interpretation of the results obtained with this technique can turn out to be problematic in the case of layered structures such as easel paintings. The use of differential X-ray attenuation enables modelling of the various layers: indeed, the absorption of X-rays through different layers results in modification of the intensity ratios between the different characteristic lines. This work focuses on the possibility of using XRF with the fundamental parameters method to reconstruct the composition and thickness of the layers. The method was tested on several multilayer standards and gives a maximum error of 15% for thicknesses and errors of 10% for concentrations. On a painting test sample that was rather inhomogeneous, the XRF analysis provides an average value. The method was applied in situ to estimate the thickness of the layers of a painting by Marco d'Oggiono, a pupil of Leonardo da Vinci.
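
    The fundamental-parameters computation itself is involved; the sketch below only illustrates the underlying physical principle on hypothetical numbers: two characteristic lines emitted beneath an overlayer are attenuated differently, so their measured intensity ratio can be inverted for the layer thickness.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    mu1, mu2 = 120.0, 40.0   # mass attenuation coefficients (cm^2/g) of lines 1 and 2 -- hypothetical
    rho = 3.0                # overlayer density (g/cm^3) -- hypothetical
    R0 = 1.8                 # unattenuated intensity ratio I1/I2 -- hypothetical

    def ratio(t_cm):
        """Line-intensity ratio after both lines cross an overlayer of thickness t_cm."""
        return R0 * np.exp(-(mu1 - mu2) * rho * t_cm)

    R_meas = 1.2             # measured ratio -- hypothetical
    t = brentq(lambda t: ratio(t) - R_meas, 0.0, 0.1)   # invert for thickness
    print(f"estimated overlayer thickness: {t * 1e4:.1f} micrometres")
    ```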

  20. Estimation of collective instabilities in RHIC

    International Nuclear Information System (INIS)

    MacKay, W.W.; Blaskiewicz, M.; Deng, D.; Mane, V.; Peggs, S.; Ratti, A.; Rose, J.; Shea, T.J.; Wei, J.

    1995-01-01

    The authors have estimated the broadband impedance in RHIC to be |Z/n| … for Au+79 ions at transition, with an estimated 10% growth in emittance for Z/n = 1.5 Ω. They summarize the sources of broad and narrow band impedances in RHIC and investigate the multibunch instability limits throughout the machine cycle. The largest contribution to the broadband impedance comes from the abort and injection kickers. Since RHIC is designed to accelerate fully stripped ions from H+ up to Au+79, they give results for both protons and gold ions; other ions should give results somewhere between these two extremes. All ion species are expected to be stable during storage. At lower energies damping systems and chromaticity corrections will limit any growth to acceptable levels during the short time it takes to inject and accelerate the beams.

  1. New method for determination of quantitative indices of zooplankton feeding with the use of phosphorus isotopes under close to "in situ" conditions

    International Nuclear Information System (INIS)

    Zesenko, A.Ja.; Pavlovskaya, T.V.

    1985-01-01

    A new method for determining food rations, duration of food digestion and elements of the food balance of zooplankton fed on natural plankton was elaborated. The method is based on the fact that radioactive mineral phosphorus enters all components of plankton in natural marine water. Fractionating the natural plankton according to the size of food particles, with determination of the specific radioactivity in each fraction, makes it possible to elucidate the selectivity of feeding in zooplankton, its role in grazing and transforming matter, and the regeneration of mineral phosphorus in pelagic ecosystems. The high sensitivity of the method (1 × 10⁻⁴ μg P/indiv.) allows experiments on separate individuals and quantitative estimation of the variability of their food rations and of separate elements of the food balance. 28 refs., 3 figs., 3 tabs. (author)

  2. Quantitative Proteomic Analysis of Sulfolobus solfataricus Membrane Proteins

    NARCIS (Netherlands)

    Pham, T.K.; Sierocinski, P.; Oost, van der J.; Wright, P.C.

    2010-01-01

    A quantitative proteomic analysis of the membrane of the archaeon Sulfolobus solfataricus P2 using iTRAQ was successfully demonstrated in this technical note. The estimated number of membrane proteins of this organism is 883 (predicted based on the GRAVY score), corresponding to 30% of the total …

  3. The experience of the Cuban program with children from territories affected by the Chernobyl accident

    International Nuclear Information System (INIS)

    Garcia, O.; Llanes, R.

    1998-01-01

    Since 1990 a program has operated in Cuba to offer specialized medical attention and to develop a sanatorium rehabilitation plan for children from the different areas affected by the radioactive contamination resulting from the Chernobyl accident.

  4. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from that used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  5. A quantitative model for estimating mean annual soil loss in cultivated land using 137Cs measurements

    International Nuclear Information System (INIS)

    Yang Hao; Zhao Qiguo; Du Mingyuan; Minami, Katsuyuki; Hatta, Tamao

    2000-01-01

    The radioisotope 137Cs has been widely used to determine rates of cultivated soil loss. Many calibration relationships (including both empirical relationships and theoretical models) have been employed to estimate erosion rates from the amount of 137Cs lost from the cultivated soil profile. However, there are important limitations which restrict the reliability of these models, which consider only a uniform distribution of 137Cs within the plough layer and the plough depth. As a result, erosion rates may be overestimated or underestimated. This article presents a quantitative model relating the amount of 137Cs lost from the cultivated soil profile to the rate of soil erosion. In constructing this mass-balance model, we considered the following parameters: the remaining fraction of the surface enrichment layer (F_R), the thickness of the surface enrichment layer (H_s), the depth of the plough layer (H_p), the input fraction of the total 137Cs fallout deposition during a given year t (F_t), the radioactive decay of 137Cs (k), and the sampling year (t). The simulation results showed that the erosion rates estimated using this model were very sensitive to changes in the values of the parameters F_R, H_s, and H_p. We also observed that the relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic, and is very complex. Although the model is an improvement over existing approaches to deriving calibration relationships for cultivated soil, it requires empirical information on local soil properties and the behavior of 137Cs in the soil profile. There is clearly still a need for more precise information on the latter aspect and, in particular, on the retention of 137Cs fallout in the top few millimeters of the soil profile and on the enrichment and depletion effects associated with soil redistribution (i.e., for determining accurate values of F_R and H_s). (author)
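
    The paper's full model is not reproduced in the abstract; the sketch below shows only the simplified proportional mass-balance idea that such models refine (the surface-enrichment parameters F_R and H_s are omitted here). All deposition and tillage values are hypothetical.

    ```python
    import numpy as np

    LAMBDA = np.log(2) / 30.17      # 137Cs decay constant (1/yr)

    def residual_inventory(h_cm, Hp_cm, deposition):
        """137Cs inventory left after cultivation losing h_cm of soil per year.

        Tillage is assumed to mix 137Cs uniformly through the plough depth Hp_cm,
        so each year's erosion removes the fraction h_cm / Hp_cm of the inventory.
        """
        A = 0.0
        for D in deposition:                     # annual fallout input (Bq/m^2)
            A = (A + D) * (1.0 - h_cm / Hp_cm)   # mix in fallout, erode one layer
            A *= np.exp(-LAMBDA)                 # radioactive decay over the year
        return A

    deposition = np.zeros(40)
    deposition[:5] = [50.0, 300.0, 150.0, 40.0, 10.0]   # hypothetical fallout record (Bq/m^2)

    ref = residual_inventory(0.0, 20.0, deposition)     # undisturbed reference site
    for h in (0.05, 0.1, 0.2):                          # candidate erosion rates (cm/yr)
        loss = 100.0 * (1.0 - residual_inventory(h, 20.0, deposition) / ref)
        print(f"h = {h:4.2f} cm/yr -> {loss:5.1f}% 137Cs depletion vs reference")
    ```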

  6. EX VIVO STUDY OF QUANTITATIVE ULTRASOUND PARAMETERS IN FATTY RABBIT LIVERS

    Science.gov (United States)

    Ghoshal, Goutam; Lavarello, Roberto J.; Kemmerer, Jeremy P.; Miller, Rita J.; Oelze, Michael L.

    2012-01-01

    Nonalcoholic fatty liver disease (NAFLD) affects more than 30% of Americans, and with increasing problems of obesity in the United States, NAFLD is poised to become an even more serious medical concern. At present, accurate classification of steatosis (fatty liver) represents a significant challenge. In this study, the use of high-frequency (8 to 25 MHz) quantitative ultrasound (QUS) imaging to quantify fatty liver was explored. QUS is an imaging technique that can be used to quantify properties of tissue giving rise to scattered ultrasound. The changes in the ultrasound properties of livers in rabbits undergoing atherogenic diets of varying durations were investigated using QUS. Rabbits were placed on a special fatty diet for 0, 3, or 6 weeks. The fattiness of the livers was quantified by estimating the total lipid content of the livers. Ultrasonic properties, such as speed of sound, attenuation, and backscatter coefficients, were estimated in ex vivo rabbit liver samples from animals that had been on the diet for varying periods. Two QUS parameters were estimated based on the backscatter coefficient: effective scatterer diameter (ESD) and effective acoustic concentration (EAC), using a spherical Gaussian scattering model. Two parameters were estimated based on the backscattered envelope statistics (the k parameter and the μ parameter) according to the homodyned K distribution. The speed of sound decreased from 1574 to 1565 m/s and the attenuation coefficient increased from 0.71 to 1.27 dB/cm/MHz, respectively, with increasing fat content in the liver. The ESD decreased from 31 to 17 μm and the EAC increased from 38 to 63 dB/cm³ with increasing fat content in the liver. A significant increase in the μ parameter from 0.18 to 0.93 scatterers/mm³ was observed with increasing fat content in the liver samples. The results of this study indicate that QUS parameters are sensitive to fat content in the liver. PMID:23062376
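
    As an illustration of how ESD can be obtained from a backscatter coefficient under a spherical Gaussian scattering model, the sketch below uses the standard linearized fit (ln BSC - 4 ln f is linear in f²) on noiseless synthetic data; it is not the authors' processing chain, and the constants are hypothetical.

    ```python
    import numpy as np

    c = 1540.0                        # speed of sound (m/s)
    f = np.linspace(8e6, 25e6, 50)    # analysis band (Hz)
    k = 2 * np.pi * f / c

    a_true = 12e-6                    # true effective scatterer radius (m) -- hypothetical
    bsc = 1e3 * f**4 * np.exp(-0.827 * (k * a_true)**2)   # synthetic backscatter coefficient

    # ln BSC - 4 ln f = const - 0.827 * (2*pi/c)^2 * a^2 * f^2, so fit a line in f^2
    slope, _ = np.polyfit(f**2, np.log(bsc) - 4 * np.log(f), 1)
    a_est = np.sqrt(-slope) * c / (2 * np.pi * np.sqrt(0.827))
    print(f"ESD estimate: {2 * a_est * 1e6:.1f} um (true {2 * a_true * 1e6:.1f} um)")
    ```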

  7. Children are sensitive to norms of giving.

    Science.gov (United States)

    McAuliffe, Katherine; Raihani, Nichola J; Dunham, Yarrow

    2017-10-01

    People across societies engage in costly sharing, but the extent of such sharing shows striking cultural variation, highlighting the importance of local norms in shaping generosity. Despite this acknowledged role for norms, it is unclear when they begin to exert their influence in development. Here we use a Dictator Game to investigate the extent to which 4- to 9-year-old children are sensitive to selfish (give 20%) and generous (give 80%) norms. Additionally, we varied whether children were told how much other children give (descriptive norm) or what they should give according to an adult (injunctive norm). Results showed that children generally gave more when they were exposed to a generous norm. However, patterns of compliance varied with age. Younger children were more likely to comply with the selfish norm, suggesting a licensing effect. By contrast, older children were more influenced by the generous norm, yet capped their donations at 50%, perhaps adhering to a pre-existing norm of equality. Children were not differentially influenced by descriptive or injunctive norms, suggesting a primacy of norm content over norm format. Together, our findings indicate that while generosity is malleable in children, normative information does not completely override pre-existing biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A calibration approach to glandular tissue composition estimation in digital mammography

    International Nuclear Information System (INIS)

    Kaufhold, J.; Thomas, J.A.; Eberhard, J.W.; Galbo, C.E.; Trotter, D.E. Gonzalez

    2002-01-01

    The healthy breast is almost entirely composed of a mixture of fatty, epithelial, and stromal tissues which can be grouped into two distinctly attenuating tissue types: fatty and glandular. Further, the amount of glandular tissue is linked to breast cancer risk, so an objective quantitative analysis of glandular tissue can aid in risk estimation. Highnam and Brady have measured glandular tissue composition objectively. However, they argue that their work should only be used for 'relative' tissue measurements unless a careful calibration has been performed. In this work, we perform such a 'careful calibration' on a digital mammography system and use it to estimate the tissue composition of patient breasts. We imaged 0%, 50%, and 100% glandular-equivalent phantoms of varying thicknesses for a number of clinically relevant x-ray techniques on a digital mammography system. From these images, we extracted mean signal and noise levels and computed calibration curves that can be used for quantitative tissue composition estimation. In this way, we calculate the percent glandular composition of a patient breast on a pixelwise basis. This tissue composition estimation method was applied to 23 digital mammograms. We estimated the quantitative impact of different error sources on the estimates of tissue composition. These error sources include compressed breast height estimation error, residual scattered radiation, quantum noise, and beam hardening. Errors in the compressed breast height estimate contribute the most error in tissue composition, on the order of ±7% for a 4 cm compressed breast height. The spatially varying scattered radiation will contribute quantitatively less error overall, but may be significant in regions near the skinline. It is calculated that for a 4 cm compressed breast height, a residual scatter signal error is mitigated approximately sixfold in the composition estimate. The error in composition due to the quantum noise, which is the limiting …
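
    A minimal sketch of the calibration idea, assuming hypothetical calibrated log-signals for the 0%, 50% and 100% glandular-equivalent phantoms at the measured compressed breast height: each pixel's signal is converted to percent glandular by interpolating along the calibration curve.

    ```python
    import numpy as np

    # calibration for the three phantoms at a 4 cm compressed height -- hypothetical values
    cal_glandular = np.array([0.0, 50.0, 100.0])   # percent glandular
    cal_signal = np.array([7.80, 7.55, 7.31])      # mean log-signals from phantom images

    def percent_glandular(pixel_signal):
        # np.interp needs increasing x, so flip both calibration arrays
        return np.interp(pixel_signal, cal_signal[::-1], cal_glandular[::-1])

    pixels = np.array([7.75, 7.60, 7.40])          # log-signals of three image pixels
    print(percent_glandular(pixels))               # -> approx [10.0, 40.0, 81.25] percent
    ```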

  9. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on samples of small length. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the international world stock market indices. For the two separate periods of 2002-2005 and 2006-2009, we estimate the tail exponent.
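
    The sketch below reproduces the finite-sample bias that motivates the paper: the Hill estimator applied to α-stable samples overestimates the true tail exponent, and the estimate depends strongly on the chosen tail size k. (The paper's Monte Carlo-based correction itself is not implemented here.)

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    def hill(x, k):
        """Hill tail-exponent estimate from the k largest order statistics."""
        xs = np.sort(np.abs(x))[::-1]
        return k / np.sum(np.log(xs[:k] / xs[k]))

    alpha_true = 1.6
    x = levy_stable.rvs(alpha_true, 0.0, size=10_000, random_state=2)
    for k in (50, 200, 1000):
        print(f"k={k:5d}: Hill estimate = {hill(x, k):.2f} (true alpha = {alpha_true})")
    ```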

  10. Malware Function Estimation Using API in Initial Behavior

    OpenAIRE

    KAWAGUCHI, Naoto; OMOTE, Kazumasa

    2017-01-01

    Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct an efficient analysis of malware, estimating their functions in advance is effective for deciding which malware to analyze first. However, estimating malware functions has been difficult due to the increasing sophistication of malware. In fact, previous research does not estimate the …

  11. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    Science.gov (United States)

    Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.

    2015-02-01

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ~0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce the cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol. 44 R1-22).
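
    A small sketch of the "optimum energy-independent cross section scaling factor" idea on hypothetical numbers: scaling the cross section scales the attenuation exponent, so the optimum factor has a closed weighted-least-squares form in log-transmission space.

    ```python
    import numpy as np

    # measured vs calculated transmissions for a few attenuator thicknesses -- hypothetical
    meas = np.array([0.412, 0.287, 0.198])
    calc = np.array([0.405, 0.291, 0.194])
    sigma_log = 0.005 * np.ones_like(meas)   # ~0.5% (k = 1) relative uncertainty

    # transmission = exp(-mu * t); scaling mu by s scales the log-transmission linearly,
    # so minimize sum w * (log meas - s * log calc)^2 for the closed-form optimum s.
    w = 1.0 / sigma_log**2
    lm, lc = np.log(meas), np.log(calc)
    s = np.sum(w * lm * lc) / np.sum(w * lc**2)
    print(f"optimum scaling factor s = {s:.4f} (cross-section shift {abs(s - 1) * 100:.2f}%)")
    ```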

  12. OPINION GIVING SERVICES AS A SOURCE OF CONSUMER INFORMATION

    Directory of Open Access Journals (Sweden)

    Joanna Wyrwisz

    2015-09-01

    Full Text Available The goal of the article is to determine the place and role of opinion giving services in consumer behaviour. The discussion is conducted around the thesis that, in the information society, opinion giving services constitute an important source of information for consumers in the process of selecting and purchasing both products and services. The article presents a research approach based on theoretical and empirical examination. The discussion starts by presenting a definition and the types of opinion giving services, which form the basis for characterizing the activity and usefulness of web portals collecting consumer opinions. The use of opinion giving services in the purchase process was evaluated. A substantial interest in other consumers' opinions posted on the Internet was observed, together with a perception of them as credible. A positive assessment of the functionality of opinion giving services was noted.

  13. The New Planned Giving Officer.

    Science.gov (United States)

    Jordan, Ronald R.; Quynn, Katelyn L.

    1994-01-01

    A planned giving officer is seen as an asset to college/university development for technical expertise, credibility, and connections. Attorneys, certified public accountants, bank trust officers, financial planners, investment advisers, life insurance agents, and real estate brokers may be qualified but probably also need training. (MSE)

  14. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Science.gov (United States)

    2012-07-17

    … models to generate quantitative estimates of the benefits and risks of influenza vaccination. The public … Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A Technical Workshop …

  15. Children are sensitive to norms of giving

    OpenAIRE

    McAuliffe, K.; Raihani, N. J.; Dunham, Y.

    2017-01-01

    People across societies engage in costly sharing, but the extent of such sharing shows striking cultural variation, highlighting the importance of local norms in shaping generosity. Despite this acknowledged role for norms, it is unclear when they begin to exert their influence in development. Here we use a Dictator Game to investigate the extent to which 4- to 9-year-old children are sensitive to selfish (give 20%) and generous (give 80%) norms. Additionally, we varied whether children were ...

  16. Characterization of a multidrug resistant Salmonella enterica give ...

    African Journals Online (AJOL)

    Salmonella enterica Give is one of the serotypes that have been incriminated in Salmonella infections, sometimes associated with hospitalization and mortalities in humans and animals in some parts of the world. In this work, we characterized one Salmonella Give isolated from a cloacal swab of an Agama agama lizard …

  17. Qualitative and quantitative estimations of the effect of geomagnetic field variations on human brain functional state

    International Nuclear Information System (INIS)

    Belisheva, N.K.; Popov, A.N.; Petukhova, N.V.; Pavlova, L.P.; Osipov, K.S.; Tkachenko, S.Eh.; Baranova, T.I.

    1995-01-01

    A comparison of the functional dynamics of the human brain with reference to the qualitative and quantitative characteristics of local geomagnetic field (GMF) variations was conducted. Steady and unsteady states of the human brain can be determined by geomagnetic disturbances before the observation period, by the structure and doses of GMF variations, and by different combinations of the qualitative and quantitative characteristics of GMF variations. A decrease in the optimal level of GMF activity and the appearance of aperiodic GMF disturbances can be causes of an unsteady brain state. 18 refs.; 3 figs

  18. Quantitative image fusion in infrared radiometry

    Science.gov (United States)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.

  19. Developments in quantitative electron probe microanalysis

    International Nuclear Information System (INIS)

    Tixier, R.

    1977-01-01

    A study is made of the range of validity of the correction formulae used in the analysis of massive specimens. The method used is original; we have shown that it is possible to use an invariance property of corrected intensity ratios for standards. This invariance property provides a test for the self-consistency of the theory. The theoretical and experimental conditions required for quantitative electron probe microanalysis of thin transmission electron microscope specimens are examined. The correction formulae for atomic number, absorption and fluorescence effects are calculated. Several examples of experimental results are given, relating to the quantitative analysis of intermetallic precipitates and carbides in steels. Advances in applications of electron probe instruments related to the use of computers and the present development of fully automated instruments are reviewed. The statistics necessary for measurements of X-ray count data are studied. Estimation procedures and tests are developed. These methods are used to perform a statistical check of electron probe microanalysis measurements and to reject rogue values. An estimator of the confidence interval of the apparent concentration is derived. Formulae were also obtained to optimize the counting time in order to obtain the best precision in a minimum amount of time [fr

  20. Principle of Care and Giving to Help People in Need.

    Science.gov (United States)

    Bekkers, René; Ottoni-Wilhelm, Mark

    2016-01-01

    Theories of moral development posit that an internalized moral value that one should help those in need (the principle of care) evokes helping behaviour in situations where empathic concern does not. Examples of such situations are helping behaviours that involve cognitive deliberation and planning, that benefit others who are known only in the abstract, and who are out-group members. Charitable giving to help people in need is an important helping behaviour that has these characteristics. Therefore we hypothesized that the principle of care would be positively associated with charitable giving to help people in need, and that the principle of care would mediate the empathic concern-giving relationship. The two hypotheses were tested across four studies. The studies used four different samples, including three nationally representative samples from the American and Dutch populations, and included both self-reports of giving (Studies 1-3), giving observed in a survey experiment (Study 3), and giving observed in a laboratory experiment (Study 4). The evidence from these studies indicated that a moral principle to care for others was associated with charitable giving to help people in need and mediated the empathic concern-giving relationship. © 2016 The Authors. European Journal of Personality published by John Wiley & Sons Ltd on behalf of European Association of Personality Psychology.

  1. Comparative study of various methods of primary energy estimation in nucleon-nucleon interactions

    International Nuclear Information System (INIS)

    Goyal, D.P.; Yugindro Singh, K.; Singh, S.

    1986-01-01

    The various available methods for the estimation of primary energy in nucleon-nucleon interactions have been examined by using the experimental data on angular distributions of shower particles from p-N interactions at two accelerator energies, 67 and 400 GeV. Three different groups of shower particle multiplicities have been considered for interactions at both energies. It is found that the different methods give quite different estimates of primary energy. Moreover, each method is found to give different values of energy according to the choice of multiplicity group. It is concluded that the E_ch method is relatively the best among all the available methods, and that within this method, consideration of the group of small multiplicities gives a much better result. The method also yields plausible estimates of inelasticity in high energy nucleon-nucleon interactions. (orig.)

  2. Integration of Qualitative and Quantitative Methods: Building and Interpreting Clusters from Grounded Theory and Discourse Analysis

    Directory of Open Access Journals (Sweden)

    Aldo Merlino

    2007-01-01

    Full Text Available Qualitative methods present a wide spectrum of application possibilities as well as opportunities for combining qualitative and quantitative methods. In the social sciences, fruitful theoretical discussions and a great deal of empirical research have taken place. This article introduces an empirical investigation which demonstrates the logic of combining methodologies as well as the collection and interpretation, both sequential and simultaneous, of qualitative and quantitative data. Specifically, the investigation process is described, beginning with a grounded theory methodology and its combination with the techniques of structural semiotic discourse analysis to generate, in a first phase, an instrument for quantitative measurement and to understand, in a second phase, clusters obtained by quantitative analysis. This work illustrates how qualitative methods allow for the comprehension of the discursive and behavioral elements under study, and how they function as support for making sense of and giving meaning to quantitative data. URN: urn:nbn:de:0114-fqs0701219

  3. Accuracy of prognosis estimates by four palliative care teams: a prospective cohort study

    Directory of Open Access Journals (Sweden)

    Costantini Massimo

    2002-03-01

    Full Text Available Abstract. Background: Prognosis estimates are used to access services, but are often inaccurate. This study aimed to determine the accuracy of giving a prognosis range. Methods and measurements: A prospective cohort study in four multi-professional palliative care teams in England collected data on 275 consecutive cancer referrals who died. Prognosis estimates (minimum-maximum) at referral and patient characteristics were recorded by staff, and later compared with actual survival. Results: Minimum survival estimates ranged … Conclusions: Offering a prognosis range has higher levels of accuracy (about double that of traditional estimates), but is still very often inaccurate, except very close to death. Where possible clinicians should discuss scenarios with patients, rather than giving a prognosis range.

  4. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
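
    A hedged sketch of the core device, not the authors' full penalized-likelihood machinery inside functional mapping: each time point is regressed on its predecessors with an L2 (ridge) penalty, and the resulting autoregressive coefficients and innovation variances are reassembled, via the modified Cholesky decomposition, into a covariance estimate.

    ```python
    import numpy as np

    def cholesky_cov(Y, lam=0.1):
        """Y: (n subjects) x (T time points). Returns a T x T covariance estimate."""
        n, T = Y.shape
        Yc = Y - Y.mean(axis=0)
        Tmat = np.eye(T)                   # unit lower-triangular factor
        d = np.empty(T)                    # innovation variances
        d[0] = Yc[:, 0].var()
        for t in range(1, T):
            X = Yc[:, :t]
            # ridge (L2-penalized) regression of time t on its predecessors
            phi = np.linalg.solve(X.T @ X + lam * np.eye(t), X.T @ Yc[:, t])
            Tmat[t, :t] = -phi
            d[t] = (Yc[:, t] - X @ phi).var()
        Tinv = np.linalg.inv(Tmat)
        return Tinv @ np.diag(d) @ Tinv.T  # Sigma = T^{-1} D T^{-T}

    rng = np.random.default_rng(3)
    true = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.6], [0.3, 0.6, 1.0]])
    Y = rng.multivariate_normal(np.zeros(3), true, size=200)
    print(np.round(cholesky_cov(Y), 2))    # should be close to `true`
    ```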

  5. Anisotropic Density Estimation in Global Illumination

    DEFF Research Database (Denmark)

    Schjøth, Lars

    2009-01-01

    Density estimation employed in multi-pass global illumination algorithms gives rise to a trade-off problem between bias and noise. The problem is most evident as blurring of strong illumination features. This thesis addresses the problem, presenting four methods that reduce both noise …

  6. Robust estimation of adaptive tensors of curvature by tensor voting.

    Science.gov (United States)

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  7. Knowledge and reported confidence of final year midwifery students regarding giving advice on contraception and sexual health.

    Science.gov (United States)

    Walker, Susan H; Davis, Geraldine

    2014-05-01

    This study explored the views of three cohorts of final year midwifery students regarding their confidence in giving advice to women on contraception and sexual health in the postnatal period. The project also investigated knowledge of contraception using a factual quiz based on clinical scenarios regarding contraception and sexual health in the postpartum period. A mixed-method design was used, with qualitative data from focus groups and mixed qualitative and quantitative data from a paper-based questionnaire. The project was carried out in one higher education institution in England. Findings demonstrate that expressed confidence varies according to contraceptive method, with most confidence being reported when advising on the male condom. The findings of the factual quiz indicate that students applied theoretical knowledge poorly in a practically oriented context; these findings also indicated that most students limited their advice to general advice. The paper concludes that midwifery students need more practically oriented education in contraception and sexual health, and that the role of mentors is very important in helping students feel confident when giving advice in this area. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Contractor-style tunnel cost estimating

    International Nuclear Information System (INIS)

    Scapuzzi, D.

    1990-06-01

    Keeping pace with recent advances in construction technology is a challenge for the cost estimating engineer. Using an estimating style that simulates the actual construction process and is similar in style to the contractor's estimate will give a realistic view of underground construction costs. For a contractor-style estimate, a mining method is chosen; labor crews, plant and equipment are selected; and advance rates are calculated for the various phases of work, which are used to determine the length of time necessary to complete each phase of work. The durations are multiplied by the cost of labor and equipment per unit of time and, along with the costs for materials and supplies, combine to complete the estimate. Variations in advance rates, ground support, labor crew size, or other areas are more easily analyzed for their overall effect on the cost and schedule of a project. 14 figs
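
    The arithmetic described above is simple enough to show directly; the numbers below are hypothetical, purely to illustrate the advance-rate-to-duration-to-cost pipeline.

    ```python
    # (name, length_m, advance_m_per_shift, crew_cost_per_shift, materials_per_m)
    phases = [
        ("excavation",     1200, 6.0, 9500.0, 220.0),
        ("ground support", 1200, 8.0, 7800.0, 450.0),
        ("final lining",   1200, 5.0, 8200.0, 600.0),
    ]
    total = 0.0
    for name, length, rate, crew, materials in phases:
        shifts = length / rate                       # duration of this phase
        cost = shifts * crew + length * materials    # time-based + quantity-based costs
        total += cost
        print(f"{name:14s}: {shifts:6.0f} shifts, ${cost:,.0f}")
    print(f"{'total':14s}: ${total:,.0f}")
    ```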

  9. Quantitative kinetics of In-111 autologous (In-AP) and homologous (Cr-HP) platelets in immune thrombocytopenic purpura (ITP)

    International Nuclear Information System (INIS)

    Lotter, M.G.; Heyns, A.D.P.; Badenhorst, P.N.; Minnaar, P.C.

    1984-01-01

    Contrary to the accepted view, the authors have found that platelet turnover is not always increased in ITP if the mean platelet survival time (PS) is measured with In-AP. The authors investigated the possible cause of the discrepancy by comparing the kinetics of In-AP with those of Cr-HP in 10 patients with ITP. PS was estimated with the multiple-hit model. The equilibrium and final in vivo distributions of In-AP were quantitated with the geometrical mean method. The patients could be divided into those with either splenic or diffuse RES platelet destruction. The authors conclude that in ITP the platelet survival of In-AP is significantly (P < .05) longer than that of Cr-HP. Platelet turnover measured with In-AP is only normal in patients with mainly splenic platelet sequestration. Results with Cr-HP give a false impression of PS. It seems that in ITP those patients with severe disease also have a platelet production defect

  10. Neurocultural evidence that ideal affect match promotes giving.

    Science.gov (United States)

    Park, BoKyung; Blevins, Elizabeth; Knutson, Brian; Tsai, Jeanne L

    2017-07-01

    Why do people give to strangers? We propose that people trust and give more to those whose emotional expressions match how they ideally want to feel ("ideal affect match"). European Americans and Koreans played multiple trials of the Dictator Game with recipients who varied in emotional expression (excited, calm), race (White, Asian) and sex (male, female). Consistent with their culture's valued affect, European Americans trusted and gave more to excited than calm recipients, whereas Koreans trusted and gave more to calm than excited recipients. These findings held regardless of recipient race and sex. We then used fMRI to probe potential affective and mentalizing mechanisms. Increased activity in the nucleus accumbens (associated with reward anticipation) predicted giving, as did decreased activity in the right temporo-parietal junction (rTPJ; associated with reduced belief prediction error). Ideal affect match decreased rTPJ activity, suggesting that people may trust and give more to strangers whom they perceive to share their affective values. © The Author (2017). Published by Oxford University Press.

  11. Quantitative Measurements using Ultrasound Vector Flow Imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2016-01-01

    … scanner for pulsating flow mimicking the femoral artery from a CompuFlow 1000 pump (Shelley Medical). Data were used in four estimators based on directional transverse oscillation for velocity, flow angle, volume flow, and turbulence estimation and their respective precisions. An adaptive lag scheme gave the ability to estimate a large velocity range, or alternatively to measure at two sites to find e.g. the stenosis degree in a vessel. The mean angle at the vessel center was estimated to be 90.9° ± 8.2°, indicating laminar flow, with a turbulence index close to zero (0.1 ± 0.1). Volume flow was 1.29 ± 0.26 mL/stroke (true: 1.15 mL/stroke; bias: 12.2%). Measurements down to 160 mm were obtained with a relative standard deviation and bias of less than 10% for the lateral component for stationary, parabolic flow. The method can, thus, find quantitative velocities, angles, and volume flows at sites currently …

  12. Meta-analysis for quantitative microbiological risk assessments and benchmarking data

    NARCIS (Netherlands)

    Besten, den H.M.W.; Zwietering, M.H.

    2012-01-01

    Meta-analysis studies are increasingly being conducted in the food microbiology area to quantitatively integrate the findings of many individual studies on specific questions or kinetic parameters of interest. Meta-analyses provide global estimates of parameters and quantify their variabilities, and …

  13. An improved procedure of mapping a quantitative trait locus via the ...

    Indian Academy of Sciences (India)

    Data on the quantitative trait under consideration and several codominant genetic markers with known genomic locations are collected from members of families and statistically … Although the primary aim is to estimate …, since the trait …

  14. 14 CFR 221.140 - Method of giving concurrence.

    Science.gov (United States)

    2010-01-01

    ...) Conflicting authority to be avoided. Care should be taken to avoid giving authority to two or more carriers... Aviation shall be used by a carrier to give authority to another carrier to issue and file with the... used as authority to file joint fares or charges in which the carrier to whom the concurrence is given...

  15. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework for designing MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  16. Estimates of Fermilab Tevatron collider performance

    International Nuclear Information System (INIS)

    Dugan, G.

    1991-09-01

    This paper describes a model which has been used to estimate the average luminosity performance of the Tevatron collider. In the model, the average luminosity is related quantitatively to various performance parameters of the Fermilab Tevatron collider complex. The model is useful in allowing estimates to be developed for the improvements in average collider luminosity to be expected from changes in the fundamental performance parameters as a result of upgrades to various parts of the accelerator complex

  17. Quantitative determination and monitoring of water distribution in Aespoe granite

    International Nuclear Information System (INIS)

    Zimmer, U.

    1998-01-01

    To identify possible zones of two-phase flow and the extension of the excavation disturbed zone, geoelectric measurements are conducted in the ZEDEX and DEMO tunnels. The electric resistivity of a hard rock is usually determined by its water content, its water salinity and its porosity structure. By calibration measurements of the resistivity on rocks with known water content, a relation between resistivity and water content for Aespoe granite is determined. This relation is used to correlate the in-situ resistivity with the water content of the rock. To determine the in-situ resistivity between the ZEDEX and DEMO tunnels, an electrode array of nearly 300 electrodes was installed along the tunnel walls and in one borehole. With a semiautomatic recording unit operated over a telephone connection from the GRS office in Braunschweig/Germany, the resistivity is monitored between and around the tunnels. To correlate the resistivity with the water content, the measured apparent resistivity has to be converted into a resistivity model of the underground. Since many thin water-bearing fractures complicate this inversion process, the accuracy and resolution of the different inversion programs are checked before their application to the data. It was found that an acceptable quantitative reconstruction of the resistivity requires the integration of geometric information about the fracture zones into the inversion process. For a rough estimation of the position of possible fracture zones, a simple inversion without any geometric boundary conditions can be used. Since the maximum investigation area along a single tunnel is limited for profile measurements, tomographic measurements were also applied to estimate the resistivity distribution between the ZEDEX and DEMO tunnels. These tomographic measurements have a lower resolution than the profile measurements due to the required large computer power, but result in reconstructions that give an estimate of …
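
    The calibration relation itself is not given in the abstract; a common assumption for such rock would be an Archie-type power law, rho = a * S^(-m), fitted to the laboratory calibration samples and then inverted for the in-situ resistivity image. All values below are hypothetical.

    ```python
    import numpy as np

    # lab calibration: water content (volume fraction) vs measured resistivity (ohm m)
    S   = np.array([0.002, 0.004, 0.006, 0.010])
    rho = np.array([48000., 14000., 6800., 2700.])

    # fit log rho = log a + m log S (m will come out negative)
    m, log_a = np.polyfit(np.log(S), np.log(rho), 1)
    a = np.exp(log_a)

    def water_content(rho_insitu):
        """Invert the fitted power law for an in-situ resistivity value."""
        return (rho_insitu / a) ** (1.0 / m)

    print(f"fit: rho = {a:.3g} * S^({m:.2f})")
    print(water_content(np.array([30000., 5000.])))   # estimated water contents
    ```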

  18. APPLICATION OF THE WEIGHTED SPLINE ESTIMATOR (Aplikasi Spline Estimator Terbobot)

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    Full Text Available We considered the nonparametric regression model: Z_j = X(t_j) + e_j, j = 1, 2, …, n, where X(t_j) is the regression curve. The random errors e_j are independently normally distributed with zero mean and variance σ²/b_j, b_j > 0. The estimate of X is obtained by minimizing a weighted penalized least squares criterion. The solution of this optimization is a weighted natural polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.
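
    As a runnable stand-in for the weighted natural spline (not the paper's exact estimator), SciPy's smoothing spline accepts per-observation weights; for errors with variance σ²/b_j the natural choice is w_j = sqrt(b_j).

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 60)
    b = np.where(t < 0.5, 4.0, 1.0)                 # precision factors b_j -- hypothetical
    z = np.sin(2 * np.pi * t) + rng.normal(0, 1.0 / np.sqrt(b))   # var = sigma^2 / b_j

    # weighted penalized least squares: larger b_j -> larger weight -> closer fit
    fit = UnivariateSpline(t, z, w=np.sqrt(b), s=len(t))
    print(fit(np.array([0.25, 0.75])))              # estimated X(t) at two points
    ```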

  19. Good practices for quantitative bias analysis.

    Science.gov (United States)

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage …

  20. Parameter estimation in X-ray astronomy

    International Nuclear Information System (INIS)

    Lampton, M.; Margon, B.; Bowyer, S.

    1976-01-01

    The problems of model classification and parameter estimation are examined, with the objective of establishing the statistical reliability of inferences drawn from X-ray observations. For testing the validities of classes of models, the procedure based on minimizing the χ² statistic is recommended; it provides a rejection criterion at any desired significance level. Once a class of models has been accepted, a related procedure based on the increase of χ² gives a confidence region for the values of the model's adjustable parameters. The procedure allows the confidence level to be chosen exactly, even for highly nonlinear models. Numerical experiments confirm the validity of the prescribed technique. The χ²_min + 1 error estimation method is evaluated and found unsuitable when several parameter ranges are to be derived, because it substantially underestimates their joint errors. The ratio of variances method, while formally correct, gives parameter confidence regions which are more variable than necessary
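
    A minimal sketch of the recommended procedure on synthetic data: fit by minimizing χ², then bound the joint confidence region by the increase of χ² appropriate to the number of jointly estimated parameters (rather than the χ²_min + 1 rule, which the paper finds underestimates joint errors).

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    # synthetic spectrum: counts = A * exp(-E / T) with Gaussian errors -- hypothetical
    E = np.linspace(1, 10, 20)
    sigma = 2.0
    rng = np.random.default_rng(5)
    counts = 50 * np.exp(-E / 3.0) + rng.normal(0, sigma, E.size)

    def chisq(p):
        A, T = p
        return np.sum((counts - A * np.exp(-E / T))**2 / sigma**2)

    best = minimize(chisq, x0=[40.0, 2.0])
    # joint 90% confidence region for 2 parameters: chi2 <= chi2_min + 4.61
    delta = chi2.ppf(0.90, df=2)
    print("best fit:", best.x, " chi2_min:", best.fun)
    print("90% joint region boundary: chi2 <=", best.fun + delta)
    ```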

  1. Evaluation of sanitary impact of environmental pollution and quantitative evaluation of sanitary risks; Estimation de l'impact sanitaire d'une pollution environnementale et evaluation quantitative des risques sanitaires

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-09-15

    The calculation of a sanitary impact presents great interest for decision-makers and all the actors concerned. It constitutes a first step in organizing a social debate around risk acceptance, in analyzing the feasibility of an inquiry or of epidemiological surveillance, or in proportioning an activity that leads to pollutant emissions into the natural environment. Several conclusions are drawn: it is justified to estimate a sanitary impact from an excess of sanitary risk, especially one derived from animal data. It is conceivable to go beyond an estimation of individual risk alone and to calculate a number of excess cases in the concerned population. The working group underlines that the characteristics of the situation are the determining factor for the type of response to provide. The size of the population is an important element, and a situation must not be underestimated on the pretext that the excess calculation leads to a number of cases lower than one, suggesting that the impact is minor or negligible while the individual probability is high. The sanitary impact, expressed as the number of excess cancer cases in an exposed population, is quantified from the average value of the excess sanitary risk multiplied by the population size, and is expressed with a confidence interval. The sanitary impact can also be expressed as a percentage of the population present in the exposure area, which goes beyond the comparison benchmarks usually put forward; this practice is to be encouraged. An analysis of uncertainties must be made as often as possible. (N.C.)

  2. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.

    Science.gov (United States)

    Pillai, S; Singhvi, I

    2008-09-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for the simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves the formation and solving of simultaneous equations, using 265.2 nm and 290.8 nm as the two analytical wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reversed-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v, pH 7.0) as mobile phase. All developed methods obey Beer's law in the concentration ranges employed. Results of analysis were validated statistically and by recovery studies.
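
    The simultaneous-equation (Vierordt) method used in the first UV procedure reduces to a 2x2 linear solve; the absorptivity values below are hypothetical placeholders, not the paper's calibration data:

        import numpy as np

        # a[i, j]: absorptivity of component j at wavelength i
        # (i = 265.2, 290.8 nm; j = itopride HCl, rabeprazole Na).
        # Values are illustrative only.
        a = np.array([[310.0, 120.0],
                      [ 95.0, 405.0]])
        A_mix = np.array([0.52, 0.61])     # measured mixture absorbances

        conc = np.linalg.solve(a, A_mix)   # concentration of each component
        itopride, rabeprazole = conc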

  3. Statistical equivalent of the classical TDT for quantitative traits and ...

    Indian Academy of Sciences (India)

    sion model to test the association for quantitative traits based on a trio design. We show that the method ... from the analyses and only one transmission is considered. Keywords. .... use the sample mean or median of Y, as an estimator of c in.

  5. Pointwise estimates of pseudo-differential operators

    DEFF Research Database (Denmark)

    Johnsen, Jon

    2011-01-01

    As a new technique it is shown how general pseudo-differential operators can be estimated at arbitrary points in Euclidean space when acting on functions u with compact spectra. The estimate is a factorisation inequality, in which one factor is the Peetre–Fefferman–Stein maximal function of u, whilst the other is a symbol factor carrying the whole information on the symbol. The symbol factor is estimated in terms of the spectral radius of u, so that the framework is well suited for Littlewood–Paley analysis. It is also shown how it gives easy access to results on polynomial bounds and estimates in Lp, including a new result for type 1,1-operators: they are always bounded on Lp-functions with compact spectra.

  6. Effects of normalization on quantitative traits in association test

    Science.gov (United States)

    2009-01-01

    Background: Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed, and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results: We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization was also investigated. Our results show that the rank-based transformation generally gives the best and most consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion: For small sample sizes or genetic effects, the improvement in sensitivity for the rank transformation outweighs the slight increase in false positive rate. However, for large sample sizes and genetic effects, normalization may not be necessary, since the increase in sensitivity is relatively modest. PMID:20003414
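
    The rank-based transformation favoured by these simulations is typically a rank-based inverse normal transform; a minimal version (using the common Blom offset, c = 3/8) might look like this:

        import numpy as np
        from scipy.stats import norm, rankdata

        def rank_inverse_normal(y, c=3.0 / 8.0):
            """Map trait values to approximately N(0,1) scores via ranks."""
            r = rankdata(y)                     # average ranks for ties
            return norm.ppf((r - c) / (len(y) - 2.0 * c + 1.0))

        skewed_trait = np.random.default_rng(1).exponential(2.0, 500)
        normalized = rank_inverse_normal(skewed_trait)  # input to the test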

  8. Substitution or Symbiosis? Assessing the Relationship between Religious and Secular Giving

    Science.gov (United States)

    Hill, Jonathan P.; Vaidyanathan, Brandon

    2011-01-01

    Research on philanthropy has not sufficiently examined whether charitable giving to religious causes impinges on giving to secular causes. Examining three waves of national panel data, we find that the relationship between religious and secular giving is generally not of a zero-sum nature; families that increase their religious giving also…

  9. Sensitivity equation for quantitative analysis with multivariate curve resolution-alternating least-squares: theoretical and experimental approach.

    Science.gov (United States)

    Bauza, María C; Ibañez, Gabriela A; Tauler, Romà; Olivieri, Alejandro C

    2012-10-16

    A new equation is derived for estimating the sensitivity when the multivariate curve resolution-alternating least-squares (MCR-ALS) method is applied to second-order multivariate calibration data. The validity of the expression is substantiated by extensive Monte Carlo noise addition simulations. The multivariate selectivity can be derived from the new sensitivity expression. Other important figures of merit, such as limit of detection, limit of quantitation, and concentration uncertainty of MCR-ALS quantitative estimations can be easily estimated from the proposed sensitivity expression and the instrumental noise. An experimental example involving the determination of an analyte in the presence of uncalibrated interfering agents is described in detail, involving second-order time-decaying sensitized lanthanide luminescence excitation spectra. The estimated figures of merit are reasonably correlated with the analytical features of the analyzed experimental system.

  10. Induced quantitative variation in wild and cultivated urd and mungbean

    International Nuclear Information System (INIS)

    Ignacimuthu, S.; Babu, C.R.

    1993-01-01

    Seeds of wild and cultivated urd and mung beans were subjected to mutagenesis, and some quantitative characters were analysed in the M2 generation for the range of variability and its significance. Components of variability, heritability and genetic advance were also estimated. The results indicate that induced mutations are random, polydirectional and quantitative in nature. They also bring about heritable changes in the polygenic system. From the patterns of induced variability, it is clear that the threshold action of a certain proportion of mutant loci is the basis for phenotypic modification. (author). 24 refs., 2 tabs

  11. Quantitative Estimation of Yeast on Maxillary Denture in Patients with Denture Stomatitis and the Effect of Chlorhexidine Gluconate in Reduction of Yeast

    Directory of Open Access Journals (Sweden)

    Jaykumar R Gade

    2011-01-01

    Denture stomatitis is a condition associated with the wearing of a denture. Predisposing factors include poor oral hygiene, an ill-fitting denture and relief areas. Around 30 patients with denture stomatitis were advised to rinse with chlorhexidine gluconate mouthwash for 14 days and to immerse the upper denture in the chlorhexidine solution for 8 hours. Samples were collected by scraping the maxillary denture in saline at three intervals (prior to treatment, at the end of 24 hours, and after 14 days of treatment), then inoculated, and a quantitative estimation of the yeast growth on Sabouraud's dextrose agar plates was made. After a period of 14 days, there was a reduction in the growth of yeast and also an improvement in the clinical picture of the oral mucosa.

  12. Quantitative study of Xanthosoma violaceum leaf surfaces using RIMAPS and variogram techniques.

    Science.gov (United States)

    Favret, Eduardo A; Fuentes, Néstor O; Molina, Ana M

    2006-08-01

    Two new imaging techniques, RIMAPS (rotated image with maximum averaged power spectrum) and the variogram, are presented for the study and description of leaf surfaces. Xanthosoma violaceum was analyzed to illustrate the characteristics of both techniques. Both produce a quantitative description of leaf surface topography. RIMAPS combines rotation of digitized images with the Fourier transform, and is used to detect pattern orientation and characteristics of surface topography. The variogram relates the mathematical variance of a surface to the area of the sample window observed, and gives the typical scale lengths of the surface patterns. RIMAPS detects the morphological variations of the surface topography pattern between fresh and dried (herbarium) samples of the leaf. The variogram method finds the characteristic dimensions of the leaf microstructure, i.e., cell length, papillae diameter, etc., showing that there are no significant differences between dry and fresh samples. The results obtained show the robustness of RIMAPS and variogram analyses to detect, distinguish, and characterize leaf surfaces, as well as to give scale lengths. Both techniques are tools for the biologist to study variations of the leaf surface when different patterns are present. The use of RIMAPS and the variogram opens a wide spectrum of possibilities by providing a systematic, quantitative description of leaf surface topography.

  13. Quantitative risk assessment of drinking water contaminants

    International Nuclear Information System (INIS)

    Cothern, C.R.; Coniglio, W.A.; Marcus, W.L.

    1986-01-01

    The development of criteria and standards for the regulation of drinking water contaminants involves a variety of processes, one of which is risk estimation. This estimation process, called quantitative risk assessment, involves combining data on the occurrence of the contaminant in drinking water and its toxicity. The human exposure to a contaminant can be estimated from occurrence data. Usually the toxicity or number of health effects per concentration level is estimated from animal bioassay studies using the multistage model. For comparison, other models will be used including the Weibull, probit, logit and quadratic ones. Because exposure and toxicity data are generally incomplete, assumptions need to be made and this generally results in a wide range of certainty in the estimates. This range can be as wide as four to six orders of magnitude in the case of the volatile organic compounds in drinking water and a factor of four to five for estimation of risk due to radionuclides in drinking water. As examples of the differences encountered in risk assessment of drinking water contaminants, discussions are presented on benzene, lead, radon and alachlor. The lifetime population risk estimates for these contaminants are, respectively, in the ranges of: <1 - 3000, <1 - 8000, 2000-40,000 and <1 - 80. 11 references, 1 figure, 1 table

  14. A Bayesian approach to spectral quantitative photoacoustic tomography

    International Nuclear Information System (INIS)

    Pulkkinen, A; Kaipio, J P; Tarvainen, T; Cox, B T; Arridge, S R

    2014-01-01

    A Bayesian approach to the optical reconstruction problem associated with spectral quantitative photoacoustic tomography is presented. The approach is derived for commonly used spectral tissue models of optical absorption and scattering: the absorption is described as a weighted sum of absorption spectra of known chromophores (spatially dependent chromophore concentrations), while the scattering is described using Mie scattering theory, with the proportionality constant and spectral power law parameter both spatially-dependent. It is validated using two-dimensional test problems composed of three biologically relevant chromophores: fat, oxygenated blood and deoxygenated blood. Using this approach it is possible to estimate the Grüneisen parameter, the absolute chromophore concentrations, and the Mie scattering parameters associated with spectral photoacoustic tomography problems. In addition, the direct estimation of the spectral parameters is compared to estimates obtained by fitting the spectral parameters to estimates of absorption, scattering and Grüneisen parameter at the investigated wavelengths. It is shown with numerical examples that the direct estimation results in better accuracy of the estimated parameters. (papers)
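
    The backbone of the spectral tissue model above is absorption written as a weighted sum of known chromophore spectra. A stripped-down, non-Bayesian sketch of that forward model, with a least-squares recovery of the concentrations (all spectra and values hypothetical):

        import numpy as np

        # eps[i, k]: absorption spectrum of chromophore k (fat, Hb, HbO2)
        # sampled at wavelength i; values are made up for illustration.
        eps = np.array([[0.40, 1.10, 0.90],
                        [0.35, 0.90, 1.00],
                        [0.30, 0.80, 1.10],
                        [0.28, 0.75, 1.20]])
        c_true = np.array([0.6, 0.2, 0.8])      # chromophore concentrations

        rng = np.random.default_rng(2)
        mu_a = eps @ c_true + rng.normal(0.0, 0.01, eps.shape[0])

        c_hat, *_ = np.linalg.lstsq(eps, mu_a, rcond=None)  # recovered c_k

    The paper's Bayesian treatment additionally estimates the Grüneisen parameter and the Mie scattering parameters and propagates the full tomographic forward model, all of which this sketch omits.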

  15. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. To validate further, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to severely corrupted data by outliers owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool of anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
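
    Subagging itself is easy to sketch: fit the same base estimator on many random subsamples and aggregate, using the spread across fits as an uncertainty measure. The toy version below (hypothetical groundwater-level data, a plain linear trend as base learner) illustrates the mechanics only, not the paper's exact SBR estimator:

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 10.0, 200)                  # time, years
        level = 12.0 - 0.15 * t + rng.normal(0.0, 0.4, t.size)

        n_fits, m = 500, 100                             # m: subsample size
        slopes = np.empty(n_fits)
        for i in range(n_fits):
            idx = rng.choice(t.size, size=m, replace=False)
            slopes[i] = np.polyfit(t[idx], level[idx], 1)[0]

        trend = slopes.mean()                            # aggregated trend
        band = np.percentile(slopes, [5.0, 95.0])        # uncertainty band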

  16. A Bayesian Framework for Remaining Useful Life Estimation

    Data.gov (United States)

    National Aeronautics and Space Administration — The estimation of remaining useful life (RUL) of a faulty component is at the center of system prognostics and health management. It gives operators a potent tool in...

  17. A methodology to calibrate water saturation estimated from 4D seismic data

    International Nuclear Information System (INIS)

    Davolio, Alessandra; Maschio, Célio; José Schiozer, Denis

    2014-01-01

    Time-lapse seismic data can be used to estimate saturation changes within a reservoir, which is valuable information for reservoir management as it plays an important role in updating reservoir simulation models. The process of updating reservoir properties, history matching, can incorporate estimated saturation changes qualitatively or quantitatively. For quantitative approaches, reliable information from 4D seismic data is important. This work proposes a methodology to calibrate the volume of water in the estimated saturation maps, as these maps can be wrongly estimated due to problems with seismic signals (such as noise, errors associated with data processing and resolution issues). The idea is to condition the 4D seismic data to known information provided by engineering, in this case the known amount of injected and produced water in the field. The application of the proposed methodology in an inversion process (previously published) that estimates saturation from 4D seismic data is presented, followed by a discussion concerning the use of such data in a history matching process. The methodology is applied to a synthetic dataset to validate the results, the main ones being: (1) reduction of the effects of noise and errors in the estimated saturation, yielding more reliable data to be used quantitatively or qualitatively, and (2) an improvement in the properties update after using these data in a history matching procedure. (paper)

  18. Quantitation of TGF-beta1 mRNA in porcine mesangial cells by comparative kinetic RT/PCR: comparison with ribonuclease protection assay and in situ hybridization.

    Science.gov (United States)

    Ceol, M; Forino, M; Gambaro, G; Sauer, U; Schleicher, E D; D'Angelo, A; Anglani, F

    2001-01-01

    Gene expression can be examined with different techniques, including the ribonuclease protection assay (RPA), in situ hybridization (ISH), and quantitative reverse transcription-polymerase chain reaction (RT/PCR). These methods differ considerably in their sensitivity and precision in detecting and quantifying low-abundance mRNA. Although there is evidence that RT/PCR can be performed in a quantitative manner, the quantitative capacity of this method is generally underestimated. To demonstrate that the comparative kinetic RT/PCR strategy--which uses a housekeeping gene as internal standard--is a quantitative method for detecting significant differences in mRNA levels between different samples, the inhibitory effect of heparin on phorbol 12-myristate 13-acetate (PMA)-induced TGF-beta1 mRNA expression was evaluated by RT/PCR and by RPA, the standard method of mRNA quantification, and the results were compared. The reproducibility of RT/PCR amplification was calculated by comparing the quantities of G3PDH and TGF-beta1 PCR products, generated during the exponential phases, estimated from two different RT/PCRs (G3PDH, r = 0.968, P = 0.0000; TGF-beta1, r = 0.966, P = 0.0000). The quantitative capacity of comparative kinetic RT/PCR was demonstrated by comparing the results obtained from RPA and RT/PCR using linear regression analysis. Starting from the same RNA extraction, but using only 1% of the RNA for RT/PCR compared with RPA, a significant correlation was observed (r = 0.984, P = 0.0004). Moreover, morphometric analysis of the ISH signal was applied for a semi-quantitative evaluation of the expression and localisation of TGF-beta1 mRNA in the entire cell population. Our results demonstrate the close similarity of the RT/PCR and RPA methods in giving quantitative information on mRNA expression and indicate that comparative kinetic RT/PCR can be adopted as a reliable quantitative method of mRNA analysis. Copyright 2001 Wiley-Liss, Inc.

  19. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
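
    For reference, the MOG-plus-AIC baseline mentioned above fits mixtures of increasing order and keeps the order with the lowest AIC; a toy version (hypothetical one-dimensional data, scikit-learn used for convenience):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        data = np.concatenate([rng.normal(-2.0, 0.5, 300),
                               rng.normal(3.0, 1.0, 200)]).reshape(-1, 1)

        # Fit mixtures of order 1..6 and keep the order minimizing AIC.
        aic = {k: GaussianMixture(n_components=k, random_state=0)
                   .fit(data).aic(data)
               for k in range(1, 7)}
        best_k = min(aic, key=aic.get)      # estimated number of modes (~2)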

  20. Model-based estimation with boundary side information or boundary regularization

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (Emission Computed Tomography). The authors have also reported difficulties with boundary estimation in low contrast and low count rate situations. In this paper, the authors propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization through formulating a penalized log-likelihood function. The authors also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information

  1. The Case for Infusing Quantitative Literacy into Introductory Geoscience Courses

    Directory of Open Access Journals (Sweden)

    Jennifer M. Wenner

    2009-01-01

    - Infusing Quantitative Literacy into Introductory Geoscience Courses. These portions of the website are designed to give geoscience faculty the resources they need to infuse quantitative content into their entry-level courses, thereby building the QL of the students who enroll. The infusion of QL in the introductory geoscience classroom allows faculty to realistically represent the quantitative nature of the science to the students who may need it most. Ultimately, the inclusion of pedagogically sound quantitative activities and exercises will serve to increase QL of our educated citizenry.

  2. A concise account of techniques available for shipboard sea state estimation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2017-01-01

    This article gives a review of techniques applied to make sea state estimation on the basis of measured responses on a ship. The general concept of the procedures is similar to that of a classical wave buoy, which exploits a linear assumption between waves and the associated motions. In the frequ…

  3. Why Don't They Just Give Us Money? Project Cost Estimating and Cost Reporting

    Science.gov (United States)

    Comstock, Douglas A.; Van Wychen, Kristin; Zimmerman, Mary Beth

    2015-01-01

    Successful projects require an integrated approach to managing cost, schedule, and risk. This is especially true for complex, multi-year projects involving multiple organizations. To explore solutions and leverage valuable lessons learned, NASA's Virtual Project Management Challenge will kick off a three-part series examining some of the challenges faced by project and program managers when it comes to managing these important elements. In this first session of the series, we will look at cost management, with an emphasis on the critical roles of cost estimating and cost reporting. By taking a proactive approach to both of these activities, project managers can better control life cycle costs, maintain stakeholder confidence, and protect other current and future projects in the organization's portfolio. Speakers will be Doug Comstock, Director of NASA's Cost Analysis Division, Kristin Van Wychen, Senior Analyst in the GAO Acquisition and Sourcing Management Team, and Mary Beth Zimmerman, Branch Chief for NASA's Portfolio Analysis Branch, Strategic Investments Division. Moderator Ramien Pierre is from NASA's Academy for Program/Project and Engineering Leadership (APPEL).

  4. A new approach to estimate nuclide ratios from measurements with activities close to background

    International Nuclear Information System (INIS)

    Kirchner, G.; Steiner, M.; Zaehringer, M.

    2009-01-01

    Measurements of low-level radioactivity often give results of the order of the detection limit. For many applications, interest is not only in estimating activity concentrations of a single radioactive isotope, but focuses on multi-isotope analyses, which often enable inference on the source of the activity detected (e.g. from activity ratios). Obviously, such conclusions become questionable if the measurement merely gives a detection limit for a specific isotope. This is particularly relevant if the presence of an isotope which shows only a low signal (e.g. due to a short half-life or a small transition probability) is crucial for gaining the information of interest. This paper discusses a new approach which has the potential to solve these problems. Using Bayesian statistics, a method is presented which allows statistical inference on nuclide ratios, taking into account both prior knowledge and all information collected from the measurements. It is shown that our method allows quantitative conclusions to be drawn even if the counts of single isotopes are low or become negative after background subtraction. Differences from the traditional statistical approach of specifying decision thresholds or detection limits are highlighted. Application of this new approach is illustrated by a number of examples of environmental low-level radioactivity measurements. The capabilities of our approach for spectrum interpretation and source identification are demonstrated with real spectra from air filters, sewage sludge and soil samples.
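
    The flavour of the Bayesian argument can be sketched in a few lines (a toy model, not the authors' formulation): with Poisson counting and flat priors, gamma posteriors on the count rates remain well defined even when the background-subtracted net count is near zero or negative, so a posterior for an isotope ratio is always available.

        import numpy as np

        rng = np.random.default_rng(5)

        def posterior_source_rate(n_gross, n_bkg, size=100_000):
            """Posterior samples of a source count rate from gross and
            background counts (Poisson likelihoods, flat priors); the
            non-negativity of the rate is imposed as a crude constraint."""
            lam_total = rng.gamma(n_gross + 1, 1.0, size)
            lam_bkg = rng.gamma(n_bkg + 1, 1.0, size)
            return np.clip(lam_total - lam_bkg, 0.0, None)

        # Hypothetical low-count data: isotope B's net count is negative.
        rate_a = posterior_source_rate(n_gross=12, n_bkg=9)
        rate_b = posterior_source_rate(n_gross=5, n_bkg=6)

        ratio = rate_a / np.where(rate_b > 0.0, rate_b, np.nan)
        interval = np.nanpercentile(ratio, [5, 50, 95])   # A/B posterior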

  5. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  6. A Note On the Estimation of the Poisson Parameter

    Directory of Open Access Journals (Sweden)

    S. S. Chitgopekar

    1985-01-01

    distribution when there are errors in observing the zeros and ones and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that either method fails to give unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.

  7. Regression and direct methods do not give different estimates of digestible and metabolizable energy values of barley, sorghum, and wheat for pigs.

    Science.gov (United States)

    Bolarinwa, O A; Adeola, O

    2016-02-01

    …960 kcal/kg DM, respectively) and ME (3,889 and 3,874 kcal/kg DM, respectively) of wheat were not different. Regression and direct methods do not give different estimates of DE and ME in barley, sorghum, and wheat for pigs.

  8. Quantitative ion implantation

    International Nuclear Information System (INIS)

    Gries, W.H.

    1976-06-01

    This is a report of the study of the implantation of heavy ions at medium keV-energies into electrically conducting mono-elemental solids, at ion doses too small to cause significant loss of the implanted ions by resputtering. The study has been undertaken to investigate the possibility of accurate portioning of matter in submicrogram quantities, with some specific applications in mind. The problem is extensively investigated both on a theoretical level and in practice. A mathematical model is developed for calculating the loss of implanted ions by resputtering as a function of the implanted ion dose and the sputtering yield. Numerical data are produced therefrom which permit a good order-of-magnitude estimate of the loss for any ion/solid combination in which the ions are heavier than the solid atoms, and for any ion energy from 10 to 300 keV. The implanted ion dose is measured by integration of the ion beam current, and equipment and techniques are described which make possible the accurate integration of an ion current in an electromagnetic isotope separator. The methods are applied to two sample cases, one being a stable isotope, the other a radioisotope. In both cases independent methods are used to show that the implantation is indeed quantitative, as predicted. At the same time the sample cases are used to demonstrate two possible applications for quantitative ion implantation, viz. firstly for the manufacture of calibration standards for instrumental micromethods of elemental trace analysis in metals, and secondly for the determination of the half-lives of long-lived radioisotopes by a specific activity method. It is concluded that the present study has advanced quantitative ion implantation to the state where it can be successfully applied to the solution of problems in other fields
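
    The link between integrated beam charge and amount of implanted matter that underlies this "portioning" is a one-line conversion; the beam values below are hypothetical:

        # Number of implanted ions from integrated beam charge: N = Q/(q*e);
        # implanted mass follows from the molar mass. Hypothetical values.
        E_CHARGE = 1.602176634e-19       # C, elementary charge
        AVOGADRO = 6.02214076e23         # 1/mol

        Q = 2.0e-6                       # C, integrated beam charge
        q = 1                            # ion charge state (singly charged)
        molar_mass = 115.0               # g/mol, e.g. ~indium (illustrative)

        n_ions = Q / (q * E_CHARGE)                      # ~1.2e13 ions
        mass_ng = n_ions * molar_mass / AVOGADRO * 1e9   # ~2.4 ng implanted

    The result lands in the submicrogram range discussed above, which is what makes beam-current integration a viable dosing tool.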

  9. Prognostic, quantitative histopathologic variables in lobular carcinoma of the breast

    DEFF Research Database (Denmark)

    Ladekarl, M; Sørensen, Flemming Brandt

    1993-01-01

    BACKGROUND: A retrospective investigation of 53 consecutively treated patients with operable lobular carcinoma of the breast, with a median follow-up of 6.6 years, was performed to examine the prognostic value of quantitative histopathologic parameters. METHODS: The measurements were performed … of disease, vv(nuc), MI, and NI were of significant independent, prognostic value. On the basis of the multivariate analyses, a prognostic index with highly distinguishing capacity between prognostically poor and favorable cases was constructed. CONCLUSION: Quantitative histopathologic variables are of value … for objective grading of malignancy in lobular carcinomas. The new parameter--estimates of the mean nuclear volume--is highly reproducible and suitable for routine use. However, larger and prospective studies are needed to establish the true value of the quantitative histopathologic variables in the clinical …

  11. Quantitative estimation of defects from measurement obtained by remote field eddy current inspection

    International Nuclear Information System (INIS)

    Davoust, M.E.; Fleury, G.

    1999-01-01

    The remote field eddy current technique is used for dimensioning grooves that may occur in ferromagnetic pipes. This paper proposes a method to estimate the depth and the length of corrosion grooves from measurements of a pick-up coil signal phase at different positions close to the defect. Groove dimensioning requires knowledge of the physical relation between measurements and defect dimensions, so finite element calculations are performed to obtain a parametric algebraic function of the physical phenomena. By means of this model and a previously defined general approach, an estimate of groove size may be given. In this approach, the algebraic function parameters and the groove dimensions are linked through a polynomial function. In order to validate this estimation procedure, a statistical study has been performed. The approach proves to be suitable for real measurements. (authors)

  12. Nonparametric functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yang, Jie; Wu, Rongling; Casella, George

    2009-03-01

    Functional mapping is a useful tool for mapping quantitative trait loci (QTL) that control dynamic traits. It incorporates mathematical aspects of biological processes into the mixture model-based likelihood setting for QTL mapping, thus increasing the power of QTL detection and the precision of parameter estimation. However, in many situations there is no obvious functional form and, in such cases, this strategy will not be optimal. Here we propose to use nonparametric function estimation, typically implemented with B-splines, to estimate the underlying functional form of phenotypic trajectories, and then construct a nonparametric test to find evidence of existing QTL. Using the representation of a nonparametric regression as a mixed model, the final test statistic is a likelihood ratio test. We consider two types of genetic maps: dense maps and general maps, and the power of nonparametric functional mapping is investigated through simulation studies and demonstrated by examples.

  13. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    Science.gov (United States)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast-enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled, using either prediction from pre-scan timing bolus data or measurement from dynamic thin-slice 'bolus tracking' acquisitions, and (2) the whole-heart tissue response data are limited to one contrast-enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol both at rest and under vasodilator stress. Using the measured input function plus single (enhanced CT only) or double (enhanced plus contrast-free baseline CT) myocardial acquisitions yielded MBF estimates with root mean square (RMS) errors of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error, compared to the measured input function, of 26.0%, which led to MBF estimation errors more than threefold higher than when using the measured input function. SCDA presents a new, simplified approach for quantitative…

  14. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found…

  15. Benefits of Giving (A Book Review Using Islamic Perspective

    Directory of Open Access Journals (Sweden)

    M. Hamdar Arraiyyah

    2016-01-01

    This article is a book review. It discusses a book entitled Give and Take, which introduces a new approach to success. The book divides people into three categories according to how they interact or communicate: takers, matchers, and givers. The author, Adam Grant, explains the principles and characteristics of each category and presents many facts to prove that being a giver brings benefits both to other people and to the giver. The objects of giving here comprise different kinds of help, such as wealth, ideas, knowledge, skills and information. He therefore motivates people to become givers. In this connection, the reviewer shows that the Islamic religion also motivates its followers to give help to others, and that there are both similarities and differences between the benefits of giving mentioned in the book and those in the verses of the Holy Qur'an and the sayings of Prophet Muhammad, peace be upon him.

  16. Ion-solid interaction at low energies: principles and application of quantitative ISS

    International Nuclear Information System (INIS)

    Niehus, H.; Spitzl, R.

    1991-01-01

    Quantitative surface analysis with low-energy (500-5000 eV) ion scattering spectroscopy is known to be difficult, most often because of strong charge-transfer and multiple-scattering effects occurring during ion-surface interaction. In order to avoid neutralization problems, either alkali primary ions or noble-gas ions in combination with the detection of all scattered particles are applied. Multiple scattering occurs predominantly in forward scattering and can confound the analysis. Backward scattering (i.e. 180° impact-collision ion scattering) largely bypasses the multiple-scattering complication and has been used successfully for the analysis of a number of surface structures of metals, semiconductors and binary alloys. A simple triangulation concept gives access to mass-selective qualitative surface crystallography. Quantitative surface structures were determined by comparison with computer simulations. (author)

  17. Standard guide for estimating the atmospheric corrosion resistance of low-alloy steels

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2004-01-01

    1.1 This guide presents two methods for estimating the atmospheric corrosion resistance of low-alloy weathering steels, such as those described in Specifications A242/A242M, A588/A588M, A606 Type 4, A709/A709M grades 50W, HPS 70W, and 100W, A852/A852M, and A871/A871M. One method gives an estimate of the long-term thickness loss of a steel at a specific site based on results of short-term tests. The other gives an estimate of relative corrosion resistance based on chemical composition. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.

  18. USING OF SOFTWARE FOR ESTIMATION OF EXPERT COMPETENCE

    Directory of Open Access Journals (Sweden)

    Oleg N. Velichko

    2015-01-01

    The features of estimating expert competence in the field of higher education are considered, obtained with the help of both universal statistical software and special-purpose software. A comparative analysis of quantitative estimates of expert competence was conducted, which showed the possibility of forming the most competent group of experts for carrying out the required group expert estimation in the field of higher education. The analysis showed a high degree of coincidence of results, which allows less competent experts to be excluded.

  19. Quantitative and statistical approaches to geography a practical manual

    CERN Document Server

    Matthews, John A

    2013-01-01

    Quantitative and Statistical Approaches to Geography: A Practical Manual is a practical introduction to some quantitative and statistical techniques of use to geographers and related scientists. This book is composed of 15 chapters, each begins with an outline of the purpose and necessary mechanics of a technique or group of techniques and is concluded with exercises and the particular approach adopted. These exercises aim to enhance student's ability to use the techniques as part of the process by which sound judgments are made according to scientific standards while tackling complex problems. After a brief introduction to the principles of quantitative and statistical geography, this book goes on dealing with the topics of measures of central tendency; probability statements and maps; the problem of time-dependence, time-series analysis, non-normality, and data transformations; and the elements of sampling methodology. Other chapters cover the confidence intervals and estimation from samples, statistical hy...

  20. Quantitative estimation of compliance of human systemic veins by occlusion plethysmography with radionuclide

    International Nuclear Information System (INIS)

    Takatsu, Hisato; Gotoh, Kohshi; Suzuki, Takahiko; Ohsumi, Yukio; Yagi, Yasuo; Tsukamoto, Tatsuo; Terashima, Yasushi; Nagashima, Kenshi; Hirakawa, Senri

    1989-01-01

    Volume-pressure relationships and compliance of human systemic veins were estimated quantitatively and noninvasively using radionuclide, and the effect of nitroglycerin (NTG) on these parameters was examined. Plethysmography with radionuclide (RN) was performed using the occlusion method on the forearm in 56 patients with various cardiac diseases after RN angiocardiography with 99mTc-RBC. The RN counts-venous pressure curve was constructed from (1) the changes in radioactivity from the region of interest on the forearm, considered to reflect the changes in the blood volume of the forearm, and (2) the changes in the pressure of the forearm vein (fv) due to venous occlusion. The specific compliance of the forearm veins (Csp.fv; (1/V)·(ΔV/ΔP)) was obtained graphically from this curve at each patient's venous pressure (Pv). Csp.fv was 0.044±0.012 mmHg⁻¹ in class I (mean±SD; n=13), 0.033±0.007 mmHg⁻¹ in class II (n=30), and 0.019±0.007 mmHg⁻¹ in class III (n=13) of the previous NYHA classification of work tolerance; the differences among the three classes were significant. The systemic venous blood volume (Vsv) was determined by subtracting the central blood volume, measured by RN angiocardiography, from the total blood volume, measured by the indicator dilution method utilizing ¹³¹I-human serum albumin. Systemic venous compliance (Csv) was calculated from Csv = Csp.fv·Vsv. Csv was 127.2±24.8 ml·mmHg⁻¹ (mean±SD) in class I, 101.1±24.1 ml·mmHg⁻¹ in class II and 62.2±28.1 ml·mmHg⁻¹ in class III, again with significant differences among the three classes; the class I Csv per body weight was 2.3±0.7 ml·mmHg⁻¹·kg⁻¹. The administration of NTG increased Csv significantly in all cases. (J.P.N.)
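
    To make the arithmetic of Csv = Csp.fv·Vsv explicit using the class I means quoted above, the implied systemic venous volume is

        \[
        V_{sv} \approx \frac{C_{sv}}{C_{sp.fv}}
        = \frac{127.2\ \text{ml}\cdot\text{mmHg}^{-1}}{0.044\ \text{mmHg}^{-1}}
        \approx 2.9\ \text{litres},
        \]

    a rough consistency check, not a value reported in the record.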

  1. Estimation of soil salinity in a drip irrigation system by using joint inversion of multicoil electromagnetic induction measurements

    KAUST Repository

    Jadoon, Khan Zaib

    2015-05-12

    Low frequency electromagnetic induction (EMI) is becoming a useful tool for soil characterization due to its fast measurement capability and sensitivity to soil moisture and salinity. In this research, a new EMI system (the CMD mini-Explorer) is used for subsurface characterization of soil salinity in a drip irrigation system via a joint inversion approach of multiconfiguration EMI measurements. EMI measurements were conducted across a farm where Acacia trees are irrigated with brackish water. In situ measurements of vertical bulk electrical conductivity (σb) were recorded in different pits along one of the transects to calibrate the EMI measurements and to compare with the modeled electrical conductivity (σ) obtained by the joint inversion of multiconfiguration EMI measurements. Estimates of σ were then converted into the universal standard of soil salinity measurement (i.e., electrical conductivity of a saturated soil paste extract – ECe). Soil apparent electrical conductivity (ECa) was repeatedly measured with the CMD mini-Explorer to investigate the temperature stability of the new system at a fixed location, where the ambient air temperature increased from 26°C to 46°C. Results indicate that the new EMI system is very stable in high temperature environments, especially above 40°C, where most other approaches give unstable measurements. In addition, the distribution pattern of soil salinity is well estimated quantitatively by the joint inversion of multicomponent EMI measurements. The approach of joint inversion of EMI measurements allows for the quantitative mapping of the soil salinity distribution pattern and can be utilized for the management of soil salinity.

  2. GENE ACTION AND HERITABILITY ESTIMATES OF QUANTITATIVE CHARACTERS AMONG LINES DERIVED FROM VARIETAL CROSSES OF SOYBEAN

    Directory of Open Access Journals (Sweden)

    Lukman Hakim

    2017-09-01

    The knowledge of gene action, heritability and genetic variability is useful and permits plant breeders to design efficient breeding strategies in soybean. The objectives of this study were to determine the gene action, genetic variability, heritability and genetic advance of quantitative characters that could be realized through selection of segregating progenies. The F1 population and F2 progenies of six crosses among five soybean varieties were evaluated at Muneng Experimental Station, East Java, during the dry season of 2014. The lines were planted in a randomized block design with four replications. The seeds of each F1 and F2 progeny and of the parents were planted in four rows of 3 m length, with 40 cm x 20 cm plant spacing, one plant per hill. The results showed that pod number per plant, seed yield, plant yield and harvest index were predominantly controlled by additive gene effects. Seed size was also controlled by additive gene effects, with small seed dominant to large seed size. Plant height was controlled by both additive and nonadditive gene effects. Similarly, days to maturity was due mainly to additive and nonadditive gene effects, with earliness dominant to lateness. Days to maturity had the highest heritability estimate (49.3%), followed by seed size (47.0%), harvest index (45.8%), and pod number per plant (45.5%). These characters could therefore be used in the selection of high-yielding soybean genotypes in the F3 generation.

  3. Estimation of the defect detection probability for ultrasonic tests on thick sections steel weldments. Technical report

    International Nuclear Information System (INIS)

    Johnson, D.P.; Toomay, T.L.; Davis, C.S.

    1979-02-01

    An inspection uncertainty analysis of published PVRC Specimen 201 data is reported, to obtain an estimate of the probability of recording an indication as a function of imperfection height for ASME Section XI Code ultrasonic inspections of nuclear reactor vessel plate seams, and to demonstrate the advantages of inspection uncertainty analysis over conventional detection/nondetection counting analysis. This analysis found the probability of recording a significant defect with an ASME Section XI Code ultrasonic inspection to be very high, should such a defect exist in the plate seams of a nuclear reactor vessel. For a one-inch-high crack, for example, this analysis gives a best-estimate recording probability of 0.985 and a 90% lower confidence bound recording probability of 0.937. It is also shown that inspection uncertainty analysis gives more accurate estimates, and gives estimates over a much greater flaw size range, than is possible with conventional analysis. There is reason to believe that the estimation procedure used is conservative: the estimation is based on data generated several years ago, on very small defects, in an environment that is different from the actual in-service inspection environment.

  4. Estimates for the parameters of the heavy quark expansion

    Energy Technology Data Exchange (ETDEWEB)

    Heinonen, Johannes; Mannel, Thomas [Universitaet Siegen (Germany)

    2015-07-01

    We give improved estimates for the non-perturbative parameters appearing in the heavy quark expansion for inclusive decays. While the parameters appearing in low orders of this expansion can be extracted from data, the number of parameters in higher orders proliferates strongly, making a determination of these parameters from data impossible. Thus, one has to rely on theoretical estimates which may be obtained from an insertion of intermediate states. We refine this method and attempt to estimate the uncertainties of this approach.

  5. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    Science.gov (United States)

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
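
    One common linear approach of the kind referred to above is the occupancy ("Lassen") plot: across regions with different target content, the drug-induced change in total distribution volume is proportional to the baseline specific binding, so occupancy falls out as a regression slope. A schematic sketch with hypothetical values:

        import numpy as np

        # Regional total distribution volumes at baseline and after drug;
        # all values hypothetical.
        vt_base = np.array([3.1, 4.5, 6.0, 7.8, 9.4])
        rng = np.random.default_rng(6)
        occ_true, vnd_true = 0.6, 2.0
        vt_drug = (vt_base - occ_true * (vt_base - vnd_true)
                   + rng.normal(0.0, 0.1, vt_base.size))

        # Lassen plot: Delta VT = occ * (VT_base - VND), so a straight-line
        # fit of Delta VT on VT_base gives occupancy (slope) and VND.
        slope, intercept = np.polyfit(vt_base, vt_base - vt_drug, 1)
        occupancy_est = slope
        vnd_est = -intercept / slope      # x-intercept of the fitted line

    The maximum-likelihood refinements described in the record address what this naive least-squares fit ignores: the unequal variances of the regional measures and the nonzero covariance between the two scans.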

  6. They Make Space and Give Time

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 3, Issue 3. They Make Space and Give Time: The Engineer as Poet. Book review by Gangan Prathap, National Aerospace Laboratories and the Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore.

  7. Conductance method for quantitative determination of Photobacterium phosphoreum in fish products

    DEFF Research Database (Denmark)

    Dalgaard, Paw; Mejlholm, Ole; Huss, Hans Henrik

    1996-01-01

    This paper presents the development of a sensitive and selective conductance method for quantitative determination of Photobacterium phosphoreum in fresh fish. A calibration curve with a correlation coefficient of -0.981 was established from conductance detection times (DT) for estimation of cell...

  8. Evaluation of two "integrated" polarimetric Quantitative Precipitation Estimation (QPE) algorithms at C-band

    Science.gov (United States)

    Tabary, Pierre; Boumahmoud, Abdel-Amin; Andrieu, Hervé; Thompson, Robert J.; Illingworth, Anthony J.; Le Bouar, Erwan; Testud, Jacques

    2011-08-01

    Two so-called "integrated" polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes of the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area, and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well if not better, depending on the measurement conditions (attenuation, rain rates, …), than the conventional algorithms, even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h⁻¹, and ZPHI is the best one above that threshold. A perturbation analysis has been conducted to assess the sensitivity of the various estimators with respect to biases on ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars these days (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23…
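
    For orientation, the conventional benchmark estimator quoted above simply inverts the Z–R relation Z = 282R^1.66; a short illustration with a hypothetical reflectivity:

        # Rain rate from reflectivity via the benchmark relation
        # Z = 282 * R**1.66 (Z in mm^6 m^-3, R in mm/h).
        z_dbz = 35.0                                  # hypothetical, dBZ
        z_lin = 10.0 ** (z_dbz / 10.0)                # linear units
        rain_rate = (z_lin / 282.0) ** (1.0 / 1.66)   # ~4.3 mm/h

    A +1 dB bias on ZH then multiplies R by 10^(0.1/1.66) ≈ 1.15, in line with the +14% sensitivity quoted above.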

  9. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
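
    The abstract's FPK model is fitted with the authors' R code, which is not reproduced here; the following toy sketch (Python with NumPy, all parameters assumed) merely simulates the underlying Langevin dynamics of a trait diffusing over an assumed double-well macroevolutionary landscape, the kind of scenario FPK is designed to infer.

      # A minimal sketch (not the authors' implementation): trait evolution
      # under Langevin dynamics whose density obeys the Fokker-Planck
      # equation, with an assumed double-well landscape V(x) = x^4 - 2x^2.
      import numpy as np

      rng = np.random.default_rng(0)
      V_prime = lambda x: 4 * x**3 - 4 * x     # gradient of the landscape
      sigma, dt, n_steps = 0.8, 0.01, 20000

      x = 0.0
      trajectory = np.empty(n_steps)
      for t in range(n_steps):
          # Euler-Maruyama step: deterministic force plus random diffusion
          x += -V_prime(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
          trajectory[t] = x

      # The simulated trait spends most time near the two peaks at x = -1, +1.
      print(np.histogram(trajectory, bins=5, range=(-2, 2))[0])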

  10. THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON

    Directory of Open Access Journals (Sweden)

    Gabriel da Silva Oliveira

    2015-12-01

    This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The values estimated from the samples were compared with the parametric values recorded in the census. In the analysis we considered as the population all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. However, the accuracy of the abundance estimates was satisfactory for applying the method in forest inventories for management plans in the Amazon.

  11. Bidding to give in the field

    NARCIS (Netherlands)

    Onderstal, Sander; Schram, Arthur J. H. C.; Soetevent, Adriaan R.

    2013-01-01

    In a door-to-door fundraising field experiment, we study the impact of fundraising mechanisms on charitable giving. We approached about 4500 households, each participating in an all-pay auction, a lottery, a non-anonymous voluntary contribution mechanism (VCM), or an anonymous VCM. In contrast to

  13. On the Estimation of the k-RSA Attack

    Directory of Open Access Journals (Sweden)

    Anatoliy Sergeyevich Makeyev

    2016-03-01

    In this paper, we discuss the attack on the RSA cryptosystem with k moduli (k ≥ 2). We also provide an estimation of the attack's complexity. Finally, we give experimental results for different moduli and public exponents.

  14. Progress in motion estimation for video format conversion

    NARCIS (Netherlands)

    Haan, de G.

    2000-01-01

    There are now two generations of ICs for motion-compensated video format conversion (MC-VFC). Real-time DSP software for MC-VFC has previously been demonstrated, with the breakthroughs enabling this progress coming from motion estimation. The paper gives an overview.

  15. Gift-giving in the medical student--patient relationship.

    Science.gov (United States)

    Alamri, Yassar Abdullah S

    2012-08-01

    There is a paucity of published literature providing ethical guidance on gift-giving within the student–patient relationship. This is perhaps because the dynamics of the medical student–patient relationship have not yet been explored as extensively as those of the doctor–patient relationship. More importantly, however, gift-giving in the doctor–patient relationship has traditionally been from the patient to the doctor and not vice versa. This article examines the literature published in this area, reflecting on an encounter with a patient.

  16. 75 FR 29537 - Draft Transportation Conformity Guidance for Quantitative Hot-spot Analyses in PM2.5

    Science.gov (United States)

    2010-05-26

    ... Quantitative Hot-spot Analyses in PM2.5 and PM10 Nonattainment and Maintenance Areas. AGENCY: Environmental... finalized, this guidance would help state and local agencies complete quantitative PM2.5 and PM10 hot-spot... projects. A hot-spot analysis includes an estimation of project-level emissions, air quality modeling, and...

  17. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
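
    The hierarchical model itself is not spelled out in the record; as a hedged illustration of the same bias-variance idea, the sketch below (Python with NumPy, shrinkage intensity assumed rather than estimated) applies simple linear shrinkage toward a diagonal target, which also cures the singularity of the sample covariance when variables outnumber samples.

      # Illustrative linear shrinkage toward a diagonal target (a far simpler
      # relative of the Bayesian hierarchical approach in the abstract).
      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.standard_normal((30, 200))        # n=30 samples, p=200 variables
      S = np.cov(X, rowvar=False)               # noisy, rank-deficient sample cov
      target = np.diag(np.diag(S))              # structured shrinkage target
      alpha = 0.7                               # shrinkage intensity (assumed)
      S_shrunk = (1 - alpha) * S + alpha * target

      # S is singular for p > n; the shrunken estimate is well conditioned.
      print(np.linalg.matrix_rank(S), np.linalg.cond(S_shrunk))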

  18. Quantitative test for concave aspheric surfaces using a Babinet compensator.

    Science.gov (United States)

    Saxena, A K

    1979-08-15

    A quantitative test for the evaluation of surface figures of concave aspheric surfaces using a Babinet compensator is reported. A theoretical estimate of the sensitivity is 0.002λ for a minimum detectable phase change of 2π × 10^-3 rad over a segment length of 1.0 cm.

  19. Counting and confusion: Bayesian rate estimation with multiple populations

    Science.gov (United States)

    Farr, Will M.; Gair, Jonathan R.; Mandel, Ilya; Cutler, Curt

    2015-01-01

    We show how to obtain a Bayesian estimate of the rates or numbers of signal and background events from a set of events when the shapes of the signal and background distributions are known, can be estimated, or approximated; our method works well even if the foreground and background event distributions overlap significantly and the nature of any individual event cannot be determined with any certainty. We give examples of determining the rates of gravitational-wave events in the presence of background triggers from a template bank when noise parameters are known and/or can be fit from the trigger data. We also give an example of determining globular-cluster shape, location, and density from an observation of a stellar field that contains a nonuniform background density of stars superimposed on the cluster stars.
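
    A drastically simplified version of the counting problem may help fix ideas: with a known background rate b and an observed count n ~ Poisson(s + b), the posterior of the signal rate s under a flat prior can be evaluated on a grid. The sketch below (Python with SciPy; all numbers invented) is only a toy; the paper's event-level mixture treatment is far more general.

      # Toy counting problem: posterior of the signal rate s given
      # n ~ Poisson(s + b) with known background b and a flat prior on s.
      import numpy as np
      from scipy.stats import poisson

      n_obs, b = 12, 5.0
      s_grid = np.linspace(0.0, 30.0, 3001)
      posterior = poisson.pmf(n_obs, s_grid + b)     # likelihood x flat prior
      posterior /= np.trapz(posterior, s_grid)       # normalize on the grid

      s_mean = np.trapz(s_grid * posterior, s_grid)
      print(f"posterior mean signal rate ~ {s_mean:.1f}")   # near n_obs - b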

  20. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demands of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation has remained unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization tests. The three models are evaluated, and SOC estimation conducted with the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
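
    As context for the EKF-Ah method named above, a toy one-state extended Kalman filter for SOC is sketched below (Python with NumPy; the linear OCV curve, resistance, and noise values are all assumptions, and the study's Thevenin/PNGV/DP models are considerably richer than this).

      # A toy one-state EKF for SOC (illustrative only, not the paper's model).
      dt, Q = 1.0, 3600.0 * 2.0           # time step [s], capacity [As] (2 Ah)
      R0 = 0.05                           # assumed ohmic resistance [ohm]
      ocv = lambda s: 3.0 + 1.2 * s       # crude linear OCV curve (assumed)
      docv = 1.2                          # its slope dOCV/dSOC

      Qw, Rv = 1e-7, 1e-4                 # process / measurement noise (assumed)

      def ekf_step(soc_est, P, current, v_meas):
          # predict: coulomb counting (discharge current taken positive)
          soc_pred = soc_est - current * dt / Q
          P_pred = P + Qw
          # update: correct with the terminal-voltage residual
          v_pred = ocv(soc_pred) - R0 * current
          K = P_pred * docv / (docv * P_pred * docv + Rv)   # Kalman gain
          soc_new = soc_pred + K * (v_meas - v_pred)
          return soc_new, (1 - K * docv) * P_pred

      soc_est, P = ekf_step(soc_est=0.9, P=0.01, current=2.0, v_meas=4.02)
      print(round(soc_est, 4))            # filter pulls the estimate upward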

  1. Probing myocardium biomechanics using quantitative optical coherence elastography

    Science.gov (United States)

    Wang, Shang; Lopez, Andrew L.; Morikawa, Yuka; Tao, Ge; Li, Jiasong; Larina, Irina V.; Martin, James F.; Larin, Kirill V.

    2015-03-01

    We present a quantitative optical coherence elastographic method for noncontact assessment of myocardium elasticity. The method is based on shear wave imaging optical coherence tomography (SWI-OCT), where a focused air-puff system is used to induce localized tissue deformation through a low-pressure short-duration air stream and a phase-sensitive OCT system is utilized to monitor the propagation of the induced tissue displacement with nanoscale sensitivity. The 1-D scanning of M-mode OCT imaging and the application of optical phase retrieval and mapping techniques enable the reconstruction and visualization of 2-D depth-resolved shear wave propagation in tissue with ultra-high frame rate. The feasibility of this method for quantitative elasticity measurement is demonstrated on tissue-mimicking phantoms, with the estimated Young's modulus compared with uniaxial compression tests. We also performed pilot experiments on ex vivo mouse cardiac muscle tissues with normal and genetically altered cardiomyocytes. Our results indicate this noncontact quantitative optical coherence elastographic method can be a useful tool for cardiac muscle research.
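
    The conversion from a measured shear-wave speed to a Young's modulus commonly assumes a nearly incompressible tissue (Poisson ratio about 0.5), giving E = 3ρc²; a back-of-envelope sketch with invented numbers (Python):

      # Back-of-envelope elastography conversion: for nearly incompressible
      # tissue (Poisson ratio ~0.5), Young's modulus E = 3 * rho * c^2.
      rho = 1000.0       # tissue density [kg/m^3] (assumed)
      c = 2.5            # measured shear-wave speed [m/s] (example value)
      E = 3.0 * rho * c**2
      print(f"Young's modulus ~ {E / 1000:.1f} kPa")   # ~18.8 kPa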

  2. A Quantitative Version of a Theorem due to Borwein-Reich-Shafrir

    DEFF Research Database (Denmark)

    Kohlenbach, Ulrich

    2001-01-01

    We give a quantitative analysis of a result due to Borwein, Reich and Shafrir on the asymptotic behaviour of the general Krasnoselski-Mann iteration for nonexpansive self-mappings of convex sets in arbitrary normed spaces. Besides providing explicit bounds we also get new qualitative results concerning the independence of the rate of asymptotic regularity of that iteration from various input data. In the special case of bounded convex sets, where by well-known results of Ishikawa, Edelstein/O'Brien and Goebel/Kirk the norm of the iteration converges to zero, we obtain uniform bounds which do … bounds were known in that bounded case. For the unbounded case, no quantitative information was known before. Our results were obtained in a case study of analysing non-effective proofs in analysis by certain logical methods. General logical meta-theorems of the author guarantee (at least under some …

  3. Giving USA 1997: The Annual Report on Philanthropy for the Year 1996.

    Science.gov (United States)

    Kaplan, Ann E., Ed.

    This report presents a comprehensive review of private philanthropy in the United States during 1996. After a preliminary section, the first section presents data on giving, using text, graphs, and charts. Sections cover: overall 1996 contributions; changes in giving by source and use; total giving (1966-1996); inflation-adjusted giving in 5-year…

  4. Role of image analysis in quantitative characterisation of nuclear fuel materials

    International Nuclear Information System (INIS)

    Dubey, J.N.; Rao, T.S.; Pandey, V.D.; Majumdar, S.

    2005-01-01

    Image analysis is one of the important techniques widely used for materials characterization. It provides quantitative estimation of the microstructural features present in a material. This information is valuable for establishing the criteria for taking the fuel to high burn-up. The Radiometallurgy Division has been carrying out development and fabrication of plutonium-related fuels for different types of reactors, viz. Purnima, the Fast Breeder Test Reactor (FBTR), the Prototype Fast Breeder Reactor (PFBR), the Boiling Water Reactor (BWR), the Advanced Heavy Water Reactor (AHWR), the Pressurised Heavy Water Reactor (PHWR) and the KAMINI reactor. Image analysis has been carried out on microstructures of PHWR, AHWR, FBTR and KAMINI fuels. Samples were prepared as per the standard ASTM metallographic procedure. Digital images of the microstructures of these specimens were obtained using a CCD camera attached to the optical microscope. These images are stored on a computer and used for detection and analysis of features of interest with image analysis software. The quantitative image analysis technique has been standardised and used for finding out the type of porosity and its size, shape and distribution in the above sintered oxide and carbide fuels. The technique has also been used for quantitative estimation of the different phases present in KAMINI fuel. Image analysis results have been summarised and presented in this paper. (author)

  5. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    Science.gov (United States)

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  6. Determination of Calcium in Cereal with Flame Atomic Absorption Spectroscopy: An Experiment for a Quantitative Methods of Analysis Course

    Science.gov (United States)

    Bazzi, Ali; Kreuz, Bette; Fischer, Jeffrey

    2004-01-01

    An experiment for the determination of calcium in cereal using a two-increment standard addition method in conjunction with flame atomic absorption spectroscopy (FAAS) is demonstrated. The experiment is intended to introduce students to the principles of atomic absorption spectroscopy, giving them hands-on experience using quantitative methods of…

  7. Conjugate whole-body scanning system for quantitative measurement of organ distribution in vivo

    International Nuclear Information System (INIS)

    Tsui, B.M.W.; Chen, C.T.; Yasillo, N.J.; Ortega, C.J.; Charleston, D.B.; Lathrop, K.A.

    1979-01-01

    The determination of accurate, quantitative, biokinetic distribution of an internally dispersed radionuclide in humans is important in making realistic radiation absorbed dose estimates, studying biochemical transformations in health and disease, and developing clinical procedures indicative of abnormal functions. In order to collect these data, a whole-body imaging system is required which provides both adequate spatial resolution and some means of absolute quantitation. Based on these considerations, a new whole-body scanning system has been designed and constructed that employs the conjugate counting technique. The conjugate whole-body scanning system provides an efficient and accurate means of collecting absolute quantitative organ distribution data of radioactivity in vivo

  8. Contrast-enhanced 3T MR perfusion of musculoskeletal tumours. T1 value heterogeneity assessment and evaluation of the influence of T1 estimation methods on quantitative parameters

    Energy Technology Data Exchange (ETDEWEB)

    Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Verbizier, Jacques de; Blum, Alain [Hopital Central, CHRU-Nancy, Service d'Imagerie Guilloz, Nancy (France); Chen, Bailiang; Beaumont, Marine [Universite de Lorraine, Laboratoire IADI, UMR S 947, Nancy (France); Badr, Sammy; Cotten, Anne [CHRU Lille Centre de Consultations et d'Imagerie de l'Appareil Locomoteur, Department of Radiology and Musculoskeletal Imaging, Lille (France)

    2017-12-15

    To evaluate intra-tumour and striated-muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based, and gadolinium-concentration based with fixed and with variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). The area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from, and more heterogeneous than, those in normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. (orig.)

  9. MCM - 2 and Ki - 67 as proliferation markers in renal cell carcinoma: A quantitative and semi - quantitative analysis.

    Science.gov (United States)

    Mehdi, Muhammad Zain; Nagi, Abdul Hanan; Naseem, Nadia

    2016-01-01

    Fuhrman nuclear grade is the most important histological parameter for predicting prognosis in patients with renal cell carcinoma (RCC). However, it suffers from inter-observer and intra-observer variation, giving rise to the need for a parameter that not only correlates with nuclear grade but is also objective and reproducible. Proliferation is a measure of the aggressiveness of a tumour, and it is strongly correlated with Fuhrman nuclear grade, clinical survival and recurrence in RCC. Ki-67 is conventionally used to assess proliferation. Mini-chromosome maintenance 2 (MCM-2) is a lesser-known marker of proliferation and identifies a greater proliferation fraction. This study was designed to assess the prognostic significance of MCM-2 by comparing it with Fuhrman nuclear grade and Ki-67. A total of n = 50 cases of various ages, stages, histological subtypes and grades of RCC were selected for this study. Immunohistochemical staining using Ki-67 (MIB-1, mouse monoclonal antibody, Dako) and MCM-2 (mouse monoclonal antibody, Thermo) was performed on the paraffin-embedded blocks in the Department of Morbid Anatomy and Histopathology, University of Health Sciences, Lahore. Labeling indices (LI) were determined by two pathologists independently using quantitative and semi-quantitative analysis. Statistical analysis was carried out using SPSS 20.0. The Kruskal-Wallis test was used to determine the correlation of the proliferation markers with grade, and Pearson's correlation was used to determine the correlation between the two proliferation markers. The labeling index of MCM-2 (median = 24.29%) was found to be much higher than that of Ki-67 (median = 13.05%). Both markers were significantly related to grade (p = 0.00; Kruskal-Wallis test). The LI of MCM-2 was found to correlate significantly with the LI of Ki-67 (r = 0.0934; p = 0.01, Pearson's correlation). Results of the semi-quantitative analysis correlated well with the quantitative analysis. Both Ki-67 and MCM-2 are markers of proliferation which are closely linked to grade. Therefore, they

  10. Quantitative isotopes miction cystoureterography (QIMCU)

    International Nuclear Information System (INIS)

    Szy, D.A.G.; Stroetges, M.W.; Funke-Voelkers, R.

    1982-01-01

    A simple method for the quantitative evaluation of vesicoureteral reflux was developed. It allows the determination of (a) the volume of reflux and (b) the volume of the bladder at each point in time during the examination. QIMCU gives an insight into the dynamics of reflux, reflux volume, and actual bladder volume. Clinical application in 37 patients with 53 insufficient ureteral orifices (i.e., reflux) showed that the onset of reflux occurred in 60% of cases as early as the first five minutes of the examination, but later in the remaining 40%. The maximal reflux was found in only 26% during the first five minutes. The reflux volume exceeded 3.5 ml in more than 50% of cases. The international grading corresponds with the reflux volume determined by this method. Radionuclide cystoureterography can be used in childhood as well as in adults. Because the radiation exposure is low, the method can be recommended for the initial examination and for follow-up studies. (Author)

  11. Quantitative evaluations in planar myocardial scintigraphy using 201-thallium

    International Nuclear Information System (INIS)

    Kaiser, J.W.

    1987-01-01

    The observation that judgements of myocardial images obtained by 201-thallium scintigraphy tend to vary considerably between investigators prompted us to develop two versions of a quantitative evaluation technique which, after orthogonal-polar adjustment of the coordinates (with the centre of the left ventricle as the origin of the coordinate system), allows the counting rates to be expressed as goniometric functions and shown in graphs. The methods under investigation did not, however, appear to give reasonable approximations to a 'normal range' on the basis of which clearer distinctions could be made between scintiscans with and without pathological findings. (orig./MG) [de

  12. New generation quantitative x-ray microscopy encompassing phase-contrast

    International Nuclear Information System (INIS)

    Wilkins, S.W.; Mayo, S.C.; Gureyev, T.E.; Miller, P.R.; Pogany, A.; Stevenson, A.W.; Gao, D.; Davis, T.J.; Parry, D.J.; Paganin, D.

    2000-01-01

    Full text: We briefly outline a new approach to X-ray ultramicroscopy using projection imaging in a scanning electron microscope (SEM). Compared to earlier approaches, the new approach offers spatial resolution of ≤0.1 micron and includes novel features such as: (i) phase contrast, to give additional sample information over a wide energy range; and (ii) rapid phase/amplitude extraction algorithms, to enable new real-time modes of microscopic imaging. Widespread applications are envisaged in fields such as materials science, biomedical research, and microelectronics device inspection. Some illustrative examples are presented. The quantitative methods described here are also very relevant to X-ray projection microscopy using synchrotron sources

  13. Long-Term Quantitative Precipitation Estimates (QPE) at High Spatial and Temporal Resolution over CONUS: Bias-Adjustment of the Radar-Only National Mosaic and Multi-sensor QPE (NMQ/Q2) Precipitation Reanalysis (2001-2012)

    Science.gov (United States)

    Prat, Olivier; Nelson, Brian; Stevens, Scott; Seo, Dong-Jun; Kim, Beomgeun

    2015-04-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) system, based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2001 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial resolution and a 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Several in-situ datasets are available to assess the biases of the radar-only product and to adjust for those biases to provide a multi-sensor QPE. The rain gauge networks that are used, such as the Global Historical Climatology Network-Daily (GHCN-D), the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), and the Climate Reference Network (CRN), have different spatial densities and temporal resolutions. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we face is the difficulty of incorporating surface measurements of differing resolution and quality to adjust gridded precipitation estimates. Another challenge is the choice of adjustment technique. The objective of this work is threefold. First, we investigate how the different in-situ networks can impact the precipitation estimates as a function of spatial density, sensor type, and temporal resolution. Second, we assess conditional and unconditional biases of the radar-only QPE for various time scales (daily, hourly, 5-min) using in-situ precipitation observations. Finally, after assessing the bias and applying reduction or elimination techniques, we are using a unique in-situ dataset merging the different RG networks (CRN, ASOS, HADS, GHCN-D) to

  14. Analysis of Ingredient Lists to Quantitatively Characterize ...

    Science.gov (United States)

    The EPA’s ExpoCast program is developing high throughput (HT) approaches to generate the needed exposure estimates to compare against HT bioactivity data generated from the US inter-agency Tox21 and the US EPA ToxCast programs. Assessing such exposures for the thousands of chemicals in consumer products requires data on product composition. This is a challenge since quantitative product composition data are rarely available. We developed methods to predict the weight fractions of chemicals in consumer products from weight fraction-ordered chemical ingredient lists, and curated a library of such lists from online manufacturer and retailer sites. The probabilistic model predicts weight fraction as a function of the total number of reported ingredients, the rank of the ingredient in the list, the minimum weight fraction for which ingredients were reported, and the total weight fraction of unreported ingredients. Weight fractions predicted by the model compared very well to available quantitative weight fraction data obtained from Material Safety Data Sheets for products with 3-8 ingredients. Lists were located from the online sources for 5148 products containing 8422 unique ingredient names. A total of 1100 of these names could be located in EPA’s HT chemical database (DSSTox), and linked to 864 unique Chemical Abstract Service Registration Numbers (392 of which were in the Tox21 chemical library). Weight fractions were estimated for these 864 CASRN. Using a
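
    The fitted parameters of the EPA's probabilistic model are not given in the record; as a hedged stand-in, the sketch below (Python) turns a rank-ordered list of n ingredients into point estimates by assuming the weight fractions behave like uniform random spacings summing to one, for which the expected i-th largest spacing is (1/n) times the sum of 1/j for j = i..n.

      # Not the ExpoCast model itself: a neutral rank-to-weight-fraction
      # baseline assuming the fractions are uniform random spacings on [0,1].
      def rank_to_weight_fractions(n):
          # expected i-th largest of n uniform spacings: (1/n) * sum_{j=i..n} 1/j
          return [sum(1.0 / j for j in range(i, n + 1)) / n
                  for i in range(1, n + 1)]

      w = rank_to_weight_fractions(5)
      print([round(x, 3) for x in w], "sum =", round(sum(w), 3))
      # [0.457, 0.257, 0.157, 0.09, 0.04] sum = 1.0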

  15. Mesoscale and Local Scale Evaluations of Quantitative Precipitation Estimates by Weather Radar Products during a Heavy Rainfall Event

    Directory of Open Access Journals (Sweden)

    Basile Pauthier

    2016-01-01

    A 24-hour heavy rainfall event occurred in northeastern France from November 3 to 4, 2014. The accuracy of the quantitative precipitation estimation (QPE) by the PANTHERE and ANTILOPE radar-based gridded products during this particular event is examined at both mesoscale and local scale, in comparison with two reference rain-gauge networks. Mesoscale accuracy was assessed for the total rainfall accumulated during the 24-hour event, using the Météo France operational rain-gauge network. Local scale accuracy was assessed for both total event rainfall and hourly rainfall accumulations, using the recently developed HydraVitis high-resolution rain-gauge network. Evaluation shows that (1) PANTHERE radar-based QPE underestimates rainfall fields at mesoscale and local scale; (2) both PANTHERE and ANTILOPE successfully reproduced the spatial variability of rainfall at local scale; (3) PANTHERE underestimates can be significantly improved at local scale by merging these data with rain-gauge data interpolation (i.e., ANTILOPE). This study provides a preliminary evaluation of radar-based QPE at local scale, suggesting that merged products are invaluable for applications at very high resolution. The results obtained underline the importance of using high-density rain-gauge networks to obtain information at high spatial and temporal resolution, for better understanding of local rainfall variation and to calibrate remotely sensed rainfall products.

  16. A study to determine whether the volume-weighted computed tomography dose index gives reasonable estimates of organ doses for thai patients undergoing abdomen and pelvis computed tomography examinations

    Directory of Open Access Journals (Sweden)

    Supawitoo Sookpeng

    2017-01-01

    Introduction: Values for the CTDIvol, which is displayed on scanner consoles, give doses relative to a phantom much larger than most Thai patients, and the CTDIvol does not take account of differences in patient size, which affect organ doses. Objective: The purpose of this study was to evaluate relationships for the size-specific dose estimate (SSDE) and volume-weighted computed tomography (CT) dose index (CTDIvol) with patient size for CT scanners operating under automatic tube current modulation (ATCM). Methods: Retrospective data from 244 patients who had undergone abdomen and pelvis examinations on GE and Siemens CT scanners were included in this study. The combination of anteroposterior (AP) and lateral dimensions at the level of the first lumbar vertebra (L1) was used to represent patient size. Image noise within the liver was measured, and values of the absorbed dose for organs covered by the primary beam, such as the liver, stomach and kidney, were calculated using methods described in the literature. Values of CTDIvol were recorded and SSDE calculated according to the American Association of Physicists in Medicine (AAPM) Report No. 204. Linear regression models were used to evaluate the relationship between SSDE, CTDIvol, image noise and patient size. Results: SSDE is 20%-50% larger than the CTDIvol, with values for larger patients being more representative. Both the CTDIvol and image noise decreased with patient size for Siemens scanners, but the decline in SSDE was less significant. For the GE scanner, the CTDIvol was a factor of 3-4 lower in small patients compared to larger ones, while the SSDE only decreased by a factor of two. Noise actually decreased slightly with patient size. Conclusion: Values of SSDE were similar to the doses calculated for the liver, stomach and kidney, which are covered by the primary beam, confirming that it provides a good estimate of organ-absorbed dose.
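
    For reference, SSDE is obtained from the console CTDIvol via a size-dependent conversion factor; the sketch below (Python) uses the exponential fit for the 32 cm body phantom attributed to AAPM Report No. 204 (the coefficients are quoted from that report's published fit and should be verified against it before any use).

      # Sketch of the AAPM Report 204 conversion (32 cm body phantom).
      import math

      def ssde_mgy(ctdi_vol_mgy, ap_cm, lat_cm):
          d_eff = math.sqrt(ap_cm * lat_cm)               # effective diameter
          f = 3.704369 * math.exp(-0.03671937 * d_eff)    # size conversion factor
          return f * ctdi_vol_mgy

      # Small adult: the size-specific estimate exceeds the console CTDIvol,
      # consistent with the 20%-50% difference reported above.
      print(round(ssde_mgy(ctdi_vol_mgy=10.0, ap_cm=20.0, lat_cm=28.0), 1))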

  18. Application of LC–MS/MS for quantitative analysis of glucocorticoids and stimulants in biological fluids

    OpenAIRE

    Haneef, Jamshed; Shaharyar, Mohammad; Husain, Asif; Rashid, Mohd; Mishra, Ravinesh; Parveen, Shama; Ahmed, Niyaz; Pal, Manoj; Kumar, Deepak

    2013-01-01

    Liquid chromatography tandem mass chromatography (LC–MS/MS) is an important hyphenated technique for quantitative analysis of drugs in biological fluids. Because of high sensitivity and selectivity, LC–MS/MS has been used for pharmacokinetic studies, metabolites identification in the plasma and urine. This manuscript gives comprehensive analytical review, focusing on chromatographic separation approaches (column packing materials, column length and mobile phase) as well as different acquisiti...

  19. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely
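
    As a generic illustration of the estimation strategy (global stochastic optimization of a least-squares objective, as used for the state-based model), the sketch below uses Python with SciPy's dual_annealing; the exponential "kill curve" standing in for the whole-blood data is entirely made up.

      # Generic parameter-estimation sketch, not the study's actual SBM/ABM.
      import numpy as np
      from scipy.optimize import dual_annealing

      t = np.linspace(0.0, 4.0, 20)
      data = 100.0 * np.exp(-0.8 * t)            # stand-in "kill curve" data

      def sse(params):
          n0, k = params                         # initial count, decay rate
          return float(np.sum((n0 * np.exp(-k * t) - data) ** 2))

      # Simulated-annealing-style global search over bounded parameter space
      result = dual_annealing(sse, bounds=[(1.0, 500.0), (0.01, 5.0)], seed=3)
      print(result.x)                            # ~ [100, 0.8]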

  1. Phytoremediation: realistic estimation of modern efficiency and future possibility

    Energy Technology Data Exchange (ETDEWEB)

    Kravets, A; Pavlenko, Y [Institute of Cell Biology and Genetic Engineering NAS, Kiev (Ukraine); Kusmenko, L; Ermak, M [Institute of Plant Physiology and Genetic NAS, Vasilkovsky, Kiev (Ukraine)

    1996-11-01

    Kinetic peculiarities of the radionuclides migration in the system 'soil-plant' of the Chernobyl region have been investigated by means of numerical modelling. Quantitative estimation of the half-time of natural cleaning of soil has been realised. The potential and efficiency of modern phytoremediation technology have been estimated. Outlines of the general demands and future possibilities of phytoremediation biotechnology have been formulated. (author)

  2. Directional Transverse Oscillation Vector Flow Estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2017-01-01

    A method for estimating vector velocities using transverse oscillation (TO) combined with directional beamforming is presented. In Directional Transverse Oscillation (DTO) a normal focused field is emitted and the received signals are beamformed in the lateral direction transverse to the ultrasound beam to increase the amount of data for vector velocity estimation. The approach is self-calibrating, as the lateral oscillation period is estimated from the directional signal through a Fourier transform to yield quantitative velocity results over a large range of depths. The approach was extensively simulated using Field IIpro and implemented on the experimental SARUS scanner in connection with a BK Medical 8820e convex array transducer. Velocity estimates for DTO are found for beam-to-flow angles of 60°, 75°, and 90°, and vessel depths from 24 to 156 mm. Using 16 emissions, the standard deviation (SD

  3. Optimal estimations of random fields using kriging

    International Nuclear Information System (INIS)

    Barua, G.

    2004-01-01

    Kriging is a statistical procedure for estimating the best weights of a linear estimator. Suppose there is a point, an area, or a volume of ground over which we do not know a hydrological variable and wish to estimate it. In order to produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators for which the weights sum up to one. The problem is how to determine the best weights, for which the estimation variance is the least. The resulting system of equations is generally known as the kriging system, and the estimator produced is the kriging estimator. The variance of the kriging estimator can be found by substituting the weights into the general estimation variance equation. We assume here a linear model for the semi-variogram. Applying the model to the equations, we obtain a set of kriging equations. By solving these equations, we obtain the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance
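
    A minimal numerical rendering of the kriging system described above (Python with NumPy; the one-dimensional sample locations and the linear semivariogram slope are assumed for illustration):

      # Ordinary kriging weights for a 1-D linear semivariogram g(h) = b*|h|.
      import numpy as np

      x = np.array([0.0, 1.0, 3.0])      # sample locations
      x0 = 2.0                           # estimation point
      b = 1.0                            # semivariogram slope (assumed)
      gamma = lambda h: b * np.abs(h)

      n = len(x)
      A = np.ones((n + 1, n + 1))
      A[:n, :n] = gamma(x[:, None] - x[None, :])
      A[n, n] = 0.0                      # Lagrange-multiplier row/column
      rhs = np.append(gamma(x - x0), 1.0)

      sol = np.linalg.solve(A, rhs)      # solve the kriging system
      w, mu = sol[:n], sol[n]
      kriging_variance = rhs[:n] @ w + mu
      print(w, round(kriging_variance, 3))   # weights sum to one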

  4. Estimation of the tail index for lattice-valued sequences

    DEFF Research Database (Denmark)

    Matsui, Muneya; Mikosch, Thomas Valentin; Tafakori, Laleh

    2013-01-01

    If one applies the Hill, Pickands or Dekkers–Einmahl–de Haan estimators of the tail index of a distribution to data which are rounded off, one often observes that these estimators oscillate strongly as a function of the number k of order statistics involved. We study this phenomenon in the case of … of a Pareto distribution. We provide formulas for the expected value and variance of the Hill estimator and give bounds on k when the central limit theorem is still applicable. We illustrate the theory by using simulated and real-life data.
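
    For concreteness, the Hill estimator in question is k divided by the sum of log-ratios of the top k order statistics to the (k+1)-th; the sketch below (Python with NumPy) computes it for simulated Pareto data and for the same data rounded off, the situation in which the oscillation arises.

      # The Hill estimator whose behaviour under rounding is studied above.
      import numpy as np

      def hill(data, k):
          xs = np.sort(data)[::-1]                    # descending order statistics
          return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

      rng = np.random.default_rng(7)
      pareto = (1.0 - rng.random(5000)) ** (-1.0 / 2.0)   # true tail index 2
      print(hill(pareto, 200))                # close to 2
      print(hill(np.round(pareto), 200))      # rounding off distorts the estimate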

  5. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung

    Science.gov (United States)

    Holman, Beverley F.; Cuplov, Vesna; Hutton, Brian F.; Groves, Ashley M.; Thielemans, Kris

    2016-04-01

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant 18F-FDG and 18F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  7. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  8. Separation and Quantitation of Polyamines in Plant Tissue by High Performance Liquid Chromatography of Their Dansyl Derivatives

    Science.gov (United States)

    Smith, Mary A.; Davies, Peter J.

    1985-01-01

    High performance liquid chromatography in combination with fluorescence spectrophotometry can be used to separate and quantitate polyamines (putrescine, cadaverine, spermidine, spermine), prepared as their dansyl derivatives, from plant tissue. The procedure gives sensitive and consistent results for polyamine determinations in plant tissue. In a standard mixture, the minimal detection level was less than 1 picomole of polyamines. PMID:16664216

  9. Quantitative EDXS: Influence of geometry on a four detector system

    International Nuclear Information System (INIS)

    Kraxner, Johanna; Schäfer, Margit; Röschel, Otto; Kothleitner, Gerald; Haberfehlner, Georg; Paller, Manuel; Grogger, Werner

    2017-01-01

    The influence of the geometry on quantitative energy dispersive X-ray spectrometry (EDXS) analysis is determined for a ChemiSTEM system (Super-X) in combination with a low-background double-tilt specimen holder. For the first time a combination of experimental measurements with simulations is used to determine the positions of the individual detectors of a Super-X system. These positions allow us to calculate the detector's solid angles and estimate the amount of detector shadowing and its influence on quantitative EDXS analysis, including absorption correction using the ζ-factor method. Both shadowing by the brass portions and the beryllium specimen carrier of the holder severely affect the quantification of low to medium atomic number elements. A multi-detector system is discussed in terms of practical consequences of the described effects, and a quantitative evaluation of a Fayalit sample is demonstrated. Corrections and suggestions for minimizing systematic errors are discussed to improve quantitative methods for a multi-detector system. - Highlights: • Geometrical issues for EDXS quantification on a Super-X system. • Realistic model of a specimen holder using X-ray computed tomography. • Determination of the exact detector positions of a Super-X system. • Influence of detector shadowing and Be specimen carrier on quantitative EDXS.

  10. The effects of resonances on time delay estimation for water leak detection in plastic pipes

    Science.gov (United States)

    Almeida, Fabrício C. L.; Brennan, Michael J.; Joseph, Phillip F.; Gao, Yan; Paschoalini, Amarildo T.

    2018-04-01

    In the use of acoustic correlation methods for water leak detection, sensors are placed at pipe access points either side of a suspected leak, and the peak in the cross-correlation function of the measured signals gives the time difference (delay) between the arrival times of the leak noise at the sensors. Combining this information with the speed at which the leak noise propagates along the pipe, gives an estimate for the location of the leak with respect to one of the measurement positions. It is possible for the structural dynamics of the pipe system to corrupt the time delay estimate, which results in the leak being incorrectly located. In this paper, data from test-rigs in the United Kingdom and Canada are used to demonstrate this phenomenon, and analytical models of resonators are coupled with a pipe model to replicate the experimental results. The model is then used to investigate which of the two commonly used correlation algorithms, the Basic Cross-Correlation (BCC) function or the Phase Transform (PHAT), is more robust to the undesirable structural dynamics of the pipe system. It is found that time delay estimation is highly sensitive to the frequency bandwidth over which the analysis is conducted. Moreover, it is found that the PHAT is particularly sensitive to the presence of resonances and can give an incorrect time delay estimate, whereas the BCC function is found to be much more robust, giving a consistently accurate time delay estimate for a range of dynamic conditions.
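
    The two correlators compared above can be written compactly in the frequency domain: the BCC peaks the inverse transform of the raw cross-spectrum, while the PHAT first whitens it by its magnitude. A synthetic two-sensor sketch (Python with NumPy; all signal parameters invented):

      # Minimal BCC vs. PHAT time-delay comparison on synthetic leak noise.
      import numpy as np

      rng = np.random.default_rng(2)
      n, true_delay = 2**14, 37
      src = rng.standard_normal(n)                    # broadband leak-like noise
      s1 = src + 0.1 * rng.standard_normal(n)
      s2 = np.roll(src, true_delay) + 0.1 * rng.standard_normal(n)

      X1, X2 = np.fft.rfft(s1), np.fft.rfft(s2)
      cross = X2 * np.conj(X1)                        # cross-spectrum

      def peak_lag(spec, n):
          cc = np.fft.irfft(spec, n)                  # circular cross-correlation
          lag = int(np.argmax(cc))
          return lag - n if lag > n // 2 else lag     # unwrap negative lags

      print("BCC :", peak_lag(cross, n))                            # -> 37
      print("PHAT:", peak_lag(cross / (np.abs(cross) + 1e-12), n))  # -> 37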

  11. THE METHODS FOR ESTIMATING REGIONAL PROFESSIONAL MOBILE RADIO MARKET POTENTIAL

    Directory of Open Access Journals (Sweden)

    Y.A. Korobeynikov

    2008-12-01

    The paper presents the author's methods for estimating the market potential of regional professional mobile radio, which belongs to high-tech B2B markets. These methods take into consideration such market peculiarities as the great range and complexity of the products, technological constraints, and the infrastructure development needed for operating the technological systems. The paper gives an estimate of the professional mobile radio potential in the Perm region. This estimate is already used by one of the systems integrators for its strategy development.

  12. Analysis of stable chromosomal alterations induced by in vitro irradiation of blood samples at known doses. Preliminary results obtained by means of chromosome painting

    International Nuclear Information System (INIS)

    Prieto, M.J.; Moreno, M.; Gomez-Espi, M.; Olivares, P.; Herranz, R.

    1998-01-01

    At the University General Hospital Gregorio Marañón, once the technique of fluorescence in situ hybridization with chromosome painting of pairs 1 and 2 had been standardized, blood samples were irradiated in vitro at known doses. The objective of these irradiations is the construction of an in vitro dose-effect calibration curve for gamma rays. This new curve will allow doses to be estimated in individuals with suspected overexposure to ionizing radiation, overcoming some of the limitations of the classical cytogenetic technique.

  13. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    Science.gov (United States)

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming a key multidisciplinary issue in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative model for estimating ecological compensation, using the county as the study unit, and determine a standard value in order to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  14. REASON-GIVING IN COURT PRACTICE: THE EXAMPLE OF FRENCH IMMIGRATION LITIGATION

    Directory of Open Access Journals (Sweden)

    Mathilde Cohen, Columbia Law School-School of Law, United States

    2012-10-01

    Abstract: This Article examines the thesis according to which the practice of giving reasons for decisions is a central element of liberal democracies. In this view, public institutions' practice—and sometimes duty—to give reasons is required so that each individual may view the state as reasonable and therefore, according to deliberative democratic theory, legitimate. Does the giving of reasons in actual court practice achieve these goals? Drawing on empirical research carried out in a French administrative court, this Article argues that, in practice, reason-giving often falls either short of democracy or beyond democracy. Reasons fall short of democracy in the first case because they are transformed from a device designed to "protect" citizens from arbitrariness into a professional norm intended to "protect" the judges themselves and perhaps further their career goals. In the second case, reasons go beyond democracy because judges' ambitions are much greater than to merely provide petitioners with a ground for understanding and criticizing the decision: they aim at positively—and paternalistically in some instances—guiding people's conduct. The discussion proceeds by drawing attention to social aspects that are often neglected in theoretical discussions on reason-giving. A skeptical conclusion is suggested: one can rarely guarantee that any predetermined value will be achieved by the giving of reasons. The degree to which individuals are empowered by the reasons given to them is dependent on the way in which decision-givers envision their reason-giving activity, and this representation is itself conditioned by the social setting of the court. Keywords: Arbitrariness. Reason-giving. Judges.

  15. The accompanying adult: authority to give consent in the UK.

    Science.gov (United States)

    Lal, Seema Madhur Lata; Parekh, Susan; Mason, Carol; Roberts, Graham

    2007-05-01

    Children may be accompanied by various people when attending for dental treatment. Before treatment is started, there is a legal requirement that the operator obtain informed consent for the proposed procedure. In the case of minors, the person authorized to give consent (with parental responsibility) is usually a parent. The aim was to ascertain whether persons accompanying children attending the Department of Paediatric Dentistry at the Eastman Dental Hospital, London were empowered to give consent for the child's dental treatment. A total of 250 persons accompanying children were surveyed over a 6-month period. A questionnaire was used to establish whether the accompanying person(s) were authorized to give consent. The study showed that 12% of accompanying persons had no legal authority to give consent for the child's dental treatment. Clinicians need to be aware of the status of persons accompanying children to ensure valid consent is obtained.

  16. Comparison of estimated and measured sediment yield in the Gualala River

    Science.gov (United States)

    Matthew O’Connor; Jack Lewis; Robert Pennington

    2012-01-01

    This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by the need to assess potential water quality impacts of a proposed vineyard development project in the Gualala River watershed. Previous erosion rate estimates were developed using sediment source assessment techniques by the North Coast Regional...

  17. Spectral Velocity Estimation in the Transverse Direction

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    A method for estimating the velocity spectrum for a fully transverse flow at a beam-to-flow angle of 90° is described. The approach is based on the transverse oscillation (TO) method, where an oscillation across the ultrasound beam is made during receive processing. A fourth-order estimator based on the correlation of the received signal is derived. A Fourier transform of the correlation signal yields the velocity spectrum. Performing the estimation for short data segments gives the velocity spectrum as a function of time as for ordinary spectrograms, and it also works for a beam-to-flow angle of 90°. … The estimation scheme can reliably find the spectrum at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with the SARUS experimental scanner and a BK 8820e convex array transducer (BK Medical, Herlev, Denmark). A CompuFlow 1000 (Shelley Automation, Inc, Toronto, Canada) …

  18. Estimation of Internal Flooding Frequency for Screening Analysis of Flooding PSA

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Yang, Jun Eon

    2005-01-01

    The purpose of this paper is to estimate the internal flooding frequency for the quantitative screening analysis of a flooding PSA (Probabilistic Safety Assessment) with appropriate data and estimation methods. In the existing flood PSAs for domestic NPPs (Nuclear Power Plants), a screening analysis was performed first and a detailed analysis was then performed for the areas not screened out. For the quantitative screening analysis, a plant-area-based flood frequency estimated by the MLE (Maximum Likelihood Estimation) method was used, while a component-based flood frequency is used for the detailed analysis. The existing quantitative screening analyses for domestic NPPs have used data from all LWRs (Light Water Reactors), namely PWRs (Pressurized Water Reactors) and BWRs (Boiling Water Reactors), for the internal flood frequency of the auxiliary building and turbine building. In the case of the primary auxiliary building, however, the applicability of data from all LWRs needs to be examined carefully because of the significant differences in equipment between PWR and BWR structures. NUREG/CR-5750 suggested a Bayesian update with the Jeffreys noninformative prior to estimate the initiating event frequency for floods, but it did not describe a flood PSA procedure. Recently, Fleming and Lydell suggested internal flooding frequencies in units of plant operating years and pipe length (in meters) by pipe size for each specific system susceptible to flooding, such as the service water system and the circulating water system. They used the failure rate and the conditional probability of rupture given failure to estimate the internal flooding frequency, and a Bayesian update to reduce uncertainties. Performing the quantitative screening analysis with this method requires the pipe length, by pipe size, of each specific system in each divided area, to change from the concept of a component-based frequency to that of a plant-area-based frequency.
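    As a rough illustration of the Bayesian update mentioned above, the sketch below updates a Poisson flood frequency with a Jeffreys noninformative prior, in the spirit of the NUREG/CR-5750 suggestion; the event count and exposure time are invented placeholders, not data from any plant.

```python
# Sketch: Bayesian update of an initiating-event (flood) frequency with a
# Jeffreys noninformative prior. Assumption: floods are Poisson events, with
# x events observed over an exposure time T (the numbers below are made up).
from scipy import stats

x = 2        # observed flood events (hypothetical)
T = 350.0    # accumulated exposure, reactor-years (hypothetical)

# The Jeffreys prior for a Poisson rate yields a Gamma(x + 0.5, T) posterior,
# whose mean is (x + 0.5) / T.
posterior = stats.gamma(a=x + 0.5, scale=1.0 / T)
print(f"posterior mean frequency: {posterior.mean():.2e} per year")
print(f"90% credible interval: ({posterior.ppf(0.05):.2e}, "
      f"{posterior.ppf(0.95):.2e}) per year")
```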

  19. Estimating unknown parameters in haemophilia using expert judgement elicitation.

    Science.gov (United States)

    Fischer, K; Lewandowski, D; Janssen, M P

    2013-09-01

    The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18 years, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.
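    As a minimal sketch of how one expert's quantile judgements could be turned into a parametric distribution before pooling across experts, the example below fits a lognormal to a (P10, median, P90) triple; the lognormal choice and the numbers are illustrative assumptions, not the study's graphical method.

```python
# Sketch: turning one expert's (P10, median, P90) judgements into a fitted
# lognormal before pooling across experts (e.g. by averaging quantiles).
# The lognormal choice and the numbers are illustrative assumptions.
import numpy as np
from scipy import stats

p10, p50, p90 = 0.9, 12.0, 36.0    # hypothetical joint bleeds per year

mu = np.log(p50)                    # the lognormal median fixes mu
z90 = stats.norm.ppf(0.90)
# pick sigma so the fitted log-scale P10-P90 range matches the stated one
sigma = (np.log(p90) - np.log(p10)) / (2 * z90)

fitted = stats.lognorm(s=sigma, scale=np.exp(mu))
print("fitted P10/P50/P90:", fitted.ppf([0.10, 0.50, 0.90]).round(2))
```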

  20. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
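    For illustration, the sketch below builds a cubic spline path from a proton's entrance and exit positions and directions using a generic cubic-Hermite construction with chord-length tangent scaling; this follows the usual definition of a CSP but is an assumption-laden sketch, not the authors' code.

```python
# Sketch: a cubic spline path (CSP) estimate for one proton in pCT, built
# from its entry/exit positions and directions. Generic cubic-Hermite
# construction; tangent scaling by chord length is a modeling assumption.
import numpy as np

def cubic_spline_path(p_in, d_in, p_out, d_out, n=50):
    """Cubic Hermite curve from entry point/direction to exit point/direction."""
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    d_in = np.asarray(d_in, float) / np.linalg.norm(d_in)
    d_out = np.asarray(d_out, float) / np.linalg.norm(d_out)
    L = np.linalg.norm(p_out - p_in)      # scale tangents to the chord length
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1             # Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p_in + h10*L*d_in + h01*p_out + h11*L*d_out

# the straight-line path (SLP) is simply the chord between the same two points
path = cubic_spline_path([0, 0], [1, 0.05], [200, 3], [1, -0.02])
print(path[:3])
```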

  1. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)

    2011-02-07

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  2. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Science.gov (United States)

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472

  3. Rapid Quantitation of Ascorbic and Folic Acids in SRM 3280 Multivitamin/Multielement Tablets using Flow-Injection Tandem Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Bhandari, Deepak [ORNL]; Kertesz, Vilmos [ORNL]; Van Berkel, Gary J [ORNL]

    2013-01-01

    RATIONALE: Ascorbic acid (AA) and folic acid (FA) are water-soluble vitamins and are usually fortified in food and dietary supplements. For the safety of human health, proper intake of these vitamins is recommended. Improvement in the analysis time required for the quantitative determination of these vitamins in food and nutritional formulations is desired. METHODS: A simple and fast (~5 min) in-tube sample preparation was performed, independently for FA and AA, by mixing extraction solvent with a powdered sample aliquot followed by agitation, centrifugation, and filtration to recover an extract for analysis. Quantitative detection was achieved by flow-injection (1 µL injection volume) electrospray ionization tandem mass spectrometry (ESI-MS/MS) in negative ion mode using the method of standard addition. RESULTS: The method of standard addition was employed for the quantitative estimation of each vitamin in a sample extract. At least 2 spiked and 1 non-spiked sample extracts were injected in triplicate for each quantitative analysis. Given an injection-to-injection interval of approximately 2 min, about 18 min was required to complete the quantitative estimation of each vitamin. The concentration values obtained for the respective vitamins in the standard reference material (SRM) 3280 using this approach were within the statistical range of the certified values provided in the NIST Certificate of Analysis. The estimated limits of detection of FA and AA were 13 and 5.9 ng/g, respectively. CONCLUSIONS: Flow-injection ESI-MS/MS was successfully applied for the rapid quantitation of FA and AA in SRM 3280 multivitamin/multielement tablets.
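    The method of standard addition described above amounts to a linear fit of response versus added analyte and an x-intercept extrapolation; the sketch below shows that arithmetic with invented response values, not the paper's data.

```python
# Sketch: quantitation by the method of standard addition. Fit response vs.
# added concentration and extrapolate to the x-intercept; at zero signal the
# line crosses added = -c0, where c0 is the native concentration.
# Data values are invented for illustration.
import numpy as np

added = np.array([0.0, 50.0, 100.0])    # ng/g of standard added (hypothetical)
signal = np.array([1.20, 2.15, 3.08])   # MS/MS peak response (hypothetical)

slope, intercept = np.polyfit(added, signal, 1)
c0 = intercept / slope                  # native concentration in the extract
print(f"estimated native concentration: {c0:.1f} ng/g")
```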

  4. Debate on Uncertainty in Estimating Bathing Water Quality

    DEFF Research Database (Denmark)

    Larsen, Torben

    1992-01-01

    Estimating the bathing water quality along the shore near a planned sewage discharge requires data on the source strength of bacteria, the die-off of bacteria, and the actual dilution of the sewage. Together these three factors give the actual concentration of bacteria at the spots of interest…

  5. Grids for Kids gives next-generation IT an early start

    CERN Multimedia

    2008-01-01

    "Grids for Kids gives children a crash course in grid computing," explains co-organiser Anna Cook of the Enabling Grids for E-sciencE project. "We introduce them to concepts such as middleware, parallel processing and supercomputing, and give them opportunities for hands-on learning.

  6. Social Relations of Fieldwork: Giving Back in a Research Setting

    Directory of Open Access Journals (Sweden)

    Clare Gupta

    2014-07-01

    The project of this special issue emerged from the guest editors' experiences as field researchers in sub-Saharan Africa. During this time both researchers faced the difficult question of "giving back" to the communities in which, and with whom, they worked—communities that were often far less privileged than the researchers were in terms of wealth, mobility, education, and access to health care. Returning from their field sites, both researchers felt a combination of guilt and frustration that they had not done enough or had not done things right. Thus emerged the idea of bringing together a group of researchers, from a range of disciplines, to discuss the topic of giving back in field research. This editorial describes the idea and process that led to the present collection of articles. The guest editors situate the project in the literature on feminist studies and briefly summarize each of the four thematic sections in this special issue. They conclude by emphasizing that their collection is not a guide to giving back. Rather than lay out hard and fast rules about what, how much, and to whom field researchers should give, their collection offers a series of examples and considerations for giving back in fieldwork.

  7. Evaluation of design flood estimates with respect to sample size

    Science.gov (United States)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimation give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary moments and L-moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample. The reliability indices describe the degree to which design flood predictions can be trusted.
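    As one concrete instance of the estimation task discussed above, the sketch below fits a 3-parameter GEV to a synthetic annual-maximum series by maximum likelihood and reads off a 100-year design flood; scipy's sign convention for the GEV shape parameter (c = -ξ) and all numbers are assumptions for illustration.

```python
# Sketch: fitting a GEV distribution to an annual-maximum flood series and
# reading off a design flood (the 100-year return level). Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_max = stats.genextreme.rvs(c=-0.1, loc=300, scale=80,
                                  size=60, random_state=rng)  # fake record

c, loc, scale = stats.genextreme.fit(annual_max)   # maximum likelihood fit
q100 = stats.genextreme.ppf(1 - 1/100, c, loc, scale)
print(f"estimated 100-year flood: {q100:.0f} m^3/s")
```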

  8. Application of myocardial perfusion quantitative imaging for the evaluation of therapeutic effect in canine with myocardial infarction

    International Nuclear Information System (INIS)

    Liang Hong; Chen Ju; Liu Sheng; Zeng Shiquan

    2000-01-01

    Myocardial blood perfusion (MBP) ECT and quantitative analysis were performed in 10 canines with experimental acute myocardial infarction (AMI). The accuracy of the main quantitative myocardial indexes, including defect volume (DV) and defect fraction (DF), was estimated and correlated with histochemical staining (HS) of the infarcted area. Another 21 AMI canines were divided into a Nd:YAG laser transmyocardial revascularization (LTMR) treated group and a control group. All canines underwent MBP ECT after experimental AMI. The infarcted volume (IV) measured by HS correlated well (r = 0.88) with the DV estimated by quantitative myocardial analysis, while the DF values calculated by the two methods were not significantly different (t = 1.28, P > 0.05). The DF in the LTMR group (27.5% ± 3.9%) was smaller than in the control group (32.1% ± 4.6%) (t = 2.49). 99mTc-MIBI myocardial perfusion SPECT and quantitative analysis can accurately predict myocardial blood flow and the magnitude of injured myocardium. Nd:YAG LTMR could improve myocardial blood perfusion of ischemic myocardium and effectively decrease the infarcted area.

  9. Properties of estimated characteristic roots

    OpenAIRE

    Bent Nielsen; Heino Bohn Nielsen

    2008-01-01

    Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies a non-differentiability so that the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples …

  10. A Quantitative Needs Assessment Technique for Cross-Cultural Work Adjustment Training.

    Science.gov (United States)

    Selmer, Lyn

    2000-01-01

    A study of 67 Swedish expatriate bosses and 104 local Hong Kong middle managers tested a quantitative needs assessment technique measuring work values. Two-thirds of middle managers' work values were not correctly estimated by their bosses, especially instrumental values (pay, benefits, security, working hours and conditions), indicating a need…

  11. Labeled experimental choice design for estimating attribute and availability cross effects with N attributes and specific brand attribute levels

    DEFF Research Database (Denmark)

    Nguyen, Thong Tien

    2011-01-01

    Experimental designs are required in widely used techniques in marketing research, especially for preference-based conjoint analysis and discrete-choice studies. Ideally, marketing researchers prefer orthogonal designs because they give uncorrelated parameter estimates. However, an orthogonal design is not available for every situation, whereas an efficient design based on a computerized design algorithm is always available. This paper presents a method of efficient design for estimating brand models with attribute and availability cross effects, and gives a framework for implementing designs efficient enough to estimate a model with N brands, where each brand has K attributes and each brand attribute has specific levels. The paper also illustrates an example from a food consumption study.

  12. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponent is presented. Finally, we give some examples to verify the feasibility of our result.

  13. A combined usage of stochastic and quantitative risk assessment methods in the worksites: Application on an electric power provider

    International Nuclear Information System (INIS)

    Marhavilas, P.K.; Koulouriotis, D.E.

    2012-01-01

    An individual method cannot build either a realistic forecasting model or a risk assessment process for worksites, and future perspectives should focus on a combined forecasting/estimation approach. The main purpose of this paper is to gain insight into a risk prediction and estimation methodological framework, using the combination of three different methods: the proportional quantitative-risk-assessment technique (PRAT), the time-series stochastic process (TSP), and the method of estimating societal risk (SRE) by F–N curves. In order to prove the usefulness of the combined usage of stochastic and quantitative risk assessment methods, an application to an electric power provider industry is presented, using empirical data.

  14. Microfluorometric mithramycin assay for quantitating the effects of immunotoxicants on lymphocyte activation

    International Nuclear Information System (INIS)

    Quattrone, A.J.; Ranney, D.F.

    1981-01-01

    A semiautomated, microfluorometric assay has been developed for the detection of toxicant-induced changes in lymphocyte DNA content at standard intervals after mitogen activation. DNA is quantitated by solubilizing the cells and determining the fluorescence enhancement that results from formation of the highly specific mithramycin:DNA adduct. The limit of detection is 0.21 μg (30,000 resting cell equivalents) per microliter well. Correlation with the less sensitive, nonautomatable diphenylamine DNA assay gives a correlation coefficient r = 0.91. Prototype substances representative of true immunotoxicants (prostaglandin E2) and common interfering substances (thymidine at 14 M) have been tested. The latter substance produces false positive results in the standard [3H]thymidine assay. The mithramycin assay does not inappropriately detect this interfering substance. It has the characteristics of a highly specific, accurate technique for screening and quantitating immunotoxic drugs, agents, and mediators in patient sera and other complex biological fluids.

  15. The 'Own Children' fertility estimation procedure: a reappraisal.

    Science.gov (United States)

    Avery, Christopher; St Clair, Travis; Levin, Michael; Hill, Kenneth

    2013-07-01

    The Full Birth History has become the dominant source of estimates of fertility levels and trends for countries lacking complete birth registration. An alternative, the 'Own Children' method, derives fertility estimates from household age distributions, but is now rarely used, partly because of concerns about its accuracy. We compared the estimates from these two procedures by applying them to 56 recent Demographic and Health Surveys. On average, 'Own Children' estimates of recent total fertility rates are 3 per cent lower than birth-history estimates. Much of this difference stems from selection bias in the collection of birth histories: women with more children are more likely to be interviewed. We conclude that full birth histories overestimate total fertility, and that the 'Own Children' method gives estimates of total fertility that may better reflect overall national fertility. We recommend the routine application of the 'Own Children' method to census and household survey data to estimate fertility levels and trends.

  16. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests with a design of one factor with two levels, including sample size estimation formulas and their realization based on the formulas and on the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which will guide researchers in implementing the repetition principle during the research design phase.
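    For concreteness, the sketch below evaluates the classical normal-approximation sample-size formula for comparing two means (one factor, two levels), the kind of computation the article's formulas and the SAS POWER procedure implement; the effect size and standard deviation are placeholders.

```python
# Sketch: sample size per group for a two-sided two-sample comparison of
# means, n = 2 * ((z_{1-a/2} + z_{1-b}) * sigma / delta)^2.
# The common SD and detectable difference below are illustrative.
import math
from scipy import stats

alpha, power = 0.05, 0.80
sigma, delta = 10.0, 5.0          # common SD and detectable mean difference

z_a = stats.norm.ppf(1 - alpha / 2)
z_b = stats.norm.ppf(power)
n = 2 * ((z_a + z_b) * sigma / delta) ** 2
print(f"n per group: {math.ceil(n)}")   # ~63 for these inputs
```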

  17. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Directory of Open Access Journals (Sweden)

    Fechner Nikolas

    2010-03-01

    Background: The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results: We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening.

  18. Estimation of the applicability domain of kernel-based machine learning models for virtual screening.

    Science.gov (United States)

    Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas

    2010-03-11

    The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. The proposed applicability domain formulations
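    One common kernel-based applicability-domain score is the distance of a query compound from the training-set centroid in the kernel-induced feature space; the sketch below implements that generic score and is not necessarily one of the paper's three formulations. The kernel, its bandwidth, and the data are assumptions.

```python
# Sketch: a generic kernel-based applicability-domain score, the squared
# distance of a query to the training mean in feature space:
#   d^2(x) = k(x,x) - 2/n sum_i k(x,x_i) + 1/n^2 sum_ij k(x_i,x_j).
# Compounds with large distances would be screened out first.
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ad_distance(x_query, X_train, kernel=rbf_kernel):
    Kxx = kernel(x_query, x_query).diagonal()
    Kxt = kernel(x_query, X_train)
    Ktt = kernel(X_train, X_train)
    return Kxx - 2 * Kxt.mean(axis=1) + Ktt.mean()

X_train = np.random.default_rng(0).normal(size=(100, 8))
queries = np.vstack([X_train[:1] + 0.1, np.full((1, 8), 5.0)])
print(ad_distance(queries, X_train))   # near-domain vs. far-from-domain
```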

  19. Moderate deviations principles for the kernel estimator of ...

    African Journals Online (AJOL)

    The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of confidence regions for the regression function.

  20. Norm, gender, and bribe-giving: Insights from a behavioral game.

    Directory of Open Access Journals (Sweden)

    Tian Lan

    Previous research has suggested that bribery is more normative in some countries than in others. To understand the underlying process, this paper examines the effects of social norm and gender on bribe-giving behavior. We argue that social norms provide information for strategic planning and impression management, and thus would impact participants' bribe amount. Besides, males are more agentic and focus more on impression management than females. We predicted that males would defy the norm in order to win when the amount of their bribe was kept private, but would conform to the norm when it was made public. To test this hypothesis, we conducted two studies using a competitive game. In each game, we asked three participants to compete in five rounds of creative tasks, and the winner was determined by a referee's subjective judgment of the participants' performance on the tasks. Participants were allowed to give bribes to the referee. Bribe-giving norms were manipulated in two domains: norm level (high vs. low) and norm context (private vs. public), in order to investigate the influence of informational and affiliational needs. Studies 1 and 2 consistently showed that individuals conformed to the norm level of bribe-giving while maintaining a relative advantage for economic benefit. Study 2 found that males gave larger bribes in the private context than in the public, whereas females gave smaller bribes in both contexts. We used a latent growth curve model (LGCM) to depict the development of bribe-giving behaviors during five rounds of competition. The results showed that gender, creative performance, and norm level all influence the trajectory of bribe-giving behavior.

  1. Norm, gender, and bribe-giving: Insights from a behavioral game.

    Science.gov (United States)

    Lan, Tian; Hong, Ying-Yi

    2017-01-01

    Previous research has suggested that bribery is more normative in some countries than in others. To understand the underlying process, this paper examines the effects of social norm and gender on bribe-giving behavior. We argue that social norms provide information for strategic planning and impression management, and thus would impact participants' bribe amount. Besides, males are more agentic and focus more on impression management than females. We predicted that males would defy the norm in order to win when the amount of their bribe was kept private, but would conform to the norm when it was made public. To test this hypothesis, we conducted two studies using a competitive game. In each game, we asked three participants to compete in five rounds of creative tasks, and the winner was determined by a referee's subjective judgment of the participants' performance on the tasks. Participants were allowed to give bribes to the referee. Bribe-giving norms were manipulated in two domains: norm level (high vs. low) and norm context (private vs. public), in order to investigate the influence of informational and affiliational needs. Studies 1 and 2 consistently showed that individuals conformed to the norm level of bribe-giving while maintaining a relative advantage for economic benefit. Study 2 found that males gave larger bribes in the private context than in the public, whereas females gave smaller bribes in both contexts. We used a latent growth curve model (LGCM) to depict the development of bribe-giving behaviors during five rounds of competition. The results showed that gender, creative performance, and norm level all influence the trajectory of bribe-giving behavior.

  2. Quantitative scenario analysis of low and intermediate level radioactive repository

    International Nuclear Information System (INIS)

    Lee, Keon Jae; Lee, Sang Yoon; Park, Keon Baek; Song, Min Cheon; Lee, Ho Jin

    1998-03-01

    Derivation of a hypothetical radioactive waste disposal facility is conducted through sub-component characteristic analysis and conceptual modeling. The constructed scenarios are then analyzed quantitatively in terms of annual effective dose equivalent. The study proceeds through the sequential steps of performance assessment of a radioactive waste disposal facility: groundwater flow analysis, source term analysis, groundwater transport, surface water transport, and dose and pathways analysis. The program chain VAM2D-PAGAN-GENII is used for the quantitative scenario analysis. Detailed data used in these modules come from experimental data for Korean territory and from default data supplied with the modules. Where data required for code execution are missing, values are estimated through reasonable engineering judgment.

  3. A semi-quantitative model for risk appreciation and risk weighing

    DEFF Research Database (Denmark)

    Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko

    2009-01-01

    Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-readable charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept … detailed information on the estimated health impact in a given exposure situation. These graphs will facilitate the discussions on appropriate risk reduction measures to be taken.

  4. Quantitative verification of ab initio self-consistent laser theory.

    Science.gov (United States)

    Ge, Li; Tandy, Robert J; Stone, A D; Türeci, Hakan E

    2008-10-13

    We generalize and test the recent "ab initio" self-consistent (AISC) time-independent semiclassical laser theory. This self-consistent formalism generates all the stationary lasing properties in the multimode regime (frequencies, thresholds, internal and external fields, output power and emission pattern) from simple inputs: the dielectric function of the passive cavity, the atomic transition frequency, and the transverse relaxation time of the lasing transition. We find that the theory gives excellent quantitative agreement with full time-dependent simulations of the Maxwell-Bloch equations after it has been generalized to drop the slowly-varying envelope approximation. The theory is infinite order in the non-linear hole-burning interaction; the widely used third order approximation is shown to fail badly.

  5. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those that were summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10 μm.
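    The single-city time-series regressions favored in this review are typically Poisson log-linear models of daily counts; the sketch below shows the general form on synthetic data, with all variable names, coefficients, and data as placeholders rather than any study's results.

```python
# Sketch: a single-city time-series regression relating daily morbidity
# counts to PM10 while adjusting for weather. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 365
df = pd.DataFrame({
    "pm10": rng.gamma(4, 10, n),            # daily PM10, ug/m3 (fake)
    "temp": 15 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365),
    "humidity": rng.uniform(30, 90, n),
})
lam = np.exp(1.0 + 0.002 * df.pm10 + 0.01 * df.temp)
df["admissions"] = rng.poisson(lam)         # simulated daily counts

model = smf.glm("admissions ~ pm10 + temp + humidity",
                data=df, family=sm.families.Poisson()).fit()
# percent change in daily admissions per 10 ug/m3 increase in PM10
print(f"{100 * (np.exp(10 * model.params['pm10']) - 1):.1f}% per 10 ug/m3")
```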

  6. Application of sensitivity analysis to a quantitative assessment of neutron cross-section requirements for the TFTR: an interim report

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.; Dudziak, D.J.; Muir, D.W.

    1975-09-01

    A computational method to determine cross-section requirements quantitatively is described and applied to the Tokamak Fusion Test Reactor (TFTR). In order to provide a rational basis for the priorities assigned to new cross-section measurements or evaluations, this method includes quantitative estimates of the uncertainty of currently available data, the sensitivity of important nuclear design parameters to selected cross sections, and the accuracy desired in predicting nuclear design parameters. Perturbation theory is used to combine estimated cross-section uncertainties with calculated sensitivities to determine the variance of any nuclear design parameter of interest
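    The perturbation-theory combination described above reduces to the familiar "sandwich rule", var(R) = S C Sᵀ, where S holds the sensitivities of a design parameter R to each cross section and C is the cross-section covariance matrix; the toy numbers below are invented for illustration.

```python
# Sketch: propagating a cross-section covariance matrix through sensitivity
# coefficients to get the variance of a design parameter R (sandwich rule).
import numpy as np

S = np.array([0.8, -0.3, 0.5])        # relative sensitivities (hypothetical)
C = np.array([[0.010, 0.002, 0.000],  # relative covariance of cross sections
              [0.002, 0.020, 0.001],  # (hypothetical values)
              [0.000, 0.001, 0.005]])

var_R = S @ C @ S
print(f"relative std. dev. of design parameter: {np.sqrt(var_R):.1%}")
```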

  7. REKF and RUKF for pico satellite attitude estimation in the presence of measurement faults

    Institute of Scientific and Technical Information of China (English)

    Halil Ersin Söken; Chingiz Hajiyev

    2014-01-01

    When a pico satellite is under normal operational conditions, whether it is extended or unscented, a conventional Kalman filter gives sufficiently good estimation results. However, if the measurements are not reliable because of any kind of malfunction in the estimation system, the Kalman filter gives inaccurate results and diverges over time. This study compares two different robust Kalman filtering algorithms, the robust extended Kalman filter (REKF) and the robust unscented Kalman filter (RUKF), for the case of measurement malfunctions. In both filters, by the use of a defined variable named the measurement noise scale factor, the faulty measurements are taken into consideration with a small weight, and the estimates are corrected without affecting the characteristics of the accurate ones. The proposed robust Kalman filters are applied to the attitude estimation process of a pico satellite, and the results are compared.
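    A minimal sketch of the measurement-noise scale-factor idea follows: if the normalized innovation exceeds a chi-square threshold, the measurement covariance is inflated before the update so the suspect measurement is down-weighted. This is a generic scalar illustration under assumed dynamics, not the exact REKF/RUKF equations.

```python
# Sketch: a Kalman measurement update with an adaptive measurement noise
# scale factor. When the normalized innovation squared (NIS) is too large,
# R is scaled up so the faulty measurement barely moves the estimate.
import numpy as np

def robust_update(x, P, z, H, R, chi2_thresh=3.84):
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    nis = float(y.T @ np.linalg.inv(S) @ y)    # normalized innovation squared
    if nis > chi2_thresh:                      # fault suspected:
        R = R * (nis / chi2_thresh)            # inflate measurement noise
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x, P = np.array([[0.0]]), np.array([[1.0]])
H, R = np.array([[1.0]]), np.array([[0.1]])
print(robust_update(x, P, np.array([[5.0]]), H, R))  # outlier down-weighted
```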

  8. Novel coherent receivers for AF distributed STBC using disintegrated channel estimation

    KAUST Repository

    Khan, Fahd Ahmed; Chen, Yunfei; Alouini, Mohamed-Slim

    2011-01-01

    For a single relay network, disintegrated channel estimation (DCE), where the source-relay channel is estimated at the relay and the relay-destination channel is estimated at the destination, gives better performance than the cascaded channel estimation. We derive novel receivers for the relay network with disintegrated channel estimation. The derived receivers do not require channel estimation at the destination, as they use the received pilot signals and the source-relay channel estimate for decoding directly. We also consider the effect of quantized source-relay channel estimate on the performance of the designed receivers. Simulation results show that a performance gain of up to 2.2 dB can be achieved by the new receivers, compared with the conventional mismatched coherent receiver with DCE. © 2011 IEEE.

  9. Novel coherent receivers for AF distributed STBC using disintegrated channel estimation

    KAUST Repository

    Khan, Fahd Ahmed

    2011-05-01

    For a single relay network, disintegrated channel estimation (DCE), where the source-relay channel is estimated at the relay and the relay-destination channel is estimated at the destination, gives better performance than the cascaded channel estimation. We derive novel receivers for the relay network with disintegrated channel estimation. The derived receivers do not require channel estimation at the destination, as they use the received pilot signals and the source-relay channel estimate for decoding directly. We also consider the effect of quantized source-relay channel estimate on the performance of the designed receivers. Simulation results show that a performance gain of up to 2.2 dB can be achieved by the new receivers, compared with the conventional mismatched coherent receiver with DCE. © 2011 IEEE.

  10. Reversible dementia: more than 10% or less than 1%? A quantitative review

    NARCIS (Netherlands)

    Weytingh, M. D.; Bossuyt, P. M.; van Crevel, H.

    1995-01-01

    Dementia is reversible in some cases and these should be diagnosed without over-investigating the many others with irreversible disease. To estimate how often dementia can be reversed, we carried out a quantitative review of studies reported between 1972 and 1994 in which reversible dementia was

  11. Charity Begins At Home: How Socialization Experiences Influence Giving and Volunteering

    OpenAIRE

    Bekkers, R.H.F.P.

    2005-01-01

    This paper shows that charity begins at home. Using retrospective reports on youth experiences from the Giving in the Netherlands Panel Survey (n=1,964, 2001) I find that (1) parents who volunteer when their children are young promote giving and volunteering of their children once they have become adults; (2) the intensity of youth participation in nonprofit organizations is positively related to current giving and volunteering; (3) that parental volunteering and youth participation promote c...

  12. An interview study of Gift-giving in China at New Year

    OpenAIRE

    Zhang, Shuo

    2007-01-01

    The purpose of this dissertation is to examine to what extent Chinese culture, including Chinese New Year customs, reciprocity, face and guanxi, influences gift-giving among Chinese people, and how these factors affect gift-giving behavior during Chinese New Year, using a qualitative research method with in-depth interviews and limited observation. This dissertation stemmed from observations of gift-giving in the Beijing institute of geological engineering (BIGE) in...

  13. Apollo Video Photogrammetry Estimation Of Plume Impingement Effects

    Science.gov (United States)

    Immer, Christopher; Lane, John; Metzger, Philip T.; Clements, Sandra

    2008-01-01

    The Constellation Project's planned return to the moon requires numerous landings at the same site. Since the top few centimeters are loosely packed regolith, plume impingement from the lander ejects the granular material at high velocities. Much work is needed to understand the physics of plume impingement during landing in order to protect hardware surrounding the landing sites. While mostly qualitative in nature, the Apollo Lunar Module landing videos can provide a wealth of quantitative information using modern photogrammetry techniques. The authors have used the digitized videos to quantify plume impingement effects of the landing exhaust on the lunar surface. The dust ejection angle from the plume is estimated at 1-3 degrees. The lofted particle density is estimated at 10⁸-10¹³ particles per cubic meter. Additionally, evidence for ejection of large 10-15 cm sized objects and a dependence of ejection angle on thrust are presented. Further work is ongoing to continue quantitative analysis of the landing videos.

  14. The good news about giving bad news to patients.

    Science.gov (United States)

    Farber, Neil J; Urban, Susan Y; Collier, Virginia U; Weiner, Joan; Polite, Ronald G; Davis, Elizabeth B; Boyer, E Gil

    2002-12-01

    There are few data available on how physicians inform patients about bad news, so we surveyed internists about their activities in giving bad news to patients. One set of questions was about activities for the emotional support of the patient (11 items), and the other was about activities for creating a supportive environment for delivering bad news (9 items). The impact of demographic factors on the performance of emotionally supportive items, environmentally supportive items, and on the number of minutes reportedly spent delivering the news was analyzed by analysis of variance and multiple regression analysis. More than half of the internists reported that they always or frequently performed 10 of the 11 emotionally supportive items and 6 of the 9 environmentally supportive items while giving bad news to patients. The average time reportedly spent in giving bad news was 27 minutes. Training in giving bad news, being a woman, being unmarried, and having a history of major illness were all associated with reporting a greater number of emotionally supportive activities. Internists report that they inform patients of bad news appropriately. Some deficiencies exist, specifically in discussing prognosis and referral of patients to support groups. Physician educational efforts should include discussion of prognosis with patients as well as the availability of support groups.

  15. Fishery characteristics and abundance estimates of the mangrove ...

    African Journals Online (AJOL)

    The mud crab Scylla serrata is lightly exploited along the East African seaboard. This study reports on fishing practices and gives preliminary estimates of abundance and size structures of the mud crab populations in Utende, Chole Island and Juani Island, Tanzania, and west of Quirimba and Ibo Island, Moçambique.

  16. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

    This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
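    For concreteness, the sketch below contrasts the t-interval with a z-type interval computed from the same small sample; the measurement values are invented, and the paper's proposed mean- or median-unbiased uncertainty estimator is not reproduced here.

```python
# Sketch: t-interval vs. z-interval half-widths for a small sample (n = 5),
# the setting in which the paper argues the t-interval is misused.
import numpy as np
from scipy import stats

x = np.array([9.8, 10.2, 10.1, 9.7, 10.3])   # hypothetical measurements
n, mean, s = len(x), x.mean(), x.std(ddof=1)

t_half = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)   # t-based half-width
z_half = stats.norm.ppf(0.975) * s / np.sqrt(n)       # z-based half-width
print(f"t-interval: {mean:.2f} ± {t_half:.2f}")
print(f"z-interval: {mean:.2f} ± {z_half:.2f}")
```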

  17. Students Can Give Psychology Away: Oral Presentations on YouTube

    Science.gov (United States)

    Malouff, John M.; Emmerton, Ashley J.

    2014-01-01

    This article describes a novel assignment involving students giving a presentation on YouTube about how to apply behavior-modification principles to change a specific type of behavior, chosen by each student. The presentations covered topics such as how to end nail biting and how to reduce anxiety about public speaking. Giving an oral presentation…

  18. Clinical use of estimated glomerular filtration rate for evaluation of kidney function

    DEFF Research Database (Denmark)

    Broberg, Bo; Lindhardt, Morten; Rossing, Peter

    2013-01-01

    Estimating the glomerular filtration rate by the Modification of Diet in Renal Disease (MDRD) or Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formulas gives a reasonable estimate of kidney function, e.g. for classification of chronic kidney disease. Additionally, the estimated glomerular filtration rate is a significant predictor of cardiovascular disease and may, along with classical cardiovascular risk factors, add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated …
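    As an illustration of such a formula, a sketch of the 2009 CKD-EPI creatinine equation (without the race coefficient) is given below; it follows the published coefficients as I understand them and is for illustration only, not clinical use.

```python
# Sketch: the 2009 CKD-EPI creatinine equation (race coefficient omitted).
def ckd_epi_egfr(scr_mg_dl: float, age: int, female: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

print(f"{ckd_epi_egfr(1.1, 60, female=False):.0f} mL/min/1.73 m^2")
```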

  19. QUANTITATIVE INDICATORS OF THE SECURITIZATION OF ASSETS

    Directory of Open Access Journals (Sweden)

    Denis VOSTRICOV

    2018-02-01

    Securitization is instrumental in increasing the return on capital: lending activities are removed from the balance sheet and replaced by off-balance-sheet fee income, which is less capital-intensive. The purpose of this paper is to analyze the quantitative indicators characterizing the securitization of assets. For drafting this article, the methods of analysis and synthesis, the logical and dialectical method, the normative method, the study of statistical samples and time series of expert evaluations (Standard and Poor's), personal observations, and monographic studies have been used. The main difference between the securitization of assets and traditional ways of financing is that securitization achieves a number of secondary goals in attracting financial resources, and these can play a significant role in choosing between the securitization of assets and other types of financing. In particular, securitization makes it possible to write off assets from the balance sheet along with the related obligations under the securities, to expand the range of potential investors while reducing credit risk, interest rate risk and liquidity risk, and to improve the quality of management of assets, liabilities and risks. All of these secondary effects are achieved by isolating the selected assets from the total credit risk of the enterprise raising funds, which underlies the relevance and significance of asset securitization. The article contains quantitative and qualitative indicators characterizing the securitization of assets.

  20. A quantitative calculation for software reliability evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young-Jun; Lee, Jang-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2016-10-15

    To meet regulatory requirements, software used in the nuclear safety field is assured through development, validation, safety analysis, and quality assurance activities throughout the entire process life cycle, from the planning phase to the installation phase. A variety of activities, such as quality assurance activities, are also required to improve the quality of the software. However, there are limits to how far quality can be assured in this way. Therefore, efforts continue to calculate the reliability of software for a quantitative evaluation instead of a qualitative one. In this paper, we propose a quantitative calculation method for software used for a specific operation of a digital controller in an NPP. After injecting random faults into the internal memory space of a developed controller and calculating the ability to detect the injected faults using diagnostic software, we can evaluate the software reliability of a digital controller in an NPP. We calculated the software reliability of the controller using a new method that differs from traditional ones: it calculates the fault detection coverage after injecting faults into the software memory space, rather than assessing activities through the life cycle process. We differentiate our approach by creating a new definition of the fault, imitating software faults using the hardware, and assigning consideration and weights to the injected faults.

  1. Estimating Heritability from Nuclear Family and Pedigree Data.

    Science.gov (United States)

    Bochud, Murielle

    2017-01-01

    Heritability is a measure of familial resemblance. Estimating the heritability of a trait could be one of the first steps in the gene mapping process. This chapter describes how to estimate heritability for quantitative traits from nuclear and pedigree data using the ASSOC program in the Statistical Analysis in Genetic Epidemiology (S.A.G.E.) software package. Estimating heritability rests on the assumption that the total phenotypic variance of a quantitative trait can be partitioned into independent genetic and environmental components. In turn, the genetic variance can be divided into an additive (polygenic) genetic variance, a dominance variance (nonlinear interaction effects between alleles at the same locus) and an epistatic variance (interaction effects between alleles at different loci). The last two are often assumed to be zero. The additive genetic variance represents the average effects of individual alleles on the phenotype and reflects transmissible resemblance between relatives. Heritability in the narrow sense (h²) refers to the ratio of the additive genetic variance to the total phenotypic variance. Heritability is a dimensionless population-specific parameter. ASSOC estimates association parameters (regression coefficients) and variance components from family data. ASSOC uses a linear regression model in which the total residual variance is partitioned, after regressing on covariates, into the sum of random components such as an additive polygenic component, a random sibship component, random nuclear family components, a random marital component, and an individual-specific random component. Assortative mating, nonrandom ascertainment of families, and failure to account for key confounding factors may bias heritability estimates.
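    The variance partition described above gives h² directly as a ratio; the sketch below just evaluates that ratio with made-up variance components of the kind ASSOC reports.

```python
# Sketch: narrow-sense heritability as the ratio of additive genetic variance
# to total phenotypic variance, h^2 = V_A / (V_A + V_C + V_E).
# All variance components below are hypothetical.
V_A = 0.45   # additive polygenic variance
V_C = 0.10   # shared (e.g. sibship/household) variance
V_E = 0.55   # individual-specific residual variance

h2 = V_A / (V_A + V_C + V_E)
print(f"narrow-sense heritability h^2 = {h2:.2f}")
```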

  2. Using data from respondent-driven sampling studies to estimate the number of people who inject drugs: Application to the Kohtla-Järve region of Estonia.

    Directory of Open Access Journals (Sweden)

    Jiacheng Wu

    Full Text Available Estimating the size of key risk populations is essential for determining the resources needed to implement effective public health intervention programs. Several standard methods for population size estimation exist, but the statistical and practical assumptions required for their use may not be met when applied to HIV risk groups. We apply three approaches to estimate the number of people who inject drugs (PWID in the Kohtla-Järve region of Estonia using data from a respondent-driven sampling (RDS study: the standard "multiplier" estimate gives 654 people (95% CI 509-804, the "successive sampling" method gives estimates between 600 and 2500 people, and a network-based estimate that uses the RDS recruitment chain gives between 700 and 2800 people. We critically assess the strengths and weaknesses of these statistical approaches for estimating the size of hidden or hard-to-reach HIV risk groups.
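For concreteness, a sketch of the standard multiplier estimate mentioned above: N ≈ M/p, where M is the known size of a marker subgroup (for example, a service or treatment registry) and p is the proportion of the RDS sample reporting that marker. The numbers below are made up, not the Kohtla-Järve data.

```python
# Sketch of the "multiplier" population-size estimate: N ~ M / p.
# Toy numbers only, not the study's data.
def multiplier_estimate(known_count, sample_with_marker, sample_size):
    p = sample_with_marker / sample_size   # marker prevalence in sample
    return known_count / p

print(round(multiplier_estimate(known_count=130,
                                sample_with_marker=100,
                                sample_size=500)))  # -> 650
```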

  3. Opportunities for measuring DNA synthesis time by quantitative autoradiography

    International Nuclear Information System (INIS)

    Vasileva, D.

    1980-01-01

DNA synthesis time (Tsub(s)) in cells of the canine erythropoiesis and myelopoiesis pools was determined by quantitative autoradiography according to Doermer. In contrast to the mitosis-labelling approach so far applied for Tsub(s) estimation, this technique uses well-differentiated cells. After blocking endogenous DNA synthesis with 5-fluorodeoxyuridine, its further course becomes dependent on the exogenous supply of thymidine, in the form of 14C-thymidine. From the incorporation of the latter into an individual cell within a definite time span (3-7 min), and taking into account its total amount, Tsub(s) may be calculated. The data thus obtained were found to agree with Tsub(s) values estimated from the labelled mitosis curve.

  4. Estimation and valuation in accounting

    Directory of Open Access Journals (Sweden)

    Cicilia Ionescu

    2014-03-01

Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. The International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a process used continually to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear: they are recorded in contracts with third parties, in supporting documents, etc. However, the uncertainties under which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items in the financial statements must be determined by using estimates.

5. Interleaved quantitative BOLD: Combining extravascular R2'- and intravascular R2-measurements for estimation of deoxygenated blood volume and hemoglobin oxygen saturation.

    Science.gov (United States)

    Lee, Hyunyeol; Englund, Erin K; Wehrli, Felix W

    2018-03-23

Quantitative BOLD (qBOLD), a non-invasive MRI method for assessment of hemodynamic and metabolic properties of the brain in the baseline state, provides spatial maps of deoxygenated blood volume fraction (DBV) and hemoglobin oxygen saturation (HbO2) by means of an analytical model for the temporal evolution of free-induction-decay signals in the extravascular compartment. However, mutual coupling between DBV and HbO2 in the signal model results in considerable estimation uncertainty, precluding achievement of a unique set of solutions. To address this problem, we developed an interleaved qBOLD method (iqBOLD) that combines extravascular R2' and intravascular R2 mapping techniques so as to obtain prior knowledge for the two unknown parameters. To achieve these goals, asymmetric spin echo and velocity-selective spin-labeling (VSSL) modules were interleaved in a single pulse sequence. Prior to VSSL, arterial blood and CSF signals were suppressed to produce reliable estimates for cerebral venous blood volume fraction (CBVv) as well as venous blood R2 (to yield HbO2). Parameter maps derived from the VSSL module were employed to initialize DBV and HbO2 in the qBOLD processing. Numerical simulations and in vivo experiments at 3 T were performed to evaluate the performance of iqBOLD in comparison to the parent qBOLD method. Data obtained in eight healthy subjects yielded plausible values averaging 60.1 ± 3.3% for HbO2, and 3.1 ± 0.5 and 2.0 ± 0.4% for DBV in gray and white matter, respectively. Furthermore, the results show that prior estimates of CBVv and HbO2 from the VSSL component enhance the solution stability in the qBOLD processing, and thus suggest the feasibility of iqBOLD as a promising alternative to the conventional technique for quantifying neurometabolic parameters. Copyright © 2018. Published by Elsevier Inc.

  6. Limitations for qualitative and quantitative neutron activation analysis using reactor neutrons

    International Nuclear Information System (INIS)

    El-Abbady, W.H.; El-Tanahy, Z.H.; El-Hagg, A.A.; Hassan, A.M.

    1999-01-01

In this work, the most important limitations for qualitative and quantitative analysis using reactor neutrons for activation are reviewed. Each limitation is discussed using different examples of activated samples. Photopeak estimation, nuclear reaction interference, and neutron flux measurements are taken into consideration. Solutions for high-accuracy evaluation in neutron activation analysis applications are given. (author)

  7. Automated Spatial Brain Normalization and Hindbrain White Matter Reference Tissue Give Improved [(18)F]-Florbetaben PET Quantitation in Alzheimer's Model Mice.

    Science.gov (United States)

    Overhoff, Felix; Brendel, Matthias; Jaworska, Anna; Korzhova, Viktoria; Delker, Andreas; Probst, Federico; Focke, Carola; Gildehaus, Franz-Josef; Carlsen, Janette; Baumann, Karlheinz; Haass, Christian; Bartenstein, Peter; Herms, Jochen; Rominger, Axel

    2016-01-01

Preclinical PET studies of β-amyloid (Aβ) accumulation are of growing importance, but comparisons between research sites require standardized and optimized methods for quantitation. Therefore, we aimed to systematically evaluate (1) the impact of an automated algorithm for spatial brain normalization, and (2) intensity scaling methods using different reference regions for Aβ-PET in a large dataset of transgenic mice. PS2APP mice in a 6-week longitudinal setting (N = 37) and another set of PS2APP mice at a histologically assessed narrow range of Aβ burden (N = 40) were investigated by [(18)F]-florbetaben PET. Manual spatial normalization by three readers at different training levels was performed prior to application of an automated brain spatial normalization, and inter-reader agreement was assessed by Fleiss kappa (κ). For this method, the impact of templates at different pathology stages was investigated. Four different reference regions for brain uptake normalization were used to calculate frontal cortical standardized uptake value ratios (SUVR_CTX/REF), relative to raw SUV_CTX. Results were compared on the basis of longitudinal stability (Cohen's d) and in reference to gold-standard histopathological quantitation (Pearson's R). Application of automated brain spatial normalization resulted in nearly perfect agreement between different readers (all κ ≥ 0.99), with constant or improved correlation with histology. Templates based on an inappropriate pathology stage resulted in up to 2.9% systematic bias in SUVR_CTX/REF. All SUVR_CTX/REF methods performed better than SUV_CTX with regard to both longitudinal stability (d ≥ 1.21 vs. d = 0.23) and histological gold-standard agreement (R ≥ 0.66 vs. R ≥ 0.31). Voxel-wise analysis suggested a physiologically implausible longitudinal decrease with global mean scaling. The hindbrain white matter reference (R_mean = 0.75) was slightly superior to the brainstem (R_mean = 0.74) and the cerebellum (R_mean = 0.73). Automated spatial brain normalization and hindbrain white matter reference scaling thus gave improved [(18)F]-florbetaben PET quantitation in these Alzheimer's model mice.
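The reference-region scaling underlying these comparisons reduces to a simple ratio; the sketch below computes SUVR_CTX/REF from toy voxel values (names and numbers are illustrative, not the study's data).

```python
# Minimal sketch of reference-tissue intensity scaling:
# SUVR_CTX/REF = mean(SUV in cortex) / mean(SUV in reference region).
import numpy as np

def suvr(ctx_voxels, ref_voxels):
    """Frontal cortical SUVR relative to a reference region."""
    return np.mean(ctx_voxels) / np.mean(ref_voxels)

ctx = np.array([1.32, 1.28, 1.41])   # toy frontal-cortex SUVs
ref = np.array([1.05, 0.98, 1.02])   # toy hindbrain white-matter SUVs
print(f"SUVR = {suvr(ctx, ref):.2f}")
```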

  8. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
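A hedged sketch of this kind of sampler: a Metropolis-style chain over (mass, CdA, Crr) against the road-load model F = m·a + m·g·Crr + ½·ρ·CdA·v². The drive data, noise scale, and step sizes below are invented for illustration, not the report's setup.

```python
# Metropolis-style sampling of vehicle parameters against a road-load
# model. Synthetic data and tuning values, invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
g, rho = 9.81, 1.2

true = np.array([15000.0, 6.0, 0.007])   # m [kg], CdA [m^2], Crr [-]
v = rng.uniform(5, 25, 200)              # logged speed [m/s]
a = rng.normal(0, 0.3, 200)              # logged acceleration [m/s^2]

def road_load(theta, v, a):
    m, cda, crr = theta
    return m * a + m * g * crr + 0.5 * rho * cda * v**2

F_meas = road_load(true, v, a) + rng.normal(0, 200, v.size)

def log_prob(theta):
    if np.any(theta <= 0):
        return -np.inf
    resid = F_meas - road_load(theta, v, a)
    return -0.5 * np.sum((resid / 200.0) ** 2)

theta, chain = np.array([12000.0, 5.0, 0.01]), []
for _ in range(20000):
    prop = theta + rng.normal(0, [100.0, 0.05, 1e-4])
    # Accept using the probability ratio of proposed to current state.
    if np.log(rng.uniform()) < log_prob(prop) - log_prob(theta):
        theta = prop
    chain.append(theta)
print(np.mean(chain[5000:], axis=0))  # posterior means near `true`
```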

  9. Comparing computing formulas for estimating concentration ratios

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Simpson, J.C.

    1984-03-01

    This paper provides guidance on the choice of computing formulas (estimators) for estimating concentration ratios and other ratio-type measures of radionuclides and other environmental contaminant transfers between ecosystem components. Mathematical expressions for the expected value of three commonly used estimators (arithmetic mean of ratios, geometric mean of ratios, and the ratio of means) are obtained when the multivariate lognormal distribution is assumed. These expressions are used to explain why these estimators will not in general give the same estimate of the average concentration ratio. They illustrate that the magnitude of the discrepancies depends on the magnitude of measurement biases, and on the variances and correlations associated with spatial heterogeneity and measurement errors. This paper also reports on a computer simulation study that compares the accuracy of eight computing formulas for estimating a ratio relationship that is constant over time and/or space. Statistical models appropriate for both controlled spiking experiments and observational field studies for either normal or lognormal distributions are considered. 24 references, 15 figures, 7 tables
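A quick numeric illustration of the disagreement among the three estimators (toy lognormal data with invented parameters, not from the report); here the true ratio varies with the soil level, so all three averages differ:

```python
# Toy lognormal example: the three common concentration-ratio
# estimators give different answers on the same data.
import numpy as np

rng = np.random.default_rng(2)
soil = rng.lognormal(mean=0.0, sigma=0.8, size=1000)
plant = 0.3 * soil**0.8 * rng.lognormal(mean=0.0, sigma=0.5, size=1000)

ratios = plant / soil
print("arithmetic mean of ratios:", np.mean(ratios))
print("geometric mean of ratios :", np.exp(np.mean(np.log(ratios))))
print("ratio of means           :", np.mean(plant) / np.mean(soil))
```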

  10. EKF composition estimation and GMC control of a reactive distillation column

    Science.gov (United States)

    Tintavon, Sirivimon; Kittisupakorn, Paisan

    2017-08-01

This research work proposes an extended Kalman filter (EKF) estimator to provide estimates of product composition and a generic model controller (GMC) to control the temperature of a reactive distillation column (RDC). One of the major difficulties in controlling the RDC is the large time delay in product composition measurement. Estimates of the product composition are therefore needed, and are determined from available and reliable tray temperature measurements via the EKF. With these estimates, the GMC controller is applied to control the RDC's temperature. The performance of the EKF estimator under GMC control is evaluated under various disturbances and set-point changes.
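A generic EKF predict/update skeleton of the kind the paper applies is sketched below; the process model f, measurement model h, Jacobians, and noise covariances are placeholders, not the paper's RDC model.

```python
# Generic EKF predict/update step. Models and covariances are
# placeholders, not the paper's reactive distillation column model.
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    # Predict: propagate state estimate and covariance through f.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the tray-temperature measurement z.
    H = H_jac(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy scalar usage: x stands in for a composition-like state, z for a
# measured tray temperature linearly related to it.
f = lambda x, u: 0.95 * x + 0.05 * u
h = lambda x: 2.0 * x
x, P = np.array([0.5]), np.eye(1)
x, P = ekf_step(x, P, u=np.array([0.6]), z=np.array([1.1]), f=f, h=h,
                F_jac=lambda x, u: np.array([[0.95]]),
                H_jac=lambda x: np.array([[2.0]]),
                Q=0.01 * np.eye(1), R=0.04 * np.eye(1))
print(x, P)
```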

  11. How to give a good talk.

    Science.gov (United States)

    Alon, Uri

    2009-10-23

    We depend on talks to communicate our work, and we spend much of our time as audience members in talks. However, few scientists are taught the well-established principles of giving good talks. Here, I describe how to prepare, present, and answer questions in a scientific talk. We will see how a talk prepared with a single premise and delivered with good eye contact is clear and enjoyable.

  12. Direct process estimation from tomographic data using artificial neural systems

    Science.gov (United States)

    Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.

    2001-07-01

The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, due to its low cost, non-intrusion, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used with their corresponding component fractions to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned simulated ECT and plant measurements. Results included in the paper show a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well-known direct ECT method.
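The processing chain (PCA reduction feeding a feed-forward network) can be sketched with scikit-learn as a stand-in for the paper's MLFFNN; the data, shapes, and network size below are illustrative assumptions (a 12-electrode sensor yields 66 independent electrode-pair measurements).

```python
# Sketch of the PCA + feed-forward-network chain, with scikit-learn
# standing in for the MLFFNN. Data and shapes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (500, 66))   # simulated ECT measurement sets
y = rng.dirichlet([2, 2], 500)     # toy gas/oil fractions (sum to 1)

model = make_pipeline(PCA(n_components=20),
                      MLPRegressor(hidden_layer_sizes=(32,),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))        # estimated component fractions
```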

  13. Estimation of Faults in DC Electrical Power System

    Science.gov (United States)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
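A minimal sketch of an ℓ1-regularized estimate of this kind: given a linear model z = Ax + noise with a sparse fault vector x, a lasso recovers the few nonzero fault states. The model matrix and data below are synthetic, not the ADAPT circuit model.

```python
# l1-regularized sparse fault estimation on a synthetic linear model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 40))       # linearized circuit/measurement model
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]       # two active faults
z = A @ x_true + rng.normal(0, 0.05, 80)

fault = Lasso(alpha=0.1).fit(A, z).coef_
print(np.nonzero(np.abs(fault) > 0.1)[0])  # indices of detected faults
```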

  14. Quantitative SRXRF analysis on the BL15U1 beamline at SSRF

    International Nuclear Information System (INIS)

    Zhang Yanle; Yu Xiaohan

    2010-01-01

In this paper, we first introduce two quantification methods for synchrotron radiation X-ray fluorescence (SRXRF) analysis, namely the fundamental parameters method and the Monte Carlo simulation method, as applied on the BL15U1 beamline (hard X-ray microprobe) at the Shanghai Synchrotron Radiation Facility (SSRF). The effectiveness of the two methods is demonstrated, and the XRF detection limits of the BL15U1 beamline are calculated. The results show that quantitative analysis at the ppm level can be performed using either method, with an accuracy of better than 10%. Although both methods are valid for SRXRF data analysis, the Monte Carlo method gives better results, as it compares the simulated spectrum with the experimental spectrum; this helps determine the experimental parameters and thus minimizes the error caused by incorrect parameters. Finally, the detection limits show that the BL15U1 beamline is capable of carrying out state-of-the-art XRF experiments. (authors)

  15. Evaluating Bounds and Estimators for Constants of Random Polycrystals Composed of Orthotropic Elastic Materials

    Energy Technology Data Exchange (ETDEWEB)

    Berryman, J. G.

    2012-03-01

While the well-known Voigt and Reuss (VR) bounds, and the Voigt-Reuss-Hill (VRH) elastic constant estimators, for random polycrystals are all straightforwardly calculated once the elastic constants of the anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder-to-compute HS bounds and SC estimators. An overview and analysis of a subsample of these results is presented here. The main point is to show whether this extra work (i.e., calculating both the HS bounds and the SC estimates) provides added value, since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small, however, being on the order of ±1%. These quantitative results hold true even though the polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.
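For reference, the straightforward half of the comparison is easy to reproduce: Voigt, Reuss, and Voigt-Reuss-Hill estimates of the polycrystal bulk modulus from a single-crystal stiffness matrix. The example matrix below is a generic orthotropic-like stiffness, not one of the paper's eight cases; the HS bounds and SC estimators require the iterative machinery discussed above.

```python
# Voigt, Reuss, and Voigt-Reuss-Hill bulk-modulus estimates from a
# 6x6 stiffness matrix C in Voigt notation (GPa). Example C is generic.
import numpy as np

C = np.array([[192.,  66.,  60.,  0.,  0.,  0.],
              [ 66., 160.,  56.,  0.,  0.,  0.],
              [ 60.,  56., 272.,  0.,  0.,  0.],
              [  0.,   0.,   0., 60.,  0.,  0.],
              [  0.,   0.,   0.,  0., 62.,  0.],
              [  0.,   0.,   0.,  0.,  0., 49.]])
S = np.linalg.inv(C)  # compliance matrix

K_voigt = C[:3, :3].sum() / 9.0          # (C11+C22+C33+2(C12+C13+C23))/9
K_reuss = 1.0 / S[:3, :3].sum()          # 1/(S11+S22+S33+2(S12+S13+S23))
K_vrh = 0.5 * (K_voigt + K_reuss)        # Hill average of the two
print(f"K_V={K_voigt:.1f}  K_R={K_reuss:.1f}  K_VRH={K_vrh:.1f} GPa")
```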

  16. Methodology development for the radioecological monitoring effectiveness estimation

    International Nuclear Information System (INIS)

    Gusev, A.E.; Kozlov, A.A.; Lavrov, K.N.; Sobolev, I.A.; Tsyplyakova, T.P.

    1997-01-01

A general model is described for estimating the effectiveness of programs assuring the radiation and ecological protection of the public. A set of purposes and criteria is selected that characterizes, and makes it possible to estimate, the effectiveness of the composition of environmental protection programs. An algorithm is considered for selecting the optimal management decision from the viewpoint of the cost of work connected with improving population protection. The position of radiation-ecological monitoring within the general problem of environmental pollution is determined. It is shown that the effectiveness of organizing the monitoring is closely connected with the radiation and ecological protection of the population.

  17. Statistical Estimation for CAPM with Long-Memory Dependence

    Directory of Open Access Journals (Sweden)

    Tomoyuki Amano

    2012-01-01

Full Text Available We investigate the Capital Asset Pricing Model (CAPM) with a time dimension. Using time series analysis, we discuss the estimation of the CAPM when the market portfolio and the error process are long-memory processes and correlated with each other. We give a sufficient condition for the asset returns in the CAPM to be short memory. In this setting, we propose a two-stage least squares estimator for the regression coefficient and derive its asymptotic distribution. Some numerical studies are given; they show an interesting feature of this model.
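A generic two-stage least squares sketch (not the paper's long-memory setting or its estimator): stage 1 projects the endogenous regressor onto the instruments, stage 2 regresses the asset return on the fitted values. All data below are synthetic.

```python
# Generic 2SLS on synthetic data: stage 1 projects x onto instruments,
# stage 2 regresses y on the fitted values. True beta is 1.2.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=(n, 1))              # instrument
u = rng.normal(size=n)                   # common noise -> endogeneity
x = 0.8 * z[:, 0] + 0.6 * u              # endogenous market return
y = 1.2 * x + u + rng.normal(size=n)     # asset return

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])

X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # stage 1
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # stage 2
print(f"2SLS beta ~ {beta[1]:.2f} (plain OLS would be biased upward)")
```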

  18. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    Science.gov (United States)

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. These metrics are used
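The overlap idea behind these metrics can be illustrated schematically: compare the 95% CI of a molecular divergence estimate with the stratigraphic age range of the oldest fossil. The sketch below is only an illustration of that comparison, not the published MDI or DIG formulas.

```python
# Schematic only (not the published MDI/DIG formulas): a molecular CI
# entirely younger than the fossil age range is incongruent; an older
# or overlapping one is consistent with an incomplete fossil record.
def incongruence(mol_ci, fossil_range):
    mol_young, mol_old = mol_ci            # Ma, younger..older bound
    fossil_young, fossil_old = fossil_range
    if mol_old < fossil_young:             # estimate too young: conflict
        return fossil_young - mol_old      # minimum implied incongruence
    return 0.0                             # overlapping or older: congruent

print(incongruence((60.0, 72.0), (74.0, 80.0)))  # -> 2.0 Myr
```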

  19. Benefits of dynamic mobility applications : preliminary estimates from the literature.

    Science.gov (United States)

    2012-12-01

    This white paper examines the available quantitative information on the potential mobility benefits of the connected vehicle Dynamic Mobility Applications (DMA). This work will be refined as more and better estimates of benefits from mobility applica...

  20. The Source Signature Estimator - System Improvements and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sabel, Per; Brink, Mundy; Eidsvig, Seija; Jensen, Lars

    1998-12-31

This presentation relates briefly to the first part of the joint project on post-survey analysis of shot-by-shot based source signature estimation. The improvements of a Source Signature Estimator system are analysed. The notional source method can give suboptimal results when the real array geometry, i.e. the actual separations between the sub-arrays of an air gun array, is not input to the notional source algorithm. This constraint has been addressed herein and was implemented in the field for the first time in summer 1997. The second part of this study will show the potential advantages for interpretation when the signature estimates are applied in the data processing. 5 refs., 1 fig.