WorldWideScience

Sample records for methods a comparison study

  1. A simple statistical method for catch comparison studies

    DEFF Research Database (Denmark)

    Holst, René; Revill, Andrew

    2009-01-01

    For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns.
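
    The modelling idea can be sketched in a few lines. The following is a simplified, fixed-effects illustration only: it fits a polynomial logistic curve to binomial catch-at-length counts via Newton-IRLS on synthetic data, whereas the paper's actual method additionally includes haul-level random effects (a GLMM). All data and settings below are invented for illustration.

```python
import numpy as np

def fit_polynomial_logit(length, n_test, n_total, degree=2, iters=25):
    """Fit p(length) = logistic(polynomial(length)) to binomial counts
    via Newton-IRLS. A fixed-effects simplification of the paper's GLMM."""
    # Standardise length and build a polynomial design matrix: 1, x, x^2, ...
    x = (length - length.mean()) / length.std()
    X = np.vander(x, degree + 1, increasing=True)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        W = n_total * p * (1.0 - p)              # binomial IRLS weights
        grad = X.T @ (n_test - n_total * p)      # score vector
        hess = X.T @ (X * W[:, None])            # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta, X

# Synthetic catch-at-length data (illustrative only, not the trawl data).
rng = np.random.default_rng(0)
length = np.linspace(20, 60, 30)
true_p = 1 / (1 + np.exp(-(length - 40) / 8))    # test gear retains larger fish
n_total = rng.integers(30, 80, size=length.size)
n_test = rng.binomial(n_total, true_p)

beta, X = fit_polynomial_logit(length, n_test, n_total)
p_hat = 1 / (1 + np.exp(-(X @ beta)))            # fitted proportion in test codend
```

    The fitted curve `p_hat` is the continuous proportion-at-length comparison between the two gears; a confidence band would come from the curvature (Hessian) of the fit.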

  2. A comparison of methods in a behaviour study of the South African ...

    African Journals Online (AJOL)

    A comparison of methods in a behaviour study of the ... Three methods are outlined in this paper and the results obtained from each method were ... There was definitely no aggressive response towards the sky-pointing mate.

  3. Soybean allergen detection methods--a comparison study

    DEFF Research Database (Denmark)

    Pedersen, M. Højgaard; Holzhauser, T.; Bisson, C.

    2008-01-01

    Soybean containing products are widely consumed, thus reliable methods for detection of soy in foods are needed in order to make appropriate risk assessment studies to adequately protect soy allergic patients. Six methods were compared using eight food products with a declared content of soy...

  4. Extracting DNA from ocean microplastics: a method comparison study

    NARCIS (Netherlands)

    Debeljak, P.; Proietti, M.; Reisser, J.; Ferrari, F.F.; Abbas, B.; van Loosdrecht, M.C.M.; Slat, B.; Herndl, G.J.

    2017-01-01

    The ubiquity of plastics in oceans worldwide raises concerns about their ecological implications. Suspended microplastics (<5 mm) can be ingested by a wide range of marine organisms and may accumulate up the food web along with associated chemicals. Additionally, plastics provide a stable

  5. Extracting DNA from ocean microplastics: A method comparison study

    NARCIS (Netherlands)

    Debeljak, Pavla; Pinto, M.J.; Proietti, Maira; Reisser, Julia; Ferrari, Francesco F.; Abbas, B.A.; van Loosdrecht, Mark C.M.; Slat, Boyan; Herndl, Gerhard J.

    2017-01-01

    The ubiquity of plastics in oceans worldwide raises concerns about their ecological implications. Suspended microplastics (<5 mm) can be ingested by a wide range of marine organisms and may accumulate up the food web along with associated chemicals. Additionally, plastics provide a stable

  6. Comparison of three different prehospital wrapping methods for preventing hypothermia - a crossover study in humans

    Directory of Open Access Journals (Sweden)

    Zakariassen Erik

    2011-06-01

    Background: Accidental hypothermia increases mortality and morbidity in trauma patients. Various methods for insulating and wrapping hypothermic patients are used worldwide. The aim of this study was to compare the thermal insulating effects and comfort of bubble wrap, ambulance blankets/quilts, and Hibler's method, a low-cost method combining a plastic outer layer with an insulating layer. Methods: Eight volunteers were dressed in moistened clothing, exposed to a cold and windy environment, and then wrapped using one of the three insulation methods in random order on three different days. They rested quietly on their backs for 60 minutes in a cold climatic chamber. Skin temperature, rectal temperature and oxygen consumption were measured, and metabolic heat production was calculated. A questionnaire was used for a subjective evaluation of comfort, thermal sensation, and shivering. Results: Skin temperature was significantly higher 15 minutes after wrapping with Hibler's method than with ambulance blankets/quilts or bubble wrap. There were no differences in core temperature between the three insulating methods. The subjects reported more shivering, felt colder, were more uncomfortable, and had increased heat production when using bubble wrap compared with the other two methods. Hibler's method was the volunteers' preferred method for preventing hypothermia. Bubble wrap was the least effective insulating method and seemed to require significantly higher heat production to compensate for increased heat loss. Conclusions: This study demonstrated that a combination of a vapour-tight layer and an additional dry insulating layer (Hibler's method) is the most efficient wrapping method to prevent heat loss, as shown by increased skin temperatures, lower metabolic rate and better thermal comfort. This should be the method of choice when wrapping a wet patient at risk of developing hypothermia in prehospital
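
    The abstract mentions calculating metabolic heat production from measured oxygen consumption. A common indirect-calorimetry approximation (not necessarily the study's exact formula) uses a caloric equivalent of roughly 20.1 kJ per litre of O2; the exact value varies with the respiratory exchange ratio, so treat the constant below as an assumption.

```python
def metabolic_heat_watts(vo2_l_per_min, kj_per_litre_o2=20.1):
    """Approximate metabolic heat production (W) from oxygen uptake (L/min).
    ~20.1 kJ/L O2 is a typical caloric equivalent; it varies with RER."""
    return vo2_l_per_min * kj_per_litre_o2 * 1000.0 / 60.0  # kJ/min -> J/s

# A resting oxygen uptake of 0.30 L/min corresponds to roughly 100 W.
resting_heat = metabolic_heat_watts(0.30)
```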

  7. A comparison of five methods of measuring mammographic density: a case-control study.

    Science.gov (United States)

    Astley, Susan M; Harkness, Elaine F; Sergeant, Jamie C; Warwick, Jane; Stavrinos, Paula; Warren, Ruth; Wilson, Mary; Beetles, Ursula; Gadde, Soujanya; Lim, Yit; Jain, Anil; Bundred, Sara; Barr, Nicola; Reece, Valerie; Brentnall, Adam R; Cuzick, Jack; Howell, Tony; Evans, D Gareth

    2018-02-05

    High mammographic density is associated with both risk of cancers being missed at mammography, and increased risk of developing breast cancer. Stratification of breast cancer prevention and screening requires mammographic density measures predictive of cancer. This study compares five mammographic density measures to determine the association with subsequent diagnosis of breast cancer and the presence of breast cancer at screening. Women participating in the "Predicting Risk Of Cancer At Screening" (PROCAS) study, a study of cancer risk, completed questionnaires to provide personal information to enable computation of the Tyrer-Cuzick risk score. Mammographic density was assessed by visual analogue scale (VAS), thresholding (Cumulus) and fully-automated methods (Densitas, Quantra, Volpara) in contralateral breasts of 366 women with unilateral breast cancer (cases) detected at screening on entry to the study (Cumulus 311/366) and in 338 women with cancer detected subsequently. Three controls per case were matched using age, body mass index category, hormone replacement therapy use and menopausal status. Odds ratios (OR) between the highest and lowest quintile, based on the density distribution in controls, for each density measure were estimated by conditional logistic regression, adjusting for classic risk factors. The strongest predictor of screen-detected cancer at study entry was VAS, OR 4.37 (95% CI 2.72-7.03) in the highest vs lowest quintile of percent density after adjustment for classical risk factors. Volpara, Densitas and Cumulus gave ORs for the highest vs lowest quintile of 2.42 (95% CI 1.56-3.78), 2.17 (95% CI 1.41-3.33) and 2.12 (95% CI 1.30-3.45), respectively. Quantra was not significantly associated with breast cancer (OR 1.02, 95% CI 0.67-1.54). Similar results were found for subsequent cancers, with ORs of 4.48 (95% CI 2.79-7.18), 2.87 (95% CI 1.77-4.64) and 2.34 (95% CI 1.50-3.68) in highest vs lowest quintiles of VAS, Volpara and Densitas
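
    The study reports odds ratios between the highest and lowest density quintiles from conditional logistic regression with matched controls. As a simpler, unadjusted illustration of the same quantity, the sketch below computes an odds ratio and its Wald 95% confidence interval from a 2x2 table; the counts are invented, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a, b = cases in top / bottom density quintile; c, d = controls likewise.
    (The study itself uses conditional logistic regression with matching.)"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(60, 20, 150, 180)
```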

  8. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    Science.gov (United States)

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
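
    The U-statistic machinery referenced above builds on the familiar identity that the empirical ROC area equals a two-sample Mann-Whitney statistic. A minimal sketch of that building block (not the split-plot extension itself):

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Empirical ROC area as a two-sample U-statistic (Mann-Whitney):
    the proportion of (diseased, non-diseased) pairs ranked correctly,
    counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

# Toy reader scores for diseased vs. non-diseased cases (illustrative).
auc = empirical_auc([0.9, 0.8, 0.7], [0.6, 0.8, 0.2])
```

    The multireader methods in the paper aggregate such per-reader AUCs and estimate their variance components across readers and cases.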

  9. Comparison of two inductive learning methods: A case study in failed fuel identification

    International Nuclear Information System (INIS)

    Reifman, J.; Lee, J.C.

    1992-01-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure
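
    Both algorithms use information entropy as their discriminatory measure. The sketch below shows that measure and ID3's split criterion (information gain) on a toy, invented dataset loosely styled on failed-fuel symptoms; it is not the reactor data or the full ID3/Rg implementation.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of class labels, the discriminatory measure in ID3/Rg."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction from splitting on one attribute (ID3's criterion)."""
    n = len(labels)
    subsets = {}
    for row, lab in zip(rows, labels):
        subsets.setdefault(row[attribute], []).append(lab)
    return entropy(labels) - sum(len(s) / n * entropy(s)
                                 for s in subsets.values())

# Toy example: two symptoms, binary class (purely illustrative).
rows = [{"temp": "hi", "noise": "y"}, {"temp": "hi", "noise": "n"},
        {"temp": "lo", "noise": "y"}, {"temp": "lo", "noise": "n"}]
labels = ["fail", "fail", "ok", "ok"]
gain_temp = information_gain(rows, labels, "temp")   # a perfect split
```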

  10. Comparison of two inductive learning methods: A case study in failed fuel identification

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J. [Argonne National Lab., IL (United States); Lee, J.C. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering

    1992-05-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure.

  11. Comparison of two inductive learning methods: A case study in failed fuel identification

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J. (Argonne National Lab., IL (United States)); Lee, J.C. (Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering)

    1992-01-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure.

  12. Mapping biomass with remote sensing: a comparison of methods for the case study of Uganda

    Directory of Open Access Journals (Sweden)

    Henry Matieu

    2011-10-01

    Background: Assessing biomass is gaining increasing interest, mainly for bioenergy, climate change research and mitigation activities such as reducing emissions from deforestation and forest degradation and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries (REDD+). In response to these needs, a number of biomass/carbon maps have recently been produced using different approaches, but the lack of comparable reference data limits their proper validation. The objectives of this study are to compare the available maps for Uganda and to understand the sources of variability in the estimation. Uganda was chosen as a case study because it has a reliable national biomass reference dataset. Results: The comparison of the biomass/carbon maps shows strong disagreement between the products, with estimates of the total aboveground biomass of Uganda ranging from 343 to 2201 Tg and different spatial distribution patterns. Compared to the reference map based on country-specific field data and a national Land Cover (LC) dataset (estimating 468 Tg), maps based on biome-average biomass values, such as the Intergovernmental Panel on Climate Change (IPCC) default values, and global LC datasets tend to strongly overestimate the biomass availability of Uganda (ranging from 578 to 2201 Tg), while maps based on satellite data and regression models provide conservative estimates (ranging from 343 to 443 Tg). The comparison of the maps' predictions with field data, upscaled to map resolution using LC data, is in accordance with the above findings. This study also demonstrates that the biomass estimates are primarily driven by the biomass reference data, while the type of spatial maps used for their stratification has a smaller, but not negligible, impact. The differences in format, resolution and biomass definition used by the maps, as well as the fact that some datasets are not independent from the

  13. Comparison of different methods for work accidents investigation in hospitals: A Portuguese case study.

    Science.gov (United States)

    Nunes, Cláudia; Santos, Joana; da Silva, Manuela Vieira; Lourenço, Irina; Carvalhais, Carlos

    2015-01-01

    The hospital environment presents many occupational health risks that predispose healthcare workers to various kinds of work accidents. This study aims to compare different methods for work accident investigation and to verify their suitability in the hospital environment. For this purpose, we selected three types of accidents, related to needle sticks, worker falls and inadequate effort/movement during the mobilization of patients. A total of thirty accidents were analysed with six different work accident investigation methods. The results showed that organizational factors were the group of causes with the greatest impact across the three types of work accidents. The methods compared in this paper are applicable and appropriate for work accident investigation in hospitals. However, the Registration, Research and Analysis of Work Accidents (RIAAT) method proved to be an optimal technique to use in this context.

  14. Quantitative methods for reconstructing tissue biomechanical properties in optical coherence elastography: a comparison study

    International Nuclear Information System (INIS)

    Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Larin, Kirill V; Aglyamov, Salavat R; Twa, Michael D

    2015-01-01

    We present a systematic analysis of the accuracy of different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique which allows assessment of the biomechanical properties of tissues with micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of four available methods for reconstructing elasticity (Young's modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other, simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods.
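
    The simplest of the compared models, the shear-wave-equation approach, converts a measured wave speed to Young's modulus in one line. The sketch below assumes a nearly incompressible, homogeneous, isotropic medium with illustrative parameter values; the paper's point is precisely that such simplified conversions can misestimate elasticity.

```python
def youngs_modulus_swe(c_shear, density=1000.0, poisson=0.499):
    """Shear-wave-equation estimate: E = 2 * (1 + nu) * rho * c^2 (Pa).
    Assumes a nearly incompressible, homogeneous, isotropic medium."""
    return 2.0 * (1.0 + poisson) * density * c_shear ** 2

# An elastic wave travelling ~2 m/s in a soft phantom implies E ~ 12 kPa.
E = youngs_modulus_swe(2.0)
```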

  15. Gene Expression Profiles for Predicting Metastasis in Breast Cancer: A Cross-Study Comparison of Classification Methods

    Directory of Open Access Journals (Sweden)

    Mark Burton

    2012-01-01

    Machine learning has increasingly been used with microarray gene expression data for the development of classifiers using a variety of methods. However, method comparisons on cross-study datasets are very scarce. This study compares the performance of seven classification methods, and the effect of voting, for predicting metastasis outcome in breast cancer patients in three situations: within the same dataset, or across datasets on similar or dissimilar microarray platforms. Combining the classification results from the seven classifiers into one voting decision performed significantly better during internal validation, as well as during external validation on similar microarray platforms, than the underlying classification methods. When validating between different microarray platforms, random forest, another voting-based method, proved to be the best-performing method. We conclude that voting-based classifiers provided an advantage with respect to classifying metastasis outcome in breast cancer patients.
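
    The voting step described above is simple to state precisely: each classifier casts one vote per sample and the majority label wins. A minimal sketch (the seven classifiers and their predictions are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions for one sample by majority vote.
    With an odd number of binary classifiers, no tie-break is needed."""
    return Counter(predictions).most_common(1)[0][0]

# Seven hypothetical classifiers predicting metastasis (1) vs. none (0).
vote = majority_vote([1, 0, 1, 1, 0, 1, 1])
```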

  16. A study on asymptomatic bacteriuria in pregnancy: prevalence, etiology and comparison of screening methods

    OpenAIRE

    Kheya Mukherjee; Saroj Golia; Vasudha CL; Babita; Debojyoti Bhattacharjee; Goutam Chakroborti

    2014-01-01

    Background: Asymptomatic bacteriuria is common in women, with a prevalence of 4-7% in pregnancy. The traditional reference test for bacteriuria is quantitative culture of urine, which is relatively expensive, time-consuming and laborious. The aim of this study was to determine the prevalence of asymptomatic bacteriuria in pregnancy, to identify pathogens and their antibiotic susceptibility patterns, and to devise a single or combined rapid screening method as an acceptable alternative to urine culture. ...
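
    Judging a rapid screen against the culture reference standard comes down to the usual 2x2-table measures. A small sketch with invented counts (not the study's results):

```python
def screening_performance(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values of a rapid screening
    test against the culture reference standard (counts are illustrative)."""
    return {
        "sensitivity": tp / (tp + fn),   # screen-positive among culture-positive
        "specificity": tn / (tn + fp),   # screen-negative among culture-negative
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

perf = screening_performance(tp=18, fp=10, fn=2, tn=170)
```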

  17. A comparison between NASCET and ECST methods in the study of carotids

    International Nuclear Information System (INIS)

    Saba, Luca; Mallarini, Giorgio

    2010-01-01

    Purpose: The NASCET and ECST systems for quantifying carotid artery stenosis use percent diameter ratios from conventional angiography. With Multi-Detector-Row CT scanners it is possible to easily measure plaque area and residual lumen in order to calculate the degree of carotid stenosis. Our purpose was to compare the NASCET and ECST techniques in the measurement of carotid stenosis degree using MDCTA. Methods and material: From February 2007 to October 2007, 83 non-consecutive patients (68 males; 15 females) were studied using Multi-Detector-Row CT. Each patient was assessed by two experienced radiologists for stenosis degree using both the NASCET and ECST methods. Statistical analysis was performed to determine the degree of correlation (Pearson's method) between NASCET and ECST. The Cohen kappa test and Bland-Altman analysis were applied to assess the level of inter- and intra-observer agreement. Results: The Pearson correlation coefficient between NASCET and ECST was 0.962 (p < 0.01). Intra-observer agreement in the NASCET evaluation, using the Cohen kappa statistic, was 0.844 and 0.825. Intra-observer agreement in the ECST evaluation was 0.871 and 0.836. Inter-observer agreement in the NASCET and ECST evaluations was 0.822 and 0.834, respectively. Agreement analysis using Bland-Altman plots showed good intra-/inter-observer agreement for NASCET and optimal intra-/inter-observer agreement for ECST. Conclusions: The results of our study suggest that the NASCET and ECST methods show a strong correlation according to quadratic regression. Intra-observer agreement was high for both NASCET and ECST.
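
    Both grading systems apply the same ratio with a different reference diameter, which is why the two scores differ for the same lesion. A sketch with illustrative measurements (the diameters below are invented, not patient data):

```python
def stenosis_percent(residual_lumen, reference_diameter):
    """Percent stenosis = (1 - d/D) * 100.
    NASCET: D = diameter of the normal distal internal carotid artery.
    ECST:   D = estimated original (outer) diameter at the carotid bulb."""
    return (1.0 - residual_lumen / reference_diameter) * 100.0

d = 2.0                               # residual lumen, mm (illustrative)
nascet = stenosis_percent(d, 5.0)     # distal ICA 5 mm  -> 60% stenosis
ecst = stenosis_percent(d, 8.0)       # bulb estimate 8 mm -> 75% stenosis
```

    Because the ECST reference diameter (the bulb) is larger, ECST grades the same lesion as more severe, so the two scales correlate strongly but are not interchangeable.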

  18. A comparison study of size-specific dose estimate calculation methods

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, Roshni A. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children's Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)

    2018-01-15

    The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. To compare the accuracy of thickness vs. weight measurement of body size to allow for the calculation of the size-specific dose estimate (SSDE) in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically most thin, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically most thick; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide
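
    The SSDE calculation itself is a size-dependent scaling of CTDIvol. The sketch below uses the exponential conversion-factor fit for the 32-cm phantom as given in AAPM Report 204 (coefficients quoted from memory of that report and hedged accordingly; the patient dimensions are invented for illustration).

```python
import math

def effective_diameter(ap_cm, lat_cm):
    """Geometric mean of anteroposterior and lateral dimensions (cm)."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol, diameter_cm):
    """SSDE = f(d) * CTDIvol, with the exponential conversion-factor fit
    for the 32-cm reference phantom from AAPM Report 204; verify the
    coefficients against the report before any real use."""
    f = 3.704369 * math.exp(-0.03671937 * diameter_cm)
    return f * ctdi_vol

# A hypothetical small paediatric torso: 18 cm AP x 24 cm lateral.
d = effective_diameter(18.0, 24.0)
dose = ssde(ctdi_vol=5.0, diameter_cm=d)   # SSDE exceeds CTDIvol for small patients
```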

  19. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    Due to the high number of variables in data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related. Variable selection (or removal of irrelevant ... Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with Jack-knifing [2] was applied to the data in order to achieve variable selection prior ... The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when ...

  20. Comparison of methods for calculating the health costs of endocrine disrupters: a case study on triclosan.

    Science.gov (United States)

    Prichystalova, Radka; Fini, Jean-Baptiste; Trasande, Leonardo; Bellanger, Martine; Demeneix, Barbara; Maxim, Laura

    2017-06-09

    Socioeconomic analysis is currently used in the European Union as part of the regulatory process under the Registration, Evaluation and Authorisation of Chemicals (REACH) Regulation, with the aim of assessing and managing risks from dangerous chemicals. The potential political impact of socio-economic analysis in the authorisation and restriction procedures is high; however, current socio-economic analysis dossiers submitted under REACH are very heterogeneous in the methodology used and in quality. Furthermore, the economic literature is not very helpful for regulatory purposes, as most published calculations of health costs associated with chemical exposures use epidemiological studies as input data, but such studies are rarely available for most substances. Almost all of the data used in the REACH dossiers come from toxicological studies. This paper assesses the use of integrated probabilistic risk assessment, based on toxicological data, for the calculation of health costs associated with endocrine disrupting effects of triclosan. The results are compared with those obtained using the population attributable fraction, based on epidemiological data. The results based on integrated probabilistic risk assessment indicated that 4894 men could have reproductive deficits based on the decreased vas deferens weights observed in rats, 0 cases of changed T3 levels, and 0 cases of girls with early pubertal development. The results obtained with the population attributable fraction method showed 7,199,228 cases of obesity per year, 281,923 girls per year with early pubertal development and 88,957 to 303,759 cases per year with increased total T3 hormone levels. The economic costs associated with increased BMI due to TCS exposure could be calculated. Direct health costs were estimated at €5.8 billion per year. The two methods give very different results for the same effects. The choice of a toxicological-based or an epidemiological-based method in the
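
    The epidemiology-based side of the comparison rests on the population attributable fraction. A minimal sketch of Levin's formula with invented inputs (not the triclosan figures from the paper):

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1)), the fraction
    of cases in a population attributable to the exposure."""
    x = prevalence * (relative_risk - 1.0)
    return x / (1.0 + x)

# Hypothetical: 40% exposed, RR = 1.5 -> about 1/6 of cases attributable.
paf = population_attributable_fraction(prevalence=0.4, relative_risk=1.5)
attributable_cases = paf * 1_000_000   # in a hypothetical million cases
```

    Multiplying the attributable case count by a per-case cost then yields the health-cost estimates that the paper contrasts with the toxicology-based probabilistic approach.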

  1. Comparison of artificial inoculation methods for studying ...

    African Journals Online (AJOL)

    2013-05-01

    ... atomizer; injection of spore suspension onto the plant's surface or into the intercellular air spaces of a ... Seeds of brown mustard B. juncea were sown in plastic inserts (7.5 x 5 cm; 2 seeds per insert) containing ... system of the leaf should be avoided for injection. Detached leaves were kept in sealed Petri ...

  2. A comparison of interface tracking methods

    International Nuclear Information System (INIS)

    Kothe, D.B.; Rider, W.J.

    1995-01-01

    In this paper we provide a direct comparison of several important algorithms designed to track fluid interfaces, and in the process propose improved criteria by which these methods are to be judged. We compare and contrast the behavior of the following interface tracking methods: high-order monotone capturing schemes, level set methods, volume-of-fluid (VOF) methods, and particle-based (particle-in-cell, or PIC) methods. We compare these methods by first applying a set of standard test problems, then by applying a new set of enhanced problems designed to expose the limitations and weaknesses of each method. We find that the properties of these methods are not adequately assessed until they are tested with flows having spatial and temporal vorticity gradients. Our results indicate that the particle-based methods are easily the most accurate of those tested. Their practical use, however, is often hampered by their memory and CPU requirements. Particle-based methods employing particles only along interfaces also have difficulty dealing with gross topology changes. Full PIC methods, on the other hand, do not in general have topology restrictions. Following the particle-based methods are VOF volume tracking methods, which are reasonably accurate, physically based, robust, low in cost, and relatively easy to implement. Recent enhancements to VOF methods using multidimensional interface reconstruction and improved advection provide excellent results on a wide range of test problems.
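
    A quick way to see why dedicated interface-tracking schemes exist is to advect a sharp volume-fraction field with plain first-order upwinding: mass is conserved but the interface smears out. This toy 1-D sketch (periodic boundary, positive velocity, all parameters invented) illustrates the failure mode that VOF reconstruction and level-set methods are designed to avoid; it is not one of the paper's test problems.

```python
import numpy as np

def advect_upwind(f, velocity, dx, dt, steps):
    """First-order upwind advection of a volume-fraction field f in [0, 1].
    Conservative and monotone for CFL = velocity*dt/dx <= 1, but strongly
    diffusive: an initially sharp interface spreads over many cells."""
    c = velocity * dt / dx            # CFL number, assumed <= 1
    for _ in range(steps):
        f = f - c * (f - np.roll(f, 1))
    return f

f0 = np.zeros(100)
f0[40:60] = 1.0                       # a sharp slab of 'fluid 1'
f1 = advect_upwind(f0, velocity=1.0, dx=1.0, dt=0.5, steps=40)
```

    After 40 steps the slab has moved but its edges occupy a band of intermediate values, which is exactly what geometric VOF reconstruction counteracts.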

  3. Comparison of field swept ferromagnetic resonance methods - A case study using Ni-Mn-Sn films

    Science.gov (United States)

    Modak, R.; Samantaray, B.; Mandal, P.; Srinivasu, V. V.; Srinivasan, A.

    2018-05-01

    Ferromagnetic resonance (FMR) spectroscopy is used to understand the magnetic behavior of a Ni-Mn-Sn Heusler alloy film. Two popular experimental methods for recording FMR spectra are presented here. In-plane angular (φH) variation of the magnetic relaxation is used to evaluate the in-plane anisotropy (Ku) of the film. The out-of-plane (θH) variation of the FMR spectra has been numerically analyzed to extract the Gilbert damping coefficient, effective magnetization and perpendicular magnetic anisotropy (K1). The magnetic homogeneity of the film has also been evaluated in terms of the 2-magnon contribution to the FMR linewidth. The advantages and limitations of these two popular FMR techniques are discussed on the basis of the results obtained in this comparative study.
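
    The standard starting point for analysing such spectra is the in-plane Kittel relation linking resonance field and effective magnetization. The sketch below uses CGS units and an assumed gyromagnetic ratio of 2.8 MHz/Oe (g ≈ 2); the field and magnetization values are illustrative, not the film's measured parameters.

```python
import math

def kittel_in_plane_ghz(h_oe, m_eff_emu_cc, gyro_ghz_per_oe=2.8e-3):
    """In-plane Kittel resonance: f = (gamma/2pi) * sqrt(H * (H + 4*pi*Meff)).
    CGS units; 2.8e-3 GHz/Oe corresponds to g ~ 2 (an assumption here)."""
    return gyro_ghz_per_oe * math.sqrt(h_oe * (h_oe + 4.0 * math.pi * m_eff_emu_cc))

# Illustrative numbers: H = 1000 Oe, Meff = 500 emu/cc -> f ~ 7.6 GHz.
f_ghz = kittel_in_plane_ghz(h_oe=1000.0, m_eff_emu_cc=500.0)
```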

  4. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    Science.gov (United States)

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherently large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of estimated individual networks inherent in existing methods such as the Fisher Z test, and the issue that JGMSS ignores between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, all three methods are applied to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., a normal control group and a patient group; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
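
    The quantity all of these methods estimate is the partial-correlation matrix, obtained from the inverse covariance (precision) matrix. The unregularised sketch below shows that relationship on a small invented covariance matrix; graphical-lasso variants like GMGLASS replace the plain inversion with a sparsity-penalised estimate.

```python
import numpy as np

def partial_correlation(cov):
    """Partial-correlation matrix from a covariance matrix via its inverse
    (the precision matrix P): rho_ij = -P_ij / sqrt(P_ii * P_jj).
    Sparse / group-fused graphical lasso regularises this inversion."""
    precision = np.linalg.inv(cov)
    d = np.sqrt(np.diag(precision))
    rho = -precision / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# A toy 3-node covariance matrix (illustrative only).
cov = np.array([[1.00, 0.50, 0.25],
                [0.50, 1.00, 0.50],
                [0.25, 0.50, 1.00]])
rho = partial_correlation(cov)
```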

  5. Manual versus automatic bladder wall thickness measurements: a method comparison study

    NARCIS (Netherlands)

    Oelke, M.; Mamoulakis, C.; Ubbink, D.T.; de la Rosette, J.J.; Wijkstra, H.

    2009-01-01

    Purpose: To compare the repeatability and agreement of conventional ultrasound bladder wall thickness (BWT) measurements with automatically obtained BWT measurements by the BVM 6500 device. Methods: Adult patients with lower urinary tract symptoms, urinary incontinence, or postvoid residual urine were

  6. Optimizing primary care research participation: a comparison of three recruitment methods in data-sharing studies.

    Science.gov (United States)

    Lord, Paul A; Willis, Thomas A; Carder, Paul; West, Robert M; Foy, Robbie

    2016-04-01

    Recruitment of representative samples in primary care research is essential to ensure high-quality, generalizable results. This is particularly important for research using routinely recorded patient data to examine the delivery of care. Yet little is known about how different recruitment strategies influence the characteristics of the practices included in research. We describe three approaches for recruiting practices to data-sharing studies, examining differences in recruitment levels and practice representativeness. We examined three studies that included varying populations of practices from West Yorkshire, UK. All used anonymized patient data to explore aspects of clinical practice. Recruitment strategies were 'opt-in', 'mixed opt-in and opt-out' and 'opt-out'. We compared aggregated practice data between recruited and not-recruited practices for practice list size, deprivation, chronic disease management, patient experience and rates of unplanned hospital admission. The opt-out strategy had the highest recruitment (80%), followed by mixed (70%) and opt-in (58%). Practices opting-in were larger (median 7153 versus 4722 patients, P = 0.03) than practices that declined to opt-in. Practices recruited by mixed approach were larger (median 7091 versus 5857 patients, P = 0.04) and had differences in the clinical quality measure (58.4% versus 53.9% of diabetic patients with HbA1c ≤ 59 mmol/mol, P Researchers should, with appropriate ethical safeguards, consider opt-out recruitment of practices for studies involving anonymized patient data sharing. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.

  8. A comparison of three methods to measure asthma in epidemiologic studies

    DEFF Research Database (Denmark)

    Hansen, Susanne; Strøm, Marin; Maslova, Ekaterina

    2012-01-01

    , the prevalence of asthma was estimated from a self-administered questionnaire using parental report of doctor diagnoses, ICD-10 diagnoses from a population-based hospitalization registry, and data on anti-asthmatic medication from a population-based prescription registry. We assessed the agreement between...

  9. A COMPARISON STUDY OF DIFFERENT MARKER SELECTION METHODS FOR SPECTRAL-SPATIAL CLASSIFICATION OF HYPERSPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    D. Akbari

    2015-12-01

    An effective approach based on the Minimum Spanning Forest (MSF), grown from markers selected automatically using Support Vector Machines (SVM), has been proposed for spectral-spatial classification of hyperspectral images by Tarabalka et al. This paper aims at improving this approach by using image segmentation to integrate spatial information into the marker selection process. In this study, the markers are extracted from the classification maps obtained by both SVM and segmentation algorithms, and are then used to build the MSF. The segmentation algorithms are watershed, expectation maximization (EM) and hierarchical clustering. These algorithms are used in parallel and independently to segment the image. Moreover, the pixels of each class with the largest population in the classification map are kept for each region of the segmentation map. Lastly, the most reliable classified pixels are chosen from among the existing pixels as markers. Two benchmark urban hyperspectral datasets are used for evaluation: Washington DC Mall and Berlin. The results of our experiments indicate that, compared to the original MSF approach, marker selection using segmentation algorithms leads to more accurate classification maps.

  10. A METHOD OF INTEGRATED ASSESSMENT OF LIVER FIBROSIS DEGREE: A COMPARISON OF THE OPTICAL METHOD FOR THE STUDY OF RED BLOOD CELLS AND INDIRECT ELASTOGRAPHY OF THE LIVER

    Directory of Open Access Journals (Sweden)

    M. V. Kruchinina

    2017-01-01

    The presented method for integrated assessment of the degree of liver fibrosis is based on comparing data obtained from the study of the electrical and viscoelastic parameters of erythrocytes by dielectrophoresis, using an electro-optical cell-detection system, with the results of indirect elastometry. A high degree of comparability between the results of the two methods was established for fibrosis degrees F2-4 in the absence of marked cytolysis, cholestasis, inflammatory syndrome and metal overload. It is shown that parallel use of dielectrophoresis and indirect elastography is needed in the presence of a rise of transaminases or gamma-glutamyl transpeptidase above 5 times the norm, marked dysproteinemia, or syndromes of iron or copper overload, to increase the accuracy in determining the degree of liver fibrosis. The evaluation of the dynamics of changes in the degree of fibrosis during antiviral therapy is more accurate by indirect elastometry, whereas dielectrophoresis is preferable in the treatment of nonalcoholic fatty liver disease. In cases of restrictions on the use of indirect elastography (marked obesity, ascites, cholelithiasis, pregnancy, presence of a pacemaker or prosthesis), dielectrophoresis of red blood cells can be used to determine the degree of fibrosis. Simultaneous use of both methods (using the identified discriminatory values) allows their disadvantages and dependence on associated syndromes to be reduced or neutralized, the range of their application to be expanded, and the diagnostic accuracy in determining each degree of liver fibrosis to be improved. Integrated application of both methods, indirect elastography and dielectrophoresis of red blood cells, for determining the degree of liver fibrosis achieves high levels of sensitivity (88.9 percent) and specificity (100 percent) compared to

  11. Two Methods for Turning and Positioning and the Effect on Pressure Ulcer Development: A Comparison Cohort Study.

    Science.gov (United States)

    Powers, Jan

    2016-01-01

    We evaluated 2 methods for patient positioning on the development of pressure ulcers; specifically, standard of care (SOC) using pillows versus a patient positioning system (PPS). The study also compared turning effectiveness as well as nursing resources related to patient positioning and nursing injuries. A nonrandomized comparison design was used for the study. Sixty patients from a trauma/neurointensive care unit were included in the study. Patients were randomly assigned to 1 of 2 teams per standard bed placement practices at the institution. Patients were identified for enrollment in the study if they were immobile and mechanically ventilated with anticipation of 3 days or more on mechanical ventilation. Patients were excluded if they had a preexisting pressure ulcer. Patients were evaluated daily for the presence of pressure ulcers. Data were collected on the number of personnel required to turn patients. Once completed, the angle of the turn was measured. The occupational health database was reviewed to determine nurse injuries. The final sample size was 59 (SOC = 29; PPS = 30); there were no statistical differences between groups for age (P = .10), body mass index (P = .65), gender (P = .43), Braden Scale score (P = .46), or mobility score (P = .10). There was a statistically significant difference in the number of hospital-acquired pressure ulcers between turning methods (6 in the SOC group vs 1 in the PPS group; P = .042). The number of nurses needed for the SOC method was significantly higher than the PPS (P ≤ 0.001). The average turn angle achieved using the PPS was 31.03°, while the average turn angle achieved using SOC was 22.39°. The difference in turn angle from initial turn to 1 hour after turning in the SOC group was statistically significant (P patients to prevent development of pressure ulcers.

  12. Study of n-γ discrimination by digital charge comparison method for a large volume liquid scintillator

    International Nuclear Information System (INIS)

    Moszynski, M.; Costa, G.J.; Guillaume, G.; Heusch, B.; Huck, A.; Ring, C.; Bizard, G.; Durand, D.; Peter, J.; Tamain, B.; El Masri, Y.; Hanappe, F.

    1992-01-01

    The n-γ discrimination performance of a large 4 l volume BC501A liquid scintillator coupled to a 130 mm diameter XP4512B photomultiplier was studied by the digital charge comparison method. A very good n-γ discrimination down to 100 keV of recoil electron energy was achieved. The measured relative intensity of the charge integrated in the slow component of the scintillation pulse and the photoelectron yield of the tested counter allow the figure of merit of the n-γ discrimination spectra to be calculated and compared with those measured experimentally. This shows that the main limitation of the n-γ discrimination is associated with the statistical fluctuation of the photoelectron number in the slow component. The serious effect of distortion in the cable used to send the photomultiplier pulse to the n-γ discrimination electronics was also studied. This suggests that the length of RG58 cable should be limited to about 40 m to preserve a high quality n-γ discrimination. (orig.)
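
The figure of merit referred to above is conventionally defined from the charge-comparison histogram as the separation of the γ and neutron peaks divided by the sum of their widths (FWHM). A minimal sketch, with made-up peak parameters purely for illustration:

```python
def figure_of_merit(peak_gamma, peak_neutron, fwhm_gamma, fwhm_neutron):
    """n-gamma discrimination figure of merit:
    peak separation divided by the sum of the two peaks' FWHMs."""
    return abs(peak_neutron - peak_gamma) / (fwhm_gamma + fwhm_neutron)

# Hypothetical slow/total charge-ratio peak positions and widths
fom = figure_of_merit(peak_gamma=0.10, peak_neutron=0.25,
                      fwhm_gamma=0.04, fwhm_neutron=0.05)
print(round(fom, 3))  # 1.667
```

A figure of merit above roughly 1.5 is usually taken to indicate well-separated event populations; values like the ones above are assumptions, not the paper's data.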

  13. Measurement of Capsaicinoids in Chiltepin Hot Pepper: A Comparison Study between Spectrophotometric Method and High Performance Liquid Chromatography Analysis

    Directory of Open Access Journals (Sweden)

    Alberto González-Zamora

    2015-01-01

    Direct spectrophotometric determination of capsaicinoid content in Chiltepin pepper was investigated as a possible alternative to HPLC analysis. Capsaicinoids were extracted from red ripe and green Chiltepin fruit with acetonitrile and evaluated quantitatively using the HPLC method with capsaicin and dihydrocapsaicin standards. Three samples with different treatments were successfully analyzed for their capsaicinoid content by these methods. HPLC-DAD revealed that capsaicin, dihydrocapsaicin and nordihydrocapsaicin comprised up to 98% of the total capsaicinoids detected. The absorbance of the diluted samples was read on a spectrophotometer at 215-300 nm and monitored at 280 nm. We report herein the comparison between traditional UV assays and HPLC-DAD methods for the determination of the molar absorptivity coefficients of capsaicin (ε280 = 3,410 and 3,720 M−1 cm−1) and dihydrocapsaicin (ε280 = 4,175 and 4,350 M−1 cm−1), respectively. Statistical comparisons were performed using regression analyses (ordinary linear regression and Deming regression) and Bland-Altman analysis. Comparative data for pungency were determined spectrophotometrically and by HPLC on samples ranging from 29.55 to 129 mg/g, with a correlation of 0.91. These results indicate that the two methods significantly agree. The described spectrophotometric method can be routinely used for total capsaicinoid analysis and quality control in agricultural and pharmaceutical analysis.
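
The spectrophotometric route rests on the Beer-Lambert law, A = ε·c·l, so with a known molar absorptivity the capsaicinoid concentration follows directly from the 280 nm absorbance. A minimal sketch using the ε280 value for capsaicin quoted in the abstract; the sample absorbance is invented for illustration:

```python
def concentration_molar(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law rearranged: c = A / (epsilon * l),
    with epsilon in M^-1 cm^-1 and path length l in cm."""
    return absorbance / (epsilon * path_cm)

# Hypothetical diluted extract reading A = 0.341 at 280 nm, with
# epsilon_280 = 3410 M^-1 cm^-1 for capsaicin (from the abstract)
c = concentration_molar(0.341, 3410)
print(c)  # ~1e-4 M
```

Any dilution factor applied during sample preparation would multiply this result back to the original extract concentration.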

  14. A comparison of two methods of measuring static coefficient of friction at low normal forces: a pilot study.

    Science.gov (United States)

    Seo, Na Jin; Armstrong, Thomas J; Drinkaus, Philip

    2009-01-01

    This study compares two methods for estimating static friction coefficients for skin. In the first method, referred to as the 'tilt method', a hand supporting a flat object is tilted until the object slides. The friction coefficient is estimated as the tangent of the angle of the object at the slip. The second method estimates the friction coefficient as the pull force required to begin moving a flat object over the surface of the hand, divided by object weight. Both methods were used to estimate friction coefficients for 12 subjects and three materials (cardboard, aluminium, rubber) against a flat hand and against fingertips. No differences in static friction coefficients were found between the two methods, except for that of rubber, where friction coefficient was 11% greater for the tilt method. As with previous studies, the friction coefficients varied with contact force and contact area. Static friction coefficient data are needed for analysis and design of objects that are grasped or manipulated with the hand. The tilt method described in this study can easily be used by ergonomic practitioners to estimate static friction coefficients in the field in a timely manner.
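
The two estimators described in this abstract reduce to simple formulas: μ = tan(θ) for the tilt method, and μ = (pull force at slip) / (object weight) for the pull method. A small sketch, with illustrative numbers rather than the study's data:

```python
import math

def mu_tilt(angle_deg):
    """Tilt method: static friction coefficient is the tangent of the
    tilt angle at which the object begins to slide."""
    return math.tan(math.radians(angle_deg))

def mu_pull(pull_force_n, weight_n):
    """Pull method: force needed to start sliding divided by weight."""
    return pull_force_n / weight_n

# An object that slips at a 30 degree tilt, or that needs 5.77 N to
# start moving under a 10 N weight, gives mu of about 0.577 either way.
print(round(mu_tilt(30.0), 3), round(mu_pull(5.77, 10.0), 3))
```

The equivalence holds because at the slip angle the ratio of the gravity component along the surface to the normal component is exactly tan(θ).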

  15. Assessment of interchangeability rate between 2 methods of measurements: An example with a cardiac output comparison study.

    Science.gov (United States)

    Lorne, Emmanuel; Diouf, Momar; de Wilde, Robert B P; Fischer, Marc-Olivier

    2018-02-01

    The Bland-Altman (BA) and percentage error (PE) methods have been previously described to assess the agreement between 2 methods of medical or laboratory measurements. This type of approach raises several problems: the BA methodology constitutes a subjective approach to interchangeability, whereas the PE approach does not take into account the distribution of values over a range. We describe a new methodology that defines an interchangeability rate between 2 methods of measurement, together with cutoff values that determine the range of interchangeable values. We used simulated data and a previously published data set to demonstrate the concept of the method. The interchangeability rate of 5 different cardiac output (CO) pulse contour techniques (Wesseling method, LiDCO, PiCCO, Hemac method, and Modelflow) was calculated in comparison with the reference pulmonary artery thermodilution CO using our new method. In our example, Modelflow, with a good interchangeability rate of 93% and a cutoff value of 4.8 L/min, was found to be interchangeable with the thermodilution method for >95% of measurements. Modelflow had a higher interchangeability rate compared to Hemac (93% vs 86%; P = .022) or the other monitors (Wesseling cZ = 76%, LiDCO = 73%, and PiCCO = 62%; P < .0001). Simulated data and reanalysis of a data set comparing 5 CO monitors against thermodilution CO showed that, depending on the repeatability of the reference method, the interchangeability rate combined with a cutoff value could be used to define the range of values over which interchangeability remains acceptable.
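
For context, the two standard agreement measures this abstract contrasts can be computed as follows: the Bland-Altman bias with 95% limits of agreement, and the percentage error (1.96 × SD of the method differences divided by the mean reference CO). The paired cardiac output values below are invented for illustration and are not the study's data:

```python
from statistics import mean, stdev

def bland_altman(ref, test):
    """Bias and 95% limits of agreement between two paired methods."""
    diffs = [t - r for r, t in zip(ref, test)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def percentage_error(ref, test):
    """Percentage error: 1.96 * SD(differences) / mean(reference), in %."""
    diffs = [t - r for r, t in zip(ref, test)]
    return 100 * 1.96 * stdev(diffs) / mean(ref)

# Hypothetical paired cardiac output readings (L/min)
ref  = [4.0, 5.0, 6.0, 5.5, 4.5]   # thermodilution reference
test = [4.2, 5.1, 5.8, 5.7, 4.6]   # pulse contour method
bias, loa = bland_altman(ref, test)
pe = percentage_error(ref, test)
```

A common (and, as the abstract argues, debatable) convention is to accept a new CO monitor when the percentage error stays below about 30%.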

  16. A Comparison of Distillery Stillage Disposal Methods

    Directory of Open Access Journals (Sweden)

    V. Sajbrt

    2010-01-01

    This paper compares the main stillage disposal methods from the point of view of technology, economics and energetics. Attention is paid to the disposal of both the solid and the liquid phase. Specifically, the following methods are considered: a) livestock feeding, b) combustion of granulated stillages, c) fertilizer production, d) anaerobic digestion with biogas production and e) chemical pretreatment and subsequent secondary treatment. Other disposal techniques mentioned in the literature (electro-Fenton reaction, electrocoagulation and reverse osmosis) have not been considered, due to their high costs and technological requirements. Energy and economic calculations were carried out for a planned production of 120 m3 of stillage per day in a given distillery. Only specific treatment operating costs (per 1 m3 of stillage) were compared, including operational costs for energy, transport and chemicals. These values were determined for January 31st, 2009. The resulting sequence of cost effectiveness was: 1. chemical pretreatment, 2. combustion of granulated stillage, 3. transportation of stillage to a biogas station, 4. fertilizer production, 5. livestock feeding. This study found that chemical pretreatment of stillage with secondary treatment (a method developed at the Department of Process Engineering, CTU) was more suitable than the other methods, and it also offers some important technical advantages. Using this method, the total operating costs are approximately 1150 €/day, i.e. about 9.5 €/m3 of stillage. The price of chemicals is the most important item in these costs, representing about 85% of the total operating costs.

  17. Sampling methods in archaeomagnetic dating: A comparison using case studies from Wörterberg, Eisenerz and Gams Valley (Austria)

    Science.gov (United States)

    Trapanese, A.; Batt, C. M.; Schnepp, E.

    The aim of this research was to review the relative merits of different methods of taking samples for archaeomagnetic dating. To allow different methods to be investigated, two archaeological structures and one modern fireplace were sampled in Austria. On each structure a variety of sampling methods were used: the tube and disc techniques of Clark et al. (Clark, A.J., Tarling, D.H., Noel, M., 1988. Developments in archaeomagnetic dating in Great Britain. Journal of Archaeological Science 15, 645-667), the drill core technique, the mould plastered hand block method of Thellier, and a modification of it. All samples were oriented with a magnetic compass and sun compass, where weather conditions allowed. Approximately 12 discs, tubes, drill cores or plaster hand blocks were collected from each structure, with one mould plaster hand block being collected and cut into specimens. The natural remanent magnetisation (NRM) of the samples was measured and stepwise alternating field (AF) or thermal demagnetisation was applied. Samples were measured either in the UK or in Austria, which allowed the comparison of results between magnetometers with different sensitivity. The tubes and plastered hand block specimens showed good agreement in directional results, and the samples obtained showed good stability. The discs proved to be unreliable as both NRM and the characteristic remanent magnetisation (ChRM) distribution were very scattered. The failure of some methods may be related to the suitability of the material sampled, for example if it was disturbed before sampling, had been insufficiently heated or did not contain appropriate magnetic minerals to retain a remanent magnetisation. Caution is also recommended for laboratory procedures as the cutting of poorly consolidated specimens may disturb the material and therefore the remanent magnetisation. Criteria and guidelines were established to aid researchers in selecting the most appropriate method for a particular

  18. A Comparison of Case Study and Traditional Teaching Methods for Improvement of Oral Communication and Critical-Thinking Skills

    Science.gov (United States)

    Noblitt, Lynnette; Vance, Diane E.; Smith, Michelle L. DePoy

    2010-01-01

    This study compares a traditional paper presentation approach and a case study method for the development and improvement of oral communication skills and critical-thinking skills in a class of junior forensic science majors. A rubric for rating performance in these skills was designed on the basis of the oral communication competencies developed…

  19. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP) and the Price of SMR 20 Rubber Type (cents/kg), forming three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction error, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasting for the Exchange Rates, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
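
The three accuracy measures used in this comparison are simple functions of the forecast errors. A sketch with toy numbers (not the study's series):

```python
def mse(actual, forecast):
    """Mean Squared Error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual   = [10.0, 12.0, 14.0]
forecast = [11.0, 12.0, 13.0]
print(round(mse(actual, forecast), 3),
      round(mape(actual, forecast), 3),
      round(mad(actual, forecast), 3))
```

MAPE is scale-free, which is why it is often preferred when comparing series measured in different units (RM/tonne vs cents/kg), while MSE penalizes large errors more heavily.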

  20. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in the depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamma-ray attenuation by the patient's tissues. A method of attenuation correction, which uses a single posteriorly located scintillation camera and correction factors derived from a lateral image of the stomach, was compared with a two-camera geometric mean method, in phantom studies and in five volunteer subjects. A meal of 100 g of ground beef containing 99mTc-labelled chicken liver, and 150 ml of water, was used in the in vivo studies. In all subjects the geometric mean data showed that solid food emptied in two phases: an initial lag period, followed by a linear emptying phase. Using the geometric mean data as a standard, the anterior camera overestimated the 50% emptying time (T50) by an average of 15% (range 5-18) and the posterior camera underestimated this parameter by 15% (4-22). The posterior data, corrected for attenuation using the lateral image method, underestimated the T50 by 2% (-7 to +7). The difference in the distances of the proximal and distal stomach from the posterior detector was large in all subjects (mean 5.7 cm, range 3.9-7.4).
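
The geometric mean correction works because the attenuation factors seen by the posterior view (source at depth d) and the anterior view (depth D - d) multiply to a depth-independent constant: sqrt(e^(-μd) · e^(-μ(D-d))) = e^(-μD/2). A small numerical sketch with assumed values (μ = 0.12 cm⁻¹, a 20 cm anterior-posterior thickness), not taken from the paper:

```python
import math

MU = 0.12      # assumed linear attenuation coefficient, cm^-1
D = 20.0       # assumed anterior-posterior body thickness, cm
TRUE = 1000.0  # true (unattenuated) counts from the source

def views(depth_cm):
    """Attenuated counts seen by the posterior and anterior detectors
    for a source at the given depth from the posterior surface."""
    post = TRUE * math.exp(-MU * depth_cm)
    ant = TRUE * math.exp(-MU * (D - depth_cm))
    return post, ant

# Whatever the source depth, the geometric mean of the two views is
# the same, so emptying curves are insensitive to radionuclide depth.
gms = [math.sqrt(p * a) for p, a in (views(d) for d in (5.0, 10.0, 15.0))]
print([round(g, 1) for g in gms])
```

The single-camera lateral-image method evaluated in the paper instead estimates the source depth from a lateral view and applies the corresponding correction factor to the posterior counts.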

  1. Comparison study on cell calculation method of fast reactor

    International Nuclear Information System (INIS)

    Chiba, Gou

    2002-10-01

    Effective cross sections obtained by cell calculations are used in core calculations in current deterministic methods. It is therefore important to calculate the effective cross sections accurately, and several methods have been proposed. In this study, some of these methods are compared to each other using a continuous-energy Monte Carlo method as a reference. The results show that the table look-up method used at the Japan Nuclear Cycle Development Institute (JNC) sometimes differs by over 10% in effective microscopic cross sections and is inferior to the sub-group method. This problem was overcome by introducing a new nuclear constant system developed at JNC, in which an ultra-fine energy group library is used. The system can also deal with resonance interaction effects between nuclides, which cannot be considered by the other methods. In addition, a new method was proposed to calculate effective cross sections accurately for a power reactor fuel subassembly, where the new nuclear constant system cannot be applied. This method uses the sub-group method and the ultra-fine energy group collision probability method. The microscopic effective cross sections obtained by this method agree with the reference values to within a 5% difference. (author)

  2. Evaluation of analytical reconstruction with a new gap-filling method in comparison to iterative reconstruction in [11C]-raclopride PET studies

    International Nuclear Information System (INIS)

    Tuna, U.; Johansson, J.; Ruotsalainen, U.

    2014-01-01

    The aim of the study was (1) to evaluate the reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered-subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM. The image reconstruction strategies were evaluated using human data at different count statistics and, consequently, at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BPND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with

  3. A Comparison of Distillery Stillage Disposal Methods

    OpenAIRE

    V. Sajbrt; M. Rosol; P. Ditl

    2010-01-01

    This paper compares the main stillage disposal methods from the point of view of technology, economics and energetics. Attention is paid to the disposal of both solid and liquid phase. Specifically, the following methods are considered: a) livestock feeding, b) combustion of granulated stillages, c) fertilizer production, d) anaerobic digestion with biogas production and e) chemical pretreatment and subsequent secondary treatment. Other disposal techniques mentioned in the literature (electro...

  4. Age modelling for Pleistocene lake sediments: A comparison of methods from the Andean Fúquene Basin (Colombia) case study

    NARCIS (Netherlands)

    Groot, M.H.M.; van der Plicht, J.; Hooghiemstra, H.; Lourens, L.J.; Rowe, H.D.

    2014-01-01

    Challenges and pitfalls for developing age models for long lacustrine sedimentary records are discussed and a comparison is made between radiocarbon dating, visual curve matching, and frequency analysis in the depth domain in combination with cyclostratigraphy. A core section of the high resolution

  6. Structural equation and log-linear modeling: a comparison of methods in the analysis of a study on caregivers' health

    Directory of Open Access Journals (Sweden)

    Rosenbaum Peter L

    2006-10-01

    Abstract. Background: In this paper we compare the results of an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods: The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP). The self-completed questionnaires and home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results: The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion: No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the structural equation modeling (SEM) results by the log-linear modeling (LLM) provided some reassurance that the SEM had been adequately specified, and that it broadly fitted the data.

  7. A comparison of methods for cascade prediction

    OpenAIRE

    Guo, Ruocheng; Shakarian, Paulo

    2016-01-01

    Information cascades exist in a wide variety of platforms on the Internet. A very important real-world problem is to identify which information cascades can go viral. A system addressing this problem can be used in a variety of applications including public health, marketing and counter-terrorism. A cascade can be considered as a compound of the social network and the time series. However, in the related literature where methods for solving the cascade prediction problem were proposed, the experimen...

  8. A comparison between the conventional manual ROI method and an automatic algorithm for semiquantitative analysis of SPECT studies

    International Nuclear Information System (INIS)

    Pagan, L; Novi, B; Guidarelli, G; Tranfaglia, C; Galli, S; Lucchi, G; Fagioli, G

    2011-01-01

    In this study, the performance of a free software package for automatic segmentation of striatal SPECT brain studies (BasGanV2 - www.aimn.it) and a standard manual Region Of Interest (ROI) method were compared. The anthropomorphic Alderson RSD phantom, filled with solutions at different concentrations of 123I-FP-CIT with Caudate-Putamen to Background ratios between 1 and 8.7 and Caudate to Putamen ratios between 1 and 2, was imaged on a Philips-Irix triple head gamma camera. Images were reconstructed using filtered back-projection and processed with both BasGanV2, which provides normalized striatal uptake values on volumetric anatomical ROIs, and a manual method based on average counts per voxel in ROIs drawn in a three-slice section. Caudate-Putamen/Background and Caudate/Putamen ratios obtained with the two methods were compared with the true experimental ratios. Good correlation was found for each method; BasGanV2, however, had a higher R index (BasGan R mean = 0.95 vs manual R mean = 0.89), making it well suited to semiquantitative analysis of 123I-FP-CIT SPECT data with, moreover, the advantage of the availability of a control subjects' database.
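
The semiquantitative ratios compared in this record reduce to simple arithmetic on ROI mean counts. A minimal sketch, using one common convention for specific binding ((ROI − background) / background); all count values are hypothetical, not the phantom's:

```python
# Sketch: semiquantitative SPECT uptake ratios from ROI mean counts.
# All count values below are hypothetical, for illustration only.

def binding_ratio(roi_mean, background_mean):
    """Specific-to-background ratio: (ROI - BKG) / BKG."""
    return (roi_mean - background_mean) / background_mean

caudate, putamen, background = 185.0, 150.0, 50.0
caudate_bkg = binding_ratio(caudate, background)   # (185 - 50) / 50 = 2.7
putamen_bkg = binding_ratio(putamen, background)   # (150 - 50) / 50 = 2.0
caudate_putamen = caudate_bkg / putamen_bkg        # 2.7 / 2.0 = 1.35
```

Whether a package reports this ratio or the raw ROI/background quotient depends on its normalization convention, which is why methods are validated against known phantom ratios.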

  9. A global multicenter study on reference values: 1. Assessment of methods for derivation and comparison of reference intervals.

    Science.gov (United States)

    Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K

    2017-04-01

    The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
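
The parametric route described above (transform toward a Gaussian, take mean ± 1.96 SD, back-transform) can be sketched with a plain one-parameter Box-Cox transform on synthetic data. This is a simplification: the study uses a modified Box-Cox formula with an origin-shift parameter, and the LAVE exclusion step is omitted entirely.

```python
import math, random, statistics

# Sketch of a parametric reference-interval derivation on synthetic,
# log-normally distributed "analyte" values (not study data).

def boxcox(x, lam):
    """One-parameter Box-Cox power transform."""
    return math.log(x) if lam == 0 else (x**lam - 1.0) / lam

def inv_boxcox(y, lam):
    """Inverse of the Box-Cox transform."""
    return math.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

def parametric_ri(values, lam=0.0, z=1.96):
    """Central ~95% reference interval via mean +/- z*SD on the
    transformed scale, back-transformed to the original scale."""
    t = [boxcox(v, lam) for v in values]
    m, s = statistics.fmean(t), statistics.stdev(t)
    return inv_boxcox(m - z * s, lam), inv_boxcox(m + z * s, lam)

random.seed(0)
values = [random.lognormvariate(4.0, 0.25) for _ in range(500)]
lo, hi = parametric_ri(values, lam=0.0)   # lam=0 is the log transform
```

With lam chosen to normalize the transformed values, the limits approximate the nonparametric 2.5th/97.5th percentiles but with less sampling noise, which is the method's appeal at modest sample sizes.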

  10. Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods.

    Directory of Open Access Journals (Sweden)

    Nawar Shara

    Full Text Available Kidney and cardiovascular disease are widespread among populations with high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.
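
Two of the five strategies named above are simple enough to sketch on one participant's exam series (listwise deletion just drops the row; multiple imputation and pattern-mixture models are more involved and omitted). The numbers are hypothetical, not Strong Heart Study data:

```python
# Toy illustration of two single-value imputation strategies for a
# longitudinal record with one missing exam. Values are hypothetical.

def mean_of_serial(row):
    """Impute with the mean of the participant's observed exams."""
    observed = [v for v in row if v is not None]
    return sum(observed) / len(observed)

def adjacent_value(row, idx):
    """Impute with the nearest observed neighbour, preferring the earlier exam."""
    for j in (idx - 1, idx + 1):
        if 0 <= j < len(row) and row[j] is not None:
            return row[j]
    raise ValueError("no adjacent observation")

exams = [90.0, None, 78.0]                 # renal measure missing at exam 2
fill_mean = mean_of_serial(exams)          # (90 + 78) / 2 = 84.0
fill_adjacent = adjacent_value(exams, 1)   # carries exam 1 forward: 90.0
```

Both strategies assume the missingness is unrelated to the unobserved value, which is exactly the assumption the abstract's non-random missingness scenario violates.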

  11. Comparison of fatigue accumulated during and after prolonged robotic and laparoscopic surgical methods: a cross-sectional study.

    Science.gov (United States)

    González-Sánchez, Manuel; González-Poveda, Ivan; Mera-Velasco, Santiago; Cuesta-Vargas, Antonio I

    2017-03-01

    The aim of the present study was to analyse the fatigue experienced by surgeons during and after performing robotic and laparoscopic surgery and to analyse muscle function, self-perceived fatigue and postural balance. Cross-sectional study considering two surgical protocols (laparoscopic and robotic) with two different roles (chief and assistant surgeon). Fatigue was recorded in two ways: pre- and post-surgery using questionnaires [Profile of Mood States (POMS), Quick Questionnaire Piper Fatigue Scale and Visual Analogue Scale (VAS)-related fatigue] and parametrising functional tests [handgrip and single-leg balance test (SLBT)] and during the intervention by measuring the muscle activation of eight different muscles via surface electromyography and kinematic measurement (using inertial sensors). Each surgery profile intervention (robotic/laparoscopy-chief/assistant surgeon) was measured three times, totalling 12 measured surgery interventions. The minimal duration of surgery was 180 min. Pre- and post-surgery, all questionnaires showed that the magnitude of change was higher for the chief surgeon compared with the assistant surgeon, with differences of between 10 % POMS and 16.25 % VAS (robotic protocol) and between 3.1 % POMS and 12.5 % VAS (laparoscopic protocol). In the inter-profile comparison, the chief surgeon (robotic protocol) showed a lower balance capacity during the SLBT after surgery. During the intervention, the kinematic variables showed significant differences between the chief and assistant surgeon in the robotic protocol, but not in the laparoscopic protocol. Regarding muscle activation, there was not enough muscle activity to generate fatigue. Prolonged surgery increased fatigue in the surgeon; however, the magnitude of fatigue differed between surgical profiles. The surgeon who experienced the greatest fatigue was the chief surgeon in the robotic protocol.

  12. Evaluation of rainfall structure on hydrograph simulation: Comparison of radar and interpolated methods, a study case in a tropical catchment

    Science.gov (United States)

    Velasquez, N.; Ochoa, A.; Castillo, S.; Hoyos Ortiz, C. D.

    2017-12-01

    The skill of river discharge simulation using hydrological models strongly depends on the quality and spatio-temporal representativeness of precipitation during storm events. All precipitation measurement strategies have their own strengths and weaknesses that translate into discharge simulation uncertainties. Distributed hydrological models are based on evolving rainfall fields in the same time scale as the hydrological simulation. In general, rainfall measurements from a dense and well maintained rain gauge network provide a very good estimation of the total volume for each rainfall event, however, the spatial structure relies on interpolation strategies introducing considerable uncertainty in the simulation process. On the other hand, rainfall retrievals from radar reflectivity achieve a better spatial structure representation but with higher uncertainty in the surface precipitation intensity and volume depending on the vertical rainfall characteristics and radar scan strategy. To assess the impact of both rainfall measurement methodologies on hydrological simulations, and in particular the effects of the rainfall spatio-temporal variability, a numerical modeling experiment is proposed including the use of a novel QPE (Quantitative Precipitation Estimation) method based on disdrometer data in order to estimate surface rainfall from radar reflectivity. The experiment is based on the simulation of 84 storms, the hydrological simulations are carried out using radar QPE and two different interpolation methods (IDW and TIN), and the assessment of simulated peak flow. Results show significant rainfall differences between radar QPE and the interpolated fields, evidencing a poor representation of storms in the interpolated fields, which tend to miss the precise location of the intense precipitation cores, and to artificially generate rainfall in some areas of the catchment. Regarding streamflow modelling, the potential improvement achieved by using radar QPE depends on
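
One of the two gauge-interpolation schemes named in the abstract, inverse-distance weighting (IDW), can be sketched in a few lines; the gauge coordinates and rainfall values below are hypothetical:

```python
# Sketch: inverse-distance-weighted (IDW) rainfall interpolation at a
# point from surrounding gauges. Gauge data are hypothetical.

def idw(x, y, gauges, power=2.0):
    """gauges: iterable of (gx, gy, rain). Returns interpolated rain at (x, y)."""
    num = den = 0.0
    for gx, gy, rain in gauges:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0.0:
            return rain                      # exactly on a gauge
        weight = 1.0 / d2 ** (power / 2.0)   # 1 / distance**power
        num += weight * rain
        den += weight
    return num / den

gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 30.0)]
mid = idw(0.5, 0.0, gauges)   # equidistant from both gauges -> 20.0
```

The smoothing visible here is the mechanism behind the abstract's finding: IDW spreads rain toward every gauge's surroundings, blurring intense cores and generating rainfall where none fell.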

  13. Study and comparison of different control methods in a light water critical facility

    International Nuclear Information System (INIS)

    Michaiel, M.L.; Mahmoud, M.S.

    1980-01-01

    The control of nuclear reactors may be studied using several control methods, such as control by rod absorbers, by inserting or removing fuel rods (moderator cavities), or by changing the reflector thickness. Every method has its advantages; the comparison between these different methods and their effect on the reactivity of a reactor is the purpose of this work. A computer program written by the authors calculates the critical radius and the control worth for each of the three aforementioned control methods.

  14. Ongoing challenges to finding people with Parkinson's disease for epidemiological studies: a comparison of population-level case ascertainment methods.

    Science.gov (United States)

    Harris, M Anne; Koehoorn, Mieke; Teschke, Kay

    2011-07-01

    Locating Parkinson's disease cases for epidemiological studies has long been challenging. Self reports, secondary records of physician diagnosis and drug tracer methods each exhibit known disadvantages but have rarely been compared directly. Prescriptions of levodopa have in some studies been considered to comprise a reasonable proxy for Parkinson's disease diagnosis. We tested this assumption by comparing three methods of population-level case ascertainment. We compared the number of Parkinson's disease cases in British Columbia derived from self-reports in the 2001 Canadian Community Health Survey to those obtained from administrative records of filled levodopa prescriptions and to Parkinson's disease diagnoses from physician visit billing and hospital discharge records in 1996 and 2005. We directly compared a case definition based on levodopa prescriptions with a definition based on records of physician diagnosis by calculating positive predictive value and sensitivity. Crude prevalence estimates ranged from approximately 100 to 200 per 100,000. Levodopa-based case definitions overestimated prevalence, while physician- and hospital-record-based case definitions provided lower prevalence estimates compared to survey derived estimates. The proportion of levodopa users with a diagnosis of Parkinson's disease declined from 62% to 52% between 1996 and 2005. This decrease was most dramatic among women (64%-44%) and those under age 65 (54%-39%). Sex and age trends suggest increasing use of levodopa among patients with conditions other than Parkinson's disease, such as restless legs syndrome. Increased non-Parkinson's levodopa use decreases the efficiency of levodopa as a Parkinson's disease case tracer. Copyright © 2011 Elsevier Ltd. All rights reserved.
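
The two agreement measures used above to compare case definitions reduce to ratios over a 2×2 cross-classification. A minimal sketch with hypothetical counts (loosely echoing the ~62% figure in the abstract, not the actual British Columbia data):

```python
# Sketch: agreement of a drug-tracer case definition against a
# diagnosis-based reference standard. Counts are hypothetical.

def ppv(true_pos, false_pos):
    """Positive predictive value: fraction of tracer-positive cases
    that carry the reference diagnosis."""
    return true_pos / (true_pos + false_pos)

def sensitivity(true_pos, false_neg):
    """Fraction of reference-diagnosed cases the tracer captures."""
    return true_pos / (true_pos + false_neg)

# hypothetical 2x2: levodopa users vs physician-record PD diagnosis
tp, fp, fn = 620, 380, 120
tracer_ppv = ppv(tp, fp)            # 620 / 1000 = 0.62
tracer_sens = sensitivity(tp, fn)   # 620 / 740 ~= 0.84
```

A falling PPV over time, as reported here, means the tracer increasingly flags people without the target diagnosis even if its sensitivity stays acceptable.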

  15. Comparison of two occurrence risk assessment methods for collapse gully erosion: a case study in Guangdong province

    Science.gov (United States)

    Sun, K.; Cheng, D. B.; He, J. J.; Zhao, Y. L.

    2018-02-01

    Collapse gully erosion is a specific type of soil erosion in the red soil region of southern China, and early warning and prevention of its occurrence are very important. Based on the idea of risk assessment, this research, taking Guangdong province as an example, adopts information acquisition analysis and logistic regression analysis to discuss the feasibility of collapse gully erosion risk assessment at the regional scale, and compares the applicability of the different risk assessment methods. The results show that in Guangdong province, the risk of collapse gully erosion occurrence is high in the northeastern and western areas, and relatively low in the southwestern and central parts. The comparative analysis of the different risk assessment methods also indicated that the risk distribution patterns from the different methods were basically consistent. However, the accuracy of the risk map from the information acquisition analysis method was slightly better than that from the logistic regression analysis method.
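
The second of the two approaches, logistic regression, can be sketched with plain stochastic gradient descent. The two "terrain predictors" and the occurrence labels below are hypothetical and already normalized; the study's actual predictors, software, and the information acquisition analysis are not reproduced here.

```python
import math

# Minimal logistic-regression sketch for a binary "occurrence" outcome
# from two hypothetical predictors; plain SGD, no external libraries.

def sigmoid(z):
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    w = [0.0] * (len(X[0]) + 1)          # bias + one weight per predictor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi                 # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, xi):
    """Predicted occurrence risk in [0, 1] for one location."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# separable toy data: occurrence risk rises with both predictors
X = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.7), (0.3, 0.2), (0.7, 0.8)]
y = [0, 0, 1, 1, 0, 1]
w = fit_logistic(X, y)
```

Applying `predict` over a grid of cells is what turns the fitted model into the kind of regional risk map the abstract evaluates.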

  16. Comparison of Nutrigenomics Technology Interface Tools for Consumers and Health Professionals: Protocol for a Mixed-Methods Study.

    Science.gov (United States)

    Littlejohn, Paula; Cop, Irene; Brown, Erin; Afroze, Rimi; Davison, Karen M

    2018-06-11

    Although nutrition interventions are a widely accepted resource for the prevention of long-term health conditions, current approaches have not adequately reduced chronic disease morbidity. Nutrigenomics has great potential; however, it is complicated to implement. There is a need for products based on nutrition-related gene test results that are easily understood, accessible, and used. The primary objective of this study was to compare a nonpractitioner-assisted direct-to-consumer self-driven approach to nutrigenomics versus an integrated and personalized practitioner-led method. This 4-month study used a mixed-methods design that included (1) a phase 1 randomized controlled trial that examined the effectiveness of a multifaceted, nutrition-based gene test (components assessed included major nutrients, food tolerances, food taste and preferences, and micronutrients) in changing health behaviors, followed by (2) a qualitative investigation that explored participants' experiences. The study recruited 55 healthy males and females (aged 35-55 years) randomized in a 2:1 ratio, where 36 received the intervention (gene test results plus an integrated and personalized nutrition report) and 19 were assigned to the control group (gene test results report emailed). The primary outcome measures included changes in diet (nutrients, healthy eating index), changes in measures on General Self-efficacy and Health-Related Quality of Life scales, and anthropometrics (body mass index, waist-to-hip ratio) measured at baseline, post intervention (3 and 6 weeks), and the final visit (week 9 post intervention). Of the 478 individuals who expressed interest, 180 were invited (37.7%, 180/478) and completed the eligibility screening questionnaire; 73 of the 180 invited individuals (40.5%) were deemed eligible. Of the 73 individuals who were deemed to be eligible, 58 completed the baseline health questionnaire and food records (79%). Of these 58 individuals, 3 were excluded either

  17. A Comparison of Methods of Vertical Equating.

    Science.gov (United States)

    Loyd, Brenda H.; Hoover, H. D.

    Rasch model vertical equating procedures were applied to three mathematics computation tests for grades six, seven, and eight. Each level of the test was composed of 45 items in three sets of 15 items, arranged in such a way that tests for adjacent grades had two sets (30 items) in common, and the sixth and eighth grades had 15 items in common. In…

  18. A comparison of two instructional methods for drawing Lewis Structures

    Science.gov (United States)

    Terhune, Kari

    Two instructional methods for teaching Lewis structures were compared: the Direct Octet Rule Method (DORM) and the Commonly Accepted Method (CAM). The DORM gives the number of bonds and the number of nonbonding electrons immediately, while the CAM involves moving electron pairs from nonbonding to bonding positions, if necessary. The research question was as follows: Will high school chemistry students draw more accurate Lewis structures using the DORM or the CAM? Students in Regular Chemistry 1 (N = 23), Honors Chemistry 1 (N = 51) and Chemistry 2 (N = 15) at an urban high school were the study participants. An identical pretest and posttest was given before and after instruction. Students were given two days of instruction with either the DORM (N = 45), the treatment method, or the CAM (N = 44), the control. After the posttest, 15 students were interviewed using a semistructured interview process. The pretest/posttest consisted of 23 numerical response questions and 2 to 6 free response questions that were graded using a rubric. A two-way ANOVA showed a significant interaction effect between the groups and the methods, F (1, 70) = 10.960, p = 0.001. Post hoc comparisons using the Bonferroni pairwise comparison showed that Regular Chemistry 1 students demonstrated larger gain scores when they had been taught the CAM (mean difference = 3.275, SE = 1.324). Honors Chemistry 1 students performed better with the DORM, perhaps due to better math skills, enhanced working memory, and better metacognitive skills. Regular Chemistry 1 students performed better with the CAM, perhaps because it is more visual. Teachers may want to use the CAM or a direct-pairing method to introduce the topic and use the DORM in advanced classes when a correct structure is needed quickly.

  19. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required, which can be done by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume (p=0.93) and root canal surface area (p=0.79) measurements. Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
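
The manufacturer's "Automatic Threshold Tool" is not documented in this record; as a stand-in illustration of automatic threshold selection, here is Otsu's method, a standard algorithm that picks the gray level maximizing between-class variance of a histogram (the toy histogram is hypothetical):

```python
# Illustration: automatic threshold selection with Otsu's method.
# This is a generic algorithm, not necessarily the scanner software's.

def otsu_threshold(histogram):
    """histogram: pixel counts per gray level. Returns the level t that
    maximizes between-class variance (levels <= t form class 0)."""
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t, h in enumerate(histogram[:-1]):
        w0 += h                     # class-0 weight
        cum += t * h                # class-0 intensity sum
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum / w0, (total_sum - cum) / w1   # class means
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# bimodal toy histogram: background peak near level 1, object near level 6
hist = [10, 40, 10, 0, 0, 8, 30, 8]
t = otsu_threshold(hist)
```

Because the result depends only on the histogram, repeated runs give the same threshold, which is the reproducibility advantage the abstract attributes to automatic methods.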

  20. Comparison of three different methods for assessing in situ friction velocity: A case study from Loch Etive, Scotland

    DEFF Research Database (Denmark)

    Inoue, Tetsunori; Glud, Ronnie N.; Stahl, Henrik

    2011-01-01

    Three approaches, Eddy Correlation (EC), Turbulent Kinetic Energy (TKE), and Inertial Dissipation (ID) methods, were compared to evaluate their potential for estimation of friction velocity in a Scottish sea loch. As an independent assessment parameter, we used simultaneous O2 recordings of the d...
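
Of the three approaches, the eddy-correlation estimate is the most direct: u* = (⟨u'w'⟩² + ⟨v'w'⟩²)^(1/4), computed from velocity fluctuations about the record means. A sketch on synthetic velocity records with an imposed covariance (the TKE and ID methods need additional empirical constants and are omitted):

```python
import random

# Sketch: friction velocity u* via the eddy-correlation (EC) method,
# u* = (<u'w'>**2 + <v'w'>**2) ** 0.25, on synthetic velocity records.

def friction_velocity(u, v, w):
    n = len(w)
    mu, mv, mw = sum(u) / n, sum(v) / n, sum(w) / n
    uw = sum((ui - mu) * (wi - mw) for ui, wi in zip(u, w)) / n
    vw = sum((vi - mv) * (wi - mw) for vi, wi in zip(v, w)) / n
    return (uw**2 + vw**2) ** 0.25

random.seed(1)
n = 20000
w_ = [random.gauss(0, 0.01) for _ in range(n)]                 # vertical velocity
u_ = [0.10 - 0.5 * wi + random.gauss(0, 0.01) for wi in w_]    # <u'w'> ~ -5e-5
v_ = [random.gauss(0, 0.01) for _ in range(n)]
ustar = friction_velocity(u_, v_, w_)   # close to (5e-5)**0.5 ~ 7.1e-3 m/s
```

In practice the averaging window, coordinate rotation, and despiking all matter; this sketch only shows the covariance core of the EC method.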

  1. Differential and difference equations a comparison of methods of solution

    CERN Document Server

    Maximon, Leonard C

    2016-01-01

    This book, intended for researchers and graduate students in physics, applied mathematics and engineering, presents a detailed comparison of the important methods of solution for linear differential and difference equations - variation of constants, reduction of order, Laplace transforms and generating functions - bringing out the similarities as well as the significant differences in the respective analyses. Equations of arbitrary order are studied, followed by a detailed analysis for equations of first and second order. Equations with polynomial coefficients are considered and explicit solutions for equations with linear coefficients are given, showing significant differences in the functional form of solutions of differential equations from those of difference equations. An alternative method of solution involving transformation of both the dependent and independent variables is given for both differential and difference equations. A comprehensive, detailed treatment of Green’s functions and the associat...
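
The book's central contrast, that solutions of differential and difference equations take different functional forms, is visible already in the simplest linear case: y' = a·y has solution y(t) = y₀·e^(at), while its difference analogue y_{n+1} = (1 + a)·y_n has solution y_n = y₀·(1 + a)^n. A small numerical illustration:

```python
import math

# Sketch: the simplest linear ODE vs its difference-equation analogue.
# Exponential versus power-of-(1+a) solutions, agreeing for small a.

def ode_solution(y0, a, t):
    """Solution of y' = a*y at time t."""
    return y0 * math.exp(a * t)

def difference_solution(y0, a, n):
    """Solution of y_{n+1} = (1 + a) * y_n after n steps."""
    return y0 * (1 + a) ** n

y_ode = ode_solution(1.0, 0.01, 100)          # e**1 ~ 2.71828
y_diff = difference_solution(1.0, 0.01, 100)  # 1.01**100 ~ 2.70481
```

The small residual gap (e versus (1 + a)^(1/a) raised appropriately) is exactly the kind of structural difference between the two settings that the book's side-by-side treatment makes precise.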

  2. Characterization and stability studies of a novel liposomal cyclosporin A prepared using the supercritical fluid method: comparison with the modified conventional Bangham method

    Directory of Open Access Journals (Sweden)

    Karn PR

    2013-01-01

    Full Text Available Pankaj Ranjan Karn,1,3 Wonkyung Cho,1,3 Hee-Jun Park,1,3 Jeong-Sook Park,3 Sung-Joo Hwang1,2; 1Yonsei Institute of Pharmaceutical Sciences, Yonsei University, Yeonsu-gu, Incheon, Republic of Korea; 2College of Pharmacy, Yonsei University, Yeonsu-gu, Incheon, Republic of Korea; 3College of Pharmacy, Chungnam National University, Yuseong-gu, Daejeon, Republic of Korea. Abstract: A novel method to prepare cyclosporin A encapsulated liposomes was introduced using supercritical fluid of carbon dioxide (SCF-CO2) as an antisolvent. To investigate the strength of the newly developed SCF-CO2 method compared with the modified conventional Bangham method, the particle size, zeta potential, and polydispersity index (PDI) of both liposomal formulations were characterized and compared. In addition, entrapment efficiency (EE) and drug loading (DL) characteristics were analyzed by reversed-phase high-performance liquid chromatography. Significantly larger particle size and PDI were revealed from the conventional method, while EE (%) and DL (%) did not exhibit any significant differences. The SCF-CO2 liposomes were found to be relatively smaller, multilamellar, and spherical with a smoother surface as determined by transmission electron microscopy. SCF-CO2 liposomes showed no significant differences in their particle size and PDI after more than 3 months, whereas conventional liposomes exhibited significant changes in their particle size. The initial yield (%), EE (%), and DL (%) of SCF-CO2 liposomes and conventional liposomes were 90.98 ± 2.94, 92.20 ± 1.36, 20.99 ± 0.84 and 90.72 ± 2.83, 90.24 ± 1.37, 20.47 ± 0.94, respectively, which changed after 14 weeks to 86.65 ± 0.30, 87.63 ± 0.72, 18.98 ± 0.22 and 75.04 ± 8.80, 84.59 ± 5.13, 15.94 ± 2.80, respectively. Therefore, the newly developed SCF-CO2 method could be a better alternative to the conventional method and may provide a promising approach for large-scale production of liposomes.

  3. Cryopreservation of human oocytes, zygotes, embryos and blastocysts: A comparison study between slow freezing and ultra-rapid (vitrification) methods

    Directory of Open Access Journals (Sweden)

    Tahani Al-Azawi

    2013-12-01

    Full Text Available Preservation of female genetics is currently done primarily by means of oocyte and embryo cryopreservation. The field has seen much progress during its four-decade history, progress driven predominantly by research in humans. Preservation can also be achieved by cryopreserving ovarian tissue or an entire ovary for transplantation, followed by oocyte harvesting or natural fertilization. Two basic cryopreservation techniques rule the field: slow-rate freezing, the first to be developed, and vitrification, which has gained a foothold in recent years. The slow-rate freezing method as previously reported had low survival and pregnancy rates, along with a high cost of cryopreservation. Although there are some recent data indicating better survival rates, cryopreservation by the slow freezing method has started to be discontinued. Vitrification of human embryos, especially at early stages, became a more popular alternative to the slow-rate freezing method due to reported comparable clinical and laboratory outcomes. In addition, vitrification is relatively simple, requires no expensive programmable freezing equipment, and uses a small amount of liquid nitrogen for freezing. Moreover, oocyte cryopreservation using vitrification has been proposed as a solution to maintain women's fertility by harvesting and freezing their oocytes at the optimal time. The aim of this research is to compare slow freezing and vitrification in the cryopreservation of oocytes, zygotes, embryos and blastocysts during the last twelve years. Given the many controversies in this regard, we tried to arrive at a clear picture of the subject and of the best technique to use.

  4. A comparison study of Agrobacterium-mediated transformation methods for root-specific promoter analysis in soybean.

    Science.gov (United States)

    Li, Caifeng; Zhang, Haiyan; Wang, Xiurong; Liao, Hong

    2014-11-01

    Both in vitro and in vivo hairy root transformation systems could not replace whole plant transformation for promoter analysis of root-specific and low-P-induced genes in soybean. An efficient genetic transformation system is crucial for promoter analysis in plants. Agrobacterium-mediated transformation is the most popular method to produce transgenic hairy roots or plants. In the present study, we first compared two different Agrobacterium rhizogenes-mediated hairy root transformation methods, using either the constitutive CaMV35S promoter or the promoters of the root-preferential genes GmEXPB2 and GmPAP21 in soybean, and found that the efficiency of in vitro hairy root transformation was significantly higher than that of in vivo transformation. We then compared the Agrobacterium rhizogenes-mediated hairy root and Agrobacterium tumefaciens-mediated whole plant transformation systems. The results showed that the low-phosphorus (P)-inducible GmEXPB2 and GmPAP21 promoters could not drive increased expression of the GUS reporter gene under low-P stress in either in vivo or in vitro transgenic hairy roots. Conversely, GUS activity driven by the GmPAP21 promoter was significantly higher at low P than at high P in whole plant transformation. Therefore, neither in vitro nor in vivo hairy root transformation systems can replace whole plant transformation for promoter analysis of root-specific and low-P-induced genes in soybean.

  5. Removal of phenol from water : a comparison of energization methods

    NARCIS (Netherlands)

    Grabowski, L.R.; Veldhuizen, van E.M.; Rutgers, W.R.

    2005-01-01

    Direct electrical energization methods for removal of persistent substances from water are under investigation in the framework of the ytriD-project. The emphasis of the first stage of the project is the energy efficiency. A comparison is made between a batch reactor with a thin layer of water and

  6. A comparison of non-invasive versus invasive methods of ...

    African Journals Online (AJOL)

    Puneet Khanna

    for Hb estimation from the laboratory [total haemoglobin mass (tHb)] and arterial blood gas (ABG) machine (aHb), using ... making decisions for blood transfusions based on these results.

  7. Input-Output Analysis for Sustainability by Using DEA Method: A Comparison Study between European and Asian Countries

    Directory of Open Access Journals (Sweden)

    Wen-Hsien Tsai

    2016-11-01

    Full Text Available Policymakers around the world are confronted with the challenge of balancing economic development against environmental friendliness, which entails a robust set of measures in energy efficiency and environmental protection. The increasing complexity of these issues has imposed pressure on the Asian countries that have been acting as global factories. This paper proposes a meta-frontier slacks-based measure (SBM) data envelopment analysis (DEA) model, with the hope of helping policymakers clarify the relationship between labor force, energy consumption, government expenditures, GDP, and CO2 emissions. Clarification of the causal relationship can serve as a template for policy decisions and ease concerns regarding the potential adverse effects of carbon reduction and energy efficiency on the economy. The results show: (1) developing countries should establish their own climate change governance and policy frameworks; (2) developed economies should seek to lower carbon emissions; (3) energy policies play a pivotal role in energy efficiency improvement; (4) top-down efforts are critical for the success of carbon reduction policies; (5) learning from the success of developed countries helps to improve the effectiveness of energy policies; (6) environmental policies should be formulated, and new production technologies, pollution prevention measures, and treatment methods should be introduced; (7) governments are advised to build long-term independent management institutions to promote energy cooperation and exchange.

  8. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

    Full Text Available IGARSS 2015, Milan, Italy, 26-31 July 2015. B.P. Salmon, W. Kleynhans, C.P. Schwegmann and J.C. Olivier; 1School of Engineering and ICT, University of Tasmania, Australia; 2...
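
This record does not include the paper's proposed comparison procedure, but as background, two standard summary statistics derived from a confusion matrix (overall accuracy and Cohen's kappa) can be sketched as follows; the matrix below is hypothetical, with rows as reference classes and columns as predicted classes:

```python
# Sketch: two standard confusion-matrix summaries often used when
# comparing classification methods. Matrix values are hypothetical.

def accuracy(cm):
    """Fraction of samples on the matrix diagonal."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohens_kappa(cm):
    """Chance-corrected agreement: (p_observed - p_expected) / (1 - p_expected)."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm)
                   for i in range(len(cm))) / n**2
    return (observed - expected) / (1 - expected)

cm = [[40, 10],
      [5, 45]]
acc = accuracy(cm)        # (40 + 45) / 100 = 0.85
kappa = cohens_kappa(cm)  # 0.7 here: agreement well above chance
```

Kappa discounts agreement expected by chance from the class marginals, which is why two methods with equal accuracy can still rank differently under it.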

  9. Comparison of conventional versus Hybrid knife peroral endoscopic myotomy methods for esophageal achalasia: a case-control study.

    Science.gov (United States)

    Tang, Xiaowei; Gong, Wei; Deng, Zhiliang; Zhou, Jieqiong; Ren, Yutang; Zhang, Qiang; Chen, Zhenyu; Jiang, Bo

    2016-01-01

    Peroral endoscopic myotomy (POEM) has been developed to treat achalasia as a novel, less invasive modality. We aimed to compare the efficacy and safety of a conventional knife versus the Hybrid knife (HK) during the POEM procedure. Between June 2012 and July 2014, 31 patients underwent POEM using the HK in our department (HK group), and 36 patients underwent POEM using the conventional method (injection needle and triangular tip [TT] knife, TT group). Procedure-related parameters, symptom relief, and adverse events were compared between the two groups. There were no significant differences in the age, sex and other baseline characteristics between the two groups. The mean procedural time was significantly shorter in the HK group than the TT group (53.0 ± 17.2 vs. 67.6 ± 28.4 min, p = 0.015). The mean frequency of device exchange was 4.7 ± 1.7 in the HK group and 10.9 ± 1.8 in the TT group (p = 0.000). No serious adverse events occurred postoperatively in either group. At one-year follow-up, a total of 94% treatment success was achieved in all patients (93.5% in the HK group and 94.4% in the TT group, p = 0.877). HK in POEM can shorten the procedural time and achieve similar treatment success compared to the conventional TT knife.

  10. A comparison of three time-domain anomaly detection methods

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change in the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).
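
As a concrete illustration of the third method, a minimal SPRT sketch is given below; the zero-mean Gaussian residual model, the in- and out-of-control standard deviations, the error rates, and the toy residual sequence are illustrative assumptions, not parameters from the paper.

```python
import math

def sprt_alarm(residuals, sigma0=1.0, sigma1=2.0, alpha=0.01, beta=0.01):
    """One-sided SPRT for a change in the standard deviation of zero-mean
    Gaussian residuals: accumulate the log-likelihood ratio of H1
    (sigma = sigma1) against H0 (sigma = sigma0); alarm at the upper
    boundary, restart from zero at the lower one (repeated testing, as in
    on-line surveillance). Returns the index of the first alarm, or None."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 -> alarm
    lower = math.log(beta / (1.0 - alpha))   # accept H0 -> restart from 0
    const = math.log(sigma0 / sigma1)        # per-sample constant term
    coef = 0.5 * (1.0 / sigma0**2 - 1.0 / sigma1**2)
    llr = 0.0
    for i, x in enumerate(residuals):
        llr += const + coef * x * x
        if llr >= upper:
            return i
        if llr <= lower:
            llr = 0.0
    return None

# Deterministic toy residuals: unit amplitude, then amplitude 2.5 from
# sample 200 on; the alarm fires a few samples after the change.
residuals = [(-1.0) ** i for i in range(200)] + [2.5 * (-1.0) ** i for i in range(200)]
print(sprt_alarm(residuals))  # → 203
```

Under H1 the expected log-likelihood increment is positive and under H0 it is negative, which is what makes the cumulative ratio drift toward the alarm boundary only after the change.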

  11. A comparison of three time-domain anomaly detection methods

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change in the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)

  12. Comparison of thoracolumbar motion produced by manual and Jackson-table-turning methods. Study of a cadaveric instability model.

    Science.gov (United States)

    DiPaola, Christian P; DiPaola, Matthew J; Conrad, Bryan P; Horodyski, MaryBeth; Del Rossi, Gianluca; Sawers, Andrew; Rechtine, Glenn R

    2008-08-01

    Patients who have sustained a spinal cord injury remain at risk for further neurologic deterioration until the spine is adequately stabilized. To our knowledge, no study has previously addressed the effects of different bed-to-operating-room-table transfer techniques on thoracolumbar spinal motion in an instability model. We hypothesized that the conventional logroll technique used to transfer patients from a supine position to a prone position on the operating room table has the potential to confer significantly more motion to the unstable thoracolumbar spine than the Jackson technique. Three-column instability was surgically created at the L1 level in seven cadavers. Two protocols were tested. The manual technique entailed performing a standard logroll of a supine cadaver to a prone position on an operating room Jackson table. The Jackson technique involved sliding the supine cadaver to the Jackson table, securing it to the table, and then rotating it into a prone position. An electromagnetic tracking device measured motion, i.e., angular motion (flexion-extension, lateral bending, and axial rotation) and linear translation (axial, medial-lateral, and anterior-posterior), between T12 and L2. The logroll technique created significantly more motion than the Jackson technique as measured with all six parameters. Manual logroll transfers produced an average of 13.8 degrees to 18.1 degrees of maximum angular displacement and 16.6 to 28.3 mm of maximum linear translation. The Jackson technique resulted in an average of 3.1 degrees to 5.8 degrees of maximum angular displacement. Performing the Jackson turn requires approximately half as many people as a manual logroll. This study suggests that the Jackson technique should be considered for supine-to-prone transfer of patients with known or suspected instability of the thoracolumbar spine.

  13. Comparison of the xenon-133 washout method with the microsphere method in dog brain perfusion studies

    International Nuclear Information System (INIS)

    Heikkilä, J.; Kettunen, R.; Ahonen, A.

    1982-01-01

    The validity of the xenon-washout method for estimating regional cerebral blood flow was tested against a radioactive microsphere method in anaesthetized dogs. The two-compartment model appeared poorly suited to cerebral perfusion studies by the xenon-washout method, although bi-exponential analysis of the washout curves gave perfusion values that correlated with the microsphere method, albeit depending on the calculation method.

  14. Monitoring uterine activity during labor: a comparison of three methods

    Science.gov (United States)

    EULIANO, Tammy Y.; NGUYEN, Minh Tam; DARMANJIAN, Shalom; MCGORRAY, Susan P.; EULIANO, Neil; ONKALA, Allison; GREGG, Anthony R.

    2012-01-01

    Objective Tocodynamometry (Toco—strain gauge technology) provides contraction frequency and approximate duration of labor contractions, but suffers frequent signal dropout necessitating re-positioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information, but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all three methods of contraction detection simultaneously in laboring women. Study Design Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG and all three curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of one or more of the devices (n = 12) or inadequate data collection duration (n = 2). Results In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17 compared to 0.69 ± 0.27 for Toco. Unlike Toco, EHG was not significantly affected by obesity. Conclusion Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable non-invasive alternative regardless of body habitus. PMID:23122926

  15. A comparison of methods to determine tannin acyl hydrolase activity

    Directory of Open Access Journals (Sweden)

    Cristóbal Aguilar

    1999-01-01

    Full Text Available Six methods to determine the activity of tannase produced by Aspergillus niger Aa-20 on polyurethane foam by solid-state fermentation were tested and compared: two titrimetric techniques, three spectrophotometric methods and one HPLC assay. All methods assayed enabled the measurement of extracellular tannase activity; however, only five were useful for evaluating intracellular tannase activity. Studies on the effect of pH on tannase extraction demonstrated that tannase activity was considerably under-estimated when extraction was carried out at pH values below 5.5 or above 6.0. Results showed that the HPLC technique and the modified Bajpai and Patil method presented several advantages in comparison with the other methods tested.

  16. A Comparison Study between Two MPPT Control Methods for a Large Variable-Speed Wind Turbine under Different Wind Speed Characteristics

    Directory of Open Access Journals (Sweden)

    Dongran Song

    2017-05-01

    Full Text Available Variable-speed wind turbines (VSWTs) usually adopt a maximum power point tracking (MPPT) method to optimize energy capture performance. Nevertheless, the performance obtained with different MPPT methods may be affected by the wind turbine's (WT's) inertia and by wind speed characteristics, and this needs to be clarified. In this paper, the tip speed ratio (TSR) and optimal torque (OT) methods are investigated in terms of their performance under different wind speed characteristics on a 1.5 MW wind turbine model. To this end, the TSR control method, based on an effective wind speed estimator, and the OT control method are first presented. Then, their performance is investigated and compared through simulation tests under different wind speeds using Bladed software. Comparison results show that the TSR control method captures slightly more wind energy, at the cost of higher component loads, than the OT method under all wind conditions. Furthermore, both control methods present similar trends of power reduction related to mean wind speed and turbulence intensity. From the obtained results, we demonstrate that, to further improve the MPPT capability of large VSWTs, advanced control methods using wind speed prediction information need to be addressed.
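
The two control laws compared above can be sketched in a few lines; the turbine parameters below are illustrative assumptions, not the values of the paper's 1.5 MW model.

```python
import math

# Illustrative turbine parameters (assumptions, not the paper's 1.5 MW model)
RHO = 1.225          # air density, kg/m^3
R = 38.5             # rotor radius, m
CP_MAX = 0.48        # assumed maximum power coefficient
LAMBDA_OPT = 7.5     # assumed optimal tip speed ratio

# OT gain: tau = K_OPT * omega^2 holds the turbine at Cp_max in steady state
K_OPT = 0.5 * RHO * math.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def ot_torque(omega):
    """Optimal-torque (OT) MPPT: generator torque reference ~ omega^2."""
    return K_OPT * omega * omega

def tsr_speed_ref(wind_speed_est):
    """TSR MPPT: rotor speed reference tracking the optimal tip speed
    ratio, given an estimated effective wind speed."""
    return LAMBDA_OPT * wind_speed_est / R

# Consistency check: at the optimal operating point both laws agree with
# the aerodynamic torque 0.5*rho*pi*R^2*Cp_max*v^3 / omega.
v = 12.0
omega_star = tsr_speed_ref(v)
aero_torque = 0.5 * RHO * math.pi * R**2 * CP_MAX * v**3 / omega_star
print(abs(ot_torque(omega_star) - aero_torque) < 1e-6 * aero_torque)  # → True
```

The OT law needs no wind measurement at all, which is why the TSR method's slight energy gain comes at the cost of requiring a wind speed estimator.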

  17. A Simulation-Based Study on the Comparison of Statistical and Time Series Forecasting Methods for Early Detection of Infectious Disease Outbreaks.

    Science.gov (United States)

    Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho

    2018-05-11

    Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based on 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, as well as real-world daily and weekly data on diarrheal infection. The algorithms were evaluated using different metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). The comparison results showed better performance for the EARS C3 method than for the other algorithms regardless of the characteristics of the underlying time series data, although Holt-Winters showed better performance when the baseline frequency and the dispersion parameter were less than 1.5 and 2, respectively.
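
For intuition, a simplified C2-style EARS rule is sketched below; the 7-day baseline, 2-day guard band, z-score threshold and the standard-deviation floor are common choices assumed here for illustration, not parameters of the KCDC system or of the study.

```python
from statistics import mean, stdev

def ears_c2_alerts(counts, baseline=7, gap=2, threshold=3.0):
    """Simplified EARS C2-style detector: flag day t when its count exceeds
    the mean of a 7-day baseline (lagged by a 2-day guard band) by more
    than `threshold` baseline standard deviations."""
    alerts = []
    for t in range(baseline + gap, len(counts)):
        window = counts[t - gap - baseline : t - gap]
        mu = mean(window)
        sd = max(stdev(window), 0.5)   # floor to avoid divide-by-zero
        if (counts[t] - mu) / sd > threshold:
            alerts.append(t)
    return alerts

# A stable series of daily counts with one injected spike on day 10
series = [10, 12, 9, 11, 10, 13, 11, 10, 12, 11, 30, 11, 10]
print(ears_c2_alerts(series))  # → [10]
```

The guard band keeps the most recent days out of the baseline, so a slowly ramping outbreak does not inflate its own reference statistics.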

  18. Monitoring uterine activity during labor: a comparison of 3 methods.

    Science.gov (United States)

    Euliano, Tammy Y; Nguyen, Minh Tam; Darmanjian, Shalom; McGorray, Susan P; Euliano, Neil; Onkala, Allison; Gregg, Anthony R

    2013-01-01

    Tocodynamometry (Toco; strain gauge technology) provides contraction frequency and approximate duration of labor contractions but suffers frequent signal dropout, necessitating repositioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all 3 methods of contraction detection simultaneously in laboring women. Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG, and all 3 curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of 1 or more of the devices (n = 12) or inadequate data collection duration (n = 2). In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17 compared with 0.69 ± 0.27 for Toco. Unlike Toco, EHG was not significantly affected by obesity. Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable noninvasive alternative, regardless of body habitus. Copyright © 2013 Mosby, Inc. All rights reserved.

  19. Comparison of methods for quantifying reef ecosystem services: a case study mapping services for St. Croix, USVI

    Science.gov (United States)

    In coastal communities, stresses derived from landuse changes, climate change, and serial over-exploitation can have major effects on coral reefs, which support multibillion dollar fishing and tourism industries vital to regional economies. A key challenge in evaluating coastal a...

  20. A Case Study from Southwest Germany. Shifting of Groundwater Age During One Year Pumping Test and Comparison of Isotope Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, G. D.; Eichinger, L.; Heidinger, M.; Schneider, J. [Hydroisotop GmbH, Schweitenkirchen (Germany); Funk, E. [Buero fuer Hydrogeologie, Staufen (Germany)

    2013-07-15

    A two-in-one well in southwest Germany, separately tapping aquifers in the Muschelkalk and Keuper formations, was tested in a one-year pumping test. The waters were continuously analysed for chemical and isotopic composition (major ions, ³H, ¹⁸O, ²H, ¹³C, ¹⁴C, ⁸⁵Kr) and trace gases (CFCs, SF₆). The analytical results of ³H and ⁸⁵Kr showed a shift in the composition from 25% young, ³H-bearing water to a proportion of 50% young water after half a year of pumping. The residence time of the young water, about 10 to 20 years, remained the same. The shift is also visible in the increasing contents of nitrate and chloride. Though the analytical results of SF₆ showed the same shift, SF₆ - most probably influenced by crystalline gravel - indicates a residence time of the young water of less than one year. The CFCs, on the other hand, point to lower proportions of young water, as they are influenced by degradation processes and/or adsorption. Though both aquifers are effectively separated from each other, the same shifting of age structure can be observed. (author)

  1. Comparison between three different LCIA methods for aquatic ecotoxicity and a product Environmental Risk Assessment – Insights from a Detergent Case Study within OMNIITOX

    DEFF Research Database (Denmark)

    Pant, Rana; Van Hoof, Geert; Feijtel, Tom

    2004-01-01

    Material and Methods. The LCIA has been conducted with EDIP97 (chronic aquatic ecotoxicity) [1], USES-LCA (freshwater and marine water aquatic ecotoxicity, sometimes referred to as CML2001) [2, 3] and IMPACT 2002 (covering freshwater aquatic ecotoxicity...) with results from an Environmental Risk Assessment (ERA). ... set of physico-chemical and toxicological effect data to enable a better comparison of the methodological differences. For the same reason, the system boundaries were kept the same in all cases, focusing on emissions into water at the disposal stage. Results and Discussion. Significant differences... ecotoxicity is not satisfactory, unless explicit reasons for the differences are identifiable. This can hamper practical decision support, as LCA practitioners usually will not be in a position to choose the 'right' LCIA method for their specific case. This puts a challenge to the entire OMNIITOX project...

  2. Optimization and Comparison of ESI and APCI LC-MS/MS Methods: A Case Study of Irgarol 1051, Diuron, and their Degradation Products in Environmental Samples

    Science.gov (United States)

    Maragou, Niki C.; Thomaidis, Nikolaos S.; Koupparis, Michael A.

    2011-10-01

    A systematic and detailed optimization strategy for the development of atmospheric pressure ionization (API) LC-MS/MS methods for the determination of Irgarol 1051, Diuron, and their degradation products (M1, DCPMU, DCPU, and DCA) in water, sediment, and mussel is described. Experimental design was applied for the optimization of the ion source parameters. ESI and APCI were compared in positive- and negative-ion mode, and the effect of the mobile phase on ionization was studied for both techniques. Special attention was paid to the ionization of DCA, which presents particular difficulty in API techniques. Satisfactory ionization of this small molecule is achieved only with ESI in positive-ion mode using acetonitrile in the mobile phase; the instrumental detection limit is 0.11 ng/mL. Signal suppression was qualitatively estimated by using purified and non-purified samples. The sample preparation for sediments and mussels is direct and simple, comprising only solvent extraction. Mean recoveries ranged from 71% to 110%, and the corresponding RSDs ranged between 4.1% and 14%. The method limits of detection ranged between 0.6 and 3.5 ng/g for sediment and mussel and from 1.3 to 1.8 ng/L for sea water. The method was applied to sea water, marine sediment, and mussels obtained from marinas in Attiki, Greece. Ion ratio confirmation was used for the identification of the compounds.

  3. A review and comparison of methods for recreating individual patient data from published Kaplan-Meier survival curves for economic evaluations: a simulation study.

    Science.gov (United States)

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers for conducting economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods, 1) the least squares method and 2) the graphical method; and two recently proposed methods, by 3) Hoyle and Henley and 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their ability to estimate mean survival through a simulation study. A number of scenarios were developed, comprising combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, greater bias was identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and more accurate uncertainty estimates than the Hoyle and Henley method. The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for the fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased than the Hoyle and Henley method.
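
The "traditional" least squares approach can be sketched for a Weibull model as follows; the linearization and the synthetic curve points are illustrative, not data from the study.

```python
import math

def fit_weibull_ls(times, surv):
    """Least-squares fit of a Weibull survival curve S(t) = exp(-(t/lam)^k)
    to digitized Kaplan-Meier points, via the linearization
    ln(-ln S) = k*ln t - k*ln lam (the 'traditional' least squares method)."""
    xs = [math.log(t) for t, s in zip(times, surv) if 0 < s < 1]
    ys = [math.log(-math.log(s)) for s in surv if 0 < s < 1]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    k = slope                       # shape parameter
    lam = math.exp(mx - my / k)     # scale parameter, from the intercept
    return k, lam

# Synthetic check: noise-free points generated from k = 1.5, lam = 10
true_k, true_lam = 1.5, 10.0
ts = [2, 4, 6, 8, 10, 12, 15]
ss = [math.exp(-(t / true_lam) ** true_k) for t in ts]
k, lam = fit_weibull_ls(ts, ss)
print(round(k, 3), round(lam, 3))  # → 1.5 10.0
```

For economic evaluations, the mean survival time then follows analytically as lam * Γ(1 + 1/k); the study's point is that this curve-fitting shortcut ignores the numbers at risk, which the Hoyle and Henley and Guyot et al. methods exploit.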

  4. A comparison of surveillance methods for small incidence rates

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Woodall, William H.; Reynolds, Marion R.

    2008-05-15

    A number of methods have been proposed to detect an increasing shift in the incidence rate of a rare health event, such as a congenital malformation. Among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the performance of the Sets method and its modifications to the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criterion. We used the steady-state average run length to measure chart performance instead of the average run length, which was used in nearly all previous comparisons involving the Sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state average run length performance than the Sets method and its modifications for the extensive number of cases considered.
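
A minimal sketch of the Bernoulli CUSUM chart described above; the in-control and out-of-control rates, the control limit h, and the toy trial sequence are illustrative assumptions, not the design parameters used in the paper.

```python
import math

def bernoulli_cusum(trials, p0, p1, h):
    """Bernoulli CUSUM chart: each trial is 0/1 (event absent/present).
    Accumulates the log-likelihood ratio of the out-of-control rate p1
    against the in-control rate p0, reflecting at zero; returns the index
    of the first signal (statistic >= h), or None."""
    win = math.log(p1 / p0)                # increment for an event
    lose = math.log((1 - p1) / (1 - p0))   # small negative drift otherwise
    c = 0.0
    for i, x in enumerate(trials):
        c = max(0.0, c + (win if x else lose))
        if c >= h:
            return i
    return None

# In-control rate 1/1000, shift to 4/1000; three widely spaced events do
# not signal, but a later burst of events does.
trials = [0] * 5000
for j in (100, 1200, 2600):      # isolated background events
    trials[j] = 1
for j in (4000, 4040, 4100, 4160):   # clustered events after a rate shift
    trials[j] = 1
print(bernoulli_cusum(trials, p0=0.001, p1=0.004, h=5.0))  # → 4160
```

Because the statistic reflects at zero, isolated events decay away between occurrences, while closely spaced events accumulate toward the limit h, which is exactly the rare-event behaviour the chart is designed to monitor.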

  5. A comparison of ancestral state reconstruction methods for quantitative characters.

    Science.gov (United States)

    Royer-Carenzi, Manuela; Didier, Gilles

    2016-09-07

    Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.
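
For intuition about the maximum likelihood method compared above: under Brownian motion on a star phylogeny (a special tree shape assumed here for illustration, not the general algorithm of the paper), the ML estimate of the root state reduces to a branch-length-weighted mean of the tip values.

```python
def bm_root_mle(tip_values, branch_lengths):
    """ML estimate of the root (ancestral) state under Brownian motion on a
    star phylogeny: weighted mean of the tip values with weights 1/v_i,
    so tips on short branches (low variance) are more informative."""
    weights = [1.0 / v for v in branch_lengths]
    return sum(w * x for w, x in zip(weights, tip_values)) / sum(weights)

# Equal branch lengths reduce to the plain mean of the tips:
print(bm_root_mle([1.0, 2.0, 6.0], [1.0, 1.0, 1.0]))  # → 3.0
# A tip on a short branch pulls the estimate toward itself:
print(bm_root_mle([1.0, 2.0, 6.0], [0.1, 1.0, 1.0]))  # → 1.5
```

On a resolved (non-star) tree the same idea is applied recursively over internal nodes, which is where ML, REML and generalized least squares under the Brownian model come to coincide, as the review notes.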

  6. A study of biodiversity using DSS method and seed storage protein comparison of populations in two species of Achillea L. in the west of Iran

    Directory of Open Access Journals (Sweden)

    Hajar Salehi

    2013-11-01

    Full Text Available Intraspecific and interspecific variations are the main reserves of biodiversity and both are important sources of speciation. On this basis, identifying and recognizing intra- and interspecific variations is important for the recognition of biodiversity. This research was carried out to study biodiversity and to compare, by electrophoresis, the seed storage proteins of populations of two species of the genus Achillea in Hamadan and Kurdistan provinces, using the method of determination of special stations (DSS). For this purpose, 12 and 9 special stations were selected for the species A. tenuifolia and A. biebresteinii, respectively, using the data published in the related flora. Seed storage proteins were extracted and then studied using electrophoresis (SDS-PAGE). In the survey of all special stations, 120 plant species were distinguished as associated species. The results of the floristic data for both species determined six distinctive groups, indicating the existence of intraspecific diversity in these species. The analysis of the ecological data and seed storage proteins for the two species was in accordance with the floristic data and likewise showed six distinctive groups. The presence of bands no. 4, 5, 8, 12 and 13 in the special stations of A. tenuifolia and of bands no. 14, 15 and 16 in the special stations of A. biebresteinii separated the populations of the two species into two quite distinct groups.

  7. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C, and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of transfer standard, with characteristics most suitable for dew-point comparisons, can be developed on the basis of the principles presented in this paper.

  8. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power, and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on a comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach extended to include internal finger reflections. The second-order models are based upon matrices originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented together with measured data from devices.

  9. A comparison of methods for evaluating structure during ship collisions

    International Nuclear Information System (INIS)

    Ammerman, D.J.; Daidola, J.C.

    1996-01-01

    A comparison is provided of the results of various methods for evaluating structure during a ship-to-ship collision. The baseline vessel utilized in the analyses is a 67.4-meter displacement hull struck by an identical vessel traveling at speeds ranging from 10 to 30 knots. The structural response of the struck vessel and the motion of both the struck and striking vessels are assessed by finite element analysis. These results are then compared to predictions utilizing the "Tanker Structural Analysis for Minor Collisions" (TSAMC) Method, the Minorsky Method, and the Haywood Collision Process, and to full-scale tests. Consideration is given to the nature of structural deformation, absorbed energy, penetration, rigid body motion, and the virtual mass affecting the hydrodynamic response. Insights are provided with regard to the calibration of the finite element model, which was achievable by utilizing the more empirical analyses, and the extent to which the finite element analysis is able to simulate the entire collision event. 7 refs., 8 figs., 4 tabs

  10. A comparison of Nodal methods in neutron diffusion calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tavron, Barak [Israel Electric Company, Haifa (Israel) Nuclear Engineering Dept. Research and Development Div.

    1996-12-01

    The nuclear engineering department at IEC uses three neutron diffusion codes based on nodal methods for reactor analysis. The codes, GNOMER, ADMARC and NOXER, solve the neutron diffusion equation to obtain flux and power distributions in the core. The resulting flux distributions are used for the fuel cycle analysis and for fuel reload optimization. This work presents a comparison of the various nodal methods employed in the above codes. Nodal methods (also called coarse-mesh methods) have been designed to solve problems that contain relatively coarse areas of homogeneous composition. In the nodal method, the parts of the equation that describe the state in a homogeneous area are solved analytically while, subject to various assumptions and continuity requirements, a general solution is sought. Thus the efficiency of the method for this kind of problem is very high compared with the finite element and finite difference methods. On the other hand, the method yields only approximate information about the node vicinity (or coarse-mesh area, usually a fuel assembly of about 20 cm size). These characteristics make the nodal method suitable for fuel cycle analysis and reload optimization, which require many subsequent calculations of the flux and power distributions for the fuel assemblies while there is no need for the detailed distribution within an assembly. Where a detailed distribution within an assembly is needed, power reconstruction methods may be applied. However, the homogenization of fuel assembly properties required by the nodal method may cause difficulties when applied to fuel assemblies with many absorber rods, owing to the strong heterogeneity of neutron properties within the assembly. (author).

  11. Comparison of two methods of quantitation in human studies of biodistribution and radiation dosimetry

    International Nuclear Information System (INIS)

    Smith, T.

    1992-01-01

    A simple method of quantitating organ radioactivity content for dosimetry purposes, based on relationships between the organ count rate and the initial whole-body count rate, has been compared with a more rigorous method of absolute quantitation using a transmission scanning technique. Comparisons were on the basis of organ uptake (% administered activity) and resultant organ radiation doses (mGy/MBq) in 6 normal male volunteers given a 99mTc-labelled myocardial perfusion imaging agent intravenously at rest and following exercise. In these studies, estimates of individual organ uptakes by the simple method were in error by between +24 and -16% compared with the more accurate method. However, errors in organ dose values were somewhat less, and the effective dose was correct to within 3%. (Author)

  12. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  13. Comparison of Witczak NCHRP 1-40D & Hirsh dynamic modulus models based on different binder characterization methods: a case study

    Directory of Open Access Journals (Sweden)

    Khattab Ahmed M.

    2017-01-01

    Full Text Available The Pavement ME Design method considers the hot mix asphalt (HMA) dynamic modulus (E*) as the main mechanistic property that affects pavement performance. For HMA, E* can be determined directly by laboratory testing (level 1) or estimated using predictive equations (levels 2 and 3). Pavement ME Design introduced the NCHRP 1-40D model as the latest model for predicting E* when level 2 or 3 HMA inputs are used. This study used laboratory-measured E* data to compare the NCHRP 1-40D model with the Hirsh model. The comparison included an evaluation of the binder characterization level, as defined in Pavement ME Design, and its influence on the performance of these models. E* tests were conducted in the laboratory on 25 local mixes representing different road construction projects in the Kingdom of Saudi Arabia. The main tests for the mix binders were the Dynamic Shear Rheometer (DSR) and the Brookfield Rotational Viscometer (RV). Results showed that both models with level 3 binder data produced very similar accuracy; the highest accuracy and lowest bias for both models occurred with level 3 binder data. Finally, the accuracy of prediction and the level of bias for both models were found to be a function of the binder input level.

  14. A Simplified Version of the Fuzzy Decision Method and its Comparison with the Paraconsistent Decision Method

    Science.gov (United States)

    de Carvalho, Fábio Romeu; Abe, Jair Minoro

    2010-11-01

    Two recent non-classical logics have been used in decision making: fuzzy logic and paraconsistent annotated evidential logic Et. In this paper we present a simplified version of the fuzzy decision method and its comparison with the paraconsistent one. Paraconsistent annotated evidential logic Et, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications such as information technology, robotics, artificial intelligence, production engineering, decision making, etc. Intuitively, an Et logic formula is of the type p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) found in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian Unitary Square (CUS). This set, equipped with an order relation similar to that of the real numbers, forms a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It tries to systematize the study of knowledge, aiming mainly to study fuzzy knowledge (you don't know what it means) and distinguish it from imprecise knowledge (you know what it means, but you don't know its exact value). This logic is similar to the paraconsistent annotated one, since it attributes a numeric value (only one, not two) to each proposition (we can therefore say that it is a one-valued logic). This number translates the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X, characterized by the function f. For each element x∈X, we have y = f(x)∈[0, 1]. The number y is called the degree of pertinence of x in A. Decision making theories based on these logics have shown themselves to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study for a simplified
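    The two annotation styles discussed above can be sketched in a few lines. The derived quantities below (degree of certainty a - b and degree of contradiction a + b - 1) and the 0.5 threshold follow common usage in the paraconsistent annotated logic literature, not this paper specifically.

    ```python
    # Sketch of the two annotation styles: a paraconsistent pair (a, b) of
    # favorable/contrary evidence degrees versus a single fuzzy membership
    # degree. Thresholds and helper names are illustrative assumptions.

    def paraconsistent_state(a, b, threshold=0.5):
        """Classify an annotation (a, b) from the unit square [0,1] x [0,1]."""
        certainty = a - b            # > 0 favours truth, < 0 favours falsity
        contradiction = a + b - 1.0  # > 0: conflicting evidence; < 0: missing evidence
        if certainty >= threshold:
            return "true"
        if certainty <= -threshold:
            return "false"
        if contradiction >= threshold:
            return "inconsistent"
        if contradiction <= -threshold:
            return "indeterminate"
        return "undefined"

    def fuzzy_membership(x, f):
        """Fuzzy logic assigns a single degree of pertinence y = f(x) in [0, 1]."""
        return min(1.0, max(0.0, f(x)))
    ```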

  15. How to find non-dependent opiate users: a comparison of sampling methods in a field study of opium and heroin users

    NARCIS (Netherlands)

    Korf, D.J.; van Ginkel, P.; Benschop, A.

    2010-01-01

    Background/aim The first aim is to better understand the potentials and limitations of different sampling methods for reaching a specific, rarely studied population of drug users and for persuading them to take part in a multidisciplinary study. The second is to determine the extent to which these

  16. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and among the elements of logo recognition, the descriptor is vital. The performance of moments as powerful descriptors had not previously been discussed in terms of logo recognition, so it was unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we investigate the relations between moments and logos under different transforms, i.e., which moments are suited to logos with particular transforms. The open datasets from the University of Maryland are employed. The moment-based comparisons are carried out for logos with noise, rotation, scaling, and combined rotation and scaling.
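    As a hedged illustration of the descriptors being compared, central moments (which are invariant to translation, the first step toward the rotation- and scale-invariant moments the paper evaluates) can be computed from a grey-level grid as follows; the tiny binary "logo" grids in the example are hypothetical.

    ```python
    # Hedged illustration of moment descriptors: central moments mu_pq are
    # invariant to translation of the pattern within the image grid.

    def central_moment(img, p, q):
        """Central moment mu_pq of a 2-D grey-level grid (list of rows)."""
        m00 = sum(v for row in img for v in row)                    # total mass
        m10 = sum(x * v for row in img for x, v in enumerate(row))  # raw moment
        m01 = sum(y * v for y, row in enumerate(img) for v in row)  # raw moment
        xc, yc = m10 / m00, m01 / m00                               # centroid
        return sum((x - xc) ** p * (y - yc) ** q * v
                   for y, row in enumerate(img) for x, v in enumerate(row))
    ```

    Shifting a pattern inside the grid leaves every central moment unchanged, which is what makes them useful for matching logos regardless of placement.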

  17. Specific activity measurement of 64Cu: A comparison of methods

    International Nuclear Information System (INIS)

    Mastren, Tara; Guthrie, James; Eisenbeis, Paul; Voller, Tom; Mebrahtu, Efrem; Robertson, J. David; Lapi, Suzanne E.

    2014-01-01

    The effective specific activity of ⁶⁴Cu (amount of radioactivity per µmol of metal) is important in order to determine the purity of a particular ⁶⁴Cu lot and to assist in optimization of the purification process. Metal impurities can affect effective specific activity, and it is therefore important to have a simple method that can measure trace amounts of metals. This work shows that ion chromatography (IC) yields similar results to ICP mass spectrometry for copper, nickel and iron contaminants in ⁶⁴Cu production solutions. - Highlights: • Comparison of TETA titration, ICP mass spectrometry, and ion chromatography to measure specific activity. • Validates ion chromatography by using ICP mass spectrometry as the “gold standard”. • Shows different types and amounts of metal impurities present in ⁶⁴Cu

  18. Study of a 900 MW PWR by a substructuring method - Spectral response to a seismic excitation and comparison with a beam model

    International Nuclear Information System (INIS)

    Rousseau, G.; Bianchini-Burlot, B.; Bosselut, D.; Jacquart, G.; Viallet, E.

    1997-03-01

    This report first describes a three-dimensional finite element model (FEM) of a 900 MW pressurized water reactor (PWR): its modal behaviour is computed by a substructuring method based upon a Component Mode Synthesis (CMS) method. All the substructures taken into account in the model are described, and a model with equivalent beams is also described. Different approaches to taking the fluid/structure interaction into account in the different models are then investigated. Results of the modal analysis of each model are compared with each other and with experimental measurements. This modal analysis is then used to compute the non-linear and linear responses of the PWR to a seismic excitation. (author)

  19. A comparison of radiological risk assessment methods for environmental restoration

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Peterson, J.M.

    1993-01-01

    Evaluation of risks to human health from exposure to ionizing radiation at radioactively contaminated sites is an integral part of the decision-making process for determining the need for remediation and selecting remedial actions that may be required. At sites regulated under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), a target risk range of 10⁻⁴ to 10⁻⁶ incremental cancer incidence over a lifetime is specified by the US Environmental Protection Agency (EPA) as generally acceptable, based on the reasonable maximum exposure to any individual under current and future land use scenarios. Two primary methods currently being used in conducting radiological risk assessments at CERCLA sites are compared in this analysis. Under the first method, the radiation dose equivalent (i.e., Sv or rem) to the receptors of interest over the appropriate period of exposure is estimated and multiplied by a risk factor (cancer risk/Sv). Alternatively, incremental cancer risk can be estimated by combining the EPA's cancer slope factors (previously termed potency factors) for radionuclides with estimates of radionuclide intake by ingestion and inhalation, as well as radionuclide concentrations in soil that contribute to external dose. The comparison of the two methods has demonstrated that resulting estimates of lifetime incremental cancer risk under these different methods may differ significantly, even when all other exposure assumptions are held constant, with the magnitude of the discrepancy depending upon the dominant radionuclides and exposure pathways for the site. The basis for these discrepancies, the advantages and disadvantages of each method, and the significance of the discrepant results for environmental restoration decisions are presented
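    In outline, the two methods reduce to two multiplications. The sketch below uses hypothetical placeholder values for the risk factor and slope factors, not the regulatory values discussed in the record.

    ```python
    # Outline of the two CERCLA risk calculations being compared. Parameter
    # values here are hypothetical placeholders for illustration only.

    def risk_dose_method(dose_sv, risk_factor_per_sv=0.05):
        """Method 1: lifetime dose equivalent (Sv) times a risk factor (risk/Sv)."""
        return dose_sv * risk_factor_per_sv

    def risk_slope_factor_method(intakes_bq, slope_factors_per_bq):
        """Method 2: sum over radionuclides of intake (Bq) times slope factor."""
        return sum(i * s for i, s in zip(intakes_bq, slope_factors_per_bq))
    ```

    Because the two methods embed different dosimetric assumptions, the same scenario can land in different places within the 10⁻⁴ to 10⁻⁶ target range, which is the discrepancy the record analyses.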

  20. A comparison of methods in estimating soil water erosion

    Directory of Open Access Journals (Sweden)

    Marisela Pando Moreno

    2012-02-01

    Full Text Available A comparison between direct field measurements and predictions of soil water erosion using two variants (FAO and R/2 indexes) of the Revised Universal Soil Loss Equation (RUSLE) was carried out in a microcatchment of 22.32 km² in Northeastern Mexico. Direct field measurements were based on a geomorphologic classification of the area, while environmental units were defined for applying the equation. Environmental units were later grouped within geomorphologic units to compare results. For the basin as a whole, erosion rates from the FAO index were statistically equal to those measured in the field, while values obtained from the R/2 index were statistically different from the rest and overestimated erosion. However, when comparing among geomorphologic units, erosion appeared overestimated in steep units and underestimated in flatter areas. The most remarkable differences in erosion rates between the direct and FAO methods were for those units where gullies have developed; in these cases, erosion was underestimated by the FAO index. Hence, it is suggested that a weighting factor for the presence of gullies should be developed and included in the RUSLE equation.
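    For orientation, the RUSLE family of indexes discussed above all estimate average annual soil loss as a product of factors, A = R·K·LS·C·P. A minimal sketch with hypothetical factor values:

    ```python
    # Orientation sketch of the RUSLE product form; factor values below are
    # hypothetical, and real applications use calibrated, unit-consistent factors.

    def rusle_soil_loss(R, K, LS, C, P):
        """A = R * K * LS * C * P  (rainfall erosivity, soil erodibility,
        slope length/steepness, cover management, support practice)."""
        return R * K * LS * C * P
    ```

    The record's suggestion amounts to adding one more multiplicative weighting factor to this product where gullies are present.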

  1. Comparison of three boosting methods in parent-offspring trios for genotype imputation using simulation study

    Directory of Open Access Journals (Sweden)

    Abbas Mikhchi

    2016-01-01

    Full Text Available Abstract Background Genotype imputation is an important process for predicting unknown genotypes, in which a reference population with dense genotypes is used to predict missing genotypes, for both human and animal genetic variation, at a low cost. Machine learning methods, especially boosting methods, have been used in genetic studies to explore the underlying genetic profile of disease and to build models capable of predicting missing values of a marker. Methods In this study, the strategies and factors affecting the imputation accuracy of parent-offspring trios were compared, from lower-density SNP panels (5 K) to a high-density (10 K) SNP panel, using three different boosting methods, namely TotalBoost (TB), LogitBoost (LB) and AdaBoost (AB). The methods were applied to simulated data to impute the un-typed SNPs in parent-offspring trios. Four datasets were simulated: G1 (100 trios with 5 K SNPs), G2 (100 trios with 10 K SNPs), G3 (500 trios with 5 K SNPs), and G4 (500 trios with 10 K SNPs). In all four datasets, all parents were genotyped completely, and offspring were genotyped with a lower-density panel. Results Comparison of the three imputation methods showed that LB outperformed AB and TB in imputation accuracy. Computation times differed between the methods; AB was the fastest algorithm. Higher SNP densities increased the accuracy of imputation, and larger numbers of trios (i.e. 500) improved the performance of LB and TB. Conclusions All three methods perform well in terms of imputation accuracy, and the denser chip is recommended for imputation of parent-offspring trios.
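    The boosting idea underlying TB, LB and AB can be illustrated with a from-scratch AdaBoost using one-feature threshold stumps. This is a generic sketch, not the implementations compared in the study, and the toy data are hypothetical.

    ```python
    # Generic from-scratch AdaBoost with one-feature threshold stumps, to
    # illustrate the boosting idea. NOT the study's implementation.
    import math

    def adaboost_train(X, y, rounds=10):
        """Train on rows X (lists of numbers) and labels y in {-1, +1}."""
        n = len(y)
        w = [1.0 / n] * n                      # uniform initial sample weights
        ensemble = []
        for _ in range(rounds):
            best = None                        # (error, feature, threshold, sign, preds)
            for f in range(len(X[0])):
                for thr in sorted({row[f] for row in X}):
                    for sign in (1, -1):
                        pred = [sign if row[f] >= thr else -sign for row in X]
                        err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                        if best is None or err < best[0]:
                            best = (err, f, thr, sign, pred)
            err, f, thr, sign, pred = best
            err = max(err, 1e-10)              # avoid log(0) on a perfect stump
            alpha = 0.5 * math.log((1.0 - err) / err)
            ensemble.append((alpha, f, thr, sign))
            # Re-weight: boost the weight of misclassified samples.
            w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
            total = sum(w)
            w = [wi / total for wi in w]
        return ensemble

    def adaboost_predict(ensemble, row):
        """Weighted vote of the stumps."""
        score = sum(alpha * (sign if row[f] >= thr else -sign)
                    for alpha, f, thr, sign in ensemble)
        return 1 if score >= 0 else -1
    ```

    In an imputation setting, each un-typed SNP becomes a classification target predicted from the surrounding typed markers, which is how boosting classifiers of this shape are applied to trio data.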

  2. How to find non-dependent opiate users: a comparison of sampling methods in a field study of opium and heroin users.

    Science.gov (United States)

    Korf, Dirk J; van Ginkel, Patrick; Benschop, Annemieke

    2010-05-01

    The first aim is to better understand the potentials and limitations of different sampling methods for reaching a specific, rarely studied population of drug users and for persuading them to take part in a multidisciplinary study. The second is to determine the extent to which these different methods reach similar or dissimilar segments of the non-dependent opiate-using population. Using ethnographic fieldwork (EFW) and targeted canvassing (TARC; small newspaper advertisements and website announcements), supplemented by snowball referrals, we recruited and interviewed 127 non-dependent opiate users (lifetime prevalence of use 5-100 times; 86.6% had used heroin and 56.7% opium). Average age was 39.0; 66.1% were male and 33.9% female. In addition to opiates, many respondents had wide experience with other illicit drugs. The majority had non-conventional lifestyles. Both EFW and TARC yielded only limited numbers of snowball referrals. EFW requires specific skills, is labour-intensive, thus expensive, but allows unsuitable candidates to be excluded faster. Respondents recruited through EFW were significantly more likely to have experience with opium and various drugs other than opiates. TARC resulted in larger percentages of women and respondents with conventional lifestyles. TARC is less labour-intensive but requires more time for screening candidates; its cost-effectiveness depends on the price of advertising for the recruitment. Different methods reach different segments of the population of non-dependent opiate users. It is useful to employ a multi-method approach to reduce selectivity. Copyright 2009 Elsevier B.V. All rights reserved.

  3. A Novel Mobile Phone Application for Pulse Pressure Variation Monitoring Based on Feature Extraction Technology: A Method Comparison Study in a Simulated Environment.

    Science.gov (United States)

    Desebbe, Olivier; Joosten, Alexandre; Suehiro, Koichi; Lahham, Sari; Essiet, Mfonobong; Rinehart, Joseph; Cannesson, Maxime

    2016-07-01

    Pulse pressure variation (PPV) can be used to assess fluid status in the operating room. This measurement, however, is time consuming when done manually and unreliable through visual assessment. Moreover, its continuous monitoring requires the use of expensive devices. Capstesia™ is a novel Android™/iOS™ application, which calculates PPV from a digital picture of the arterial pressure waveform obtained from any monitor. The application identifies the peaks and troughs of the arterial curve, determines maximum and minimum pulse pressures, and computes PPV. In this study, we compared the accuracy of PPV generated with the smartphone application Capstesia (PPVapp) against the reference method that is the manual determination of PPV (PPVman). The Capstesia application was loaded onto a Samsung Galaxy S4 phone. A physiologic simulator including PPV was used to display arterial waveforms on a computer screen. Data were obtained with different sweep speeds (6 and 12 mm/s) and randomly generated PPV values (from 2% to 24%), pulse pressure (30, 45, and 60 mm Hg), heart rates (60-80 bpm), and respiratory rates (10-15 breaths/min) on the simulator. Each metric was recorded 5 times at an arterial height scale X1 (PPV5appX1) and 5 times at an arterial height scale X3 (PPV5appX3). Reproducibility of PPVapp and PPVman was determined from the 5 pictures of the same hemodynamic profile. The effect of sweep speed, arterial waveform scale (X1 or X3), and number of images captured was assessed by a Bland-Altman analysis. The measurement error (ME) was calculated for each pair of data. A receiver operating characteristic curve analysis determined the ability of PPVapp to discriminate a PPVman > 13%. Four hundred eight pairs of PPVapp and PPVman were analyzed. The reproducibility of PPVapp and PPVman was 10% (interquartile range, 7%-14%) and 6% (interquartile range, 3%-10%), respectively, allowing a threshold ME of 12%. The overall mean bias for PPVappX1 was 1.1% within limits of
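    The PPV computation that Capstesia automates is itself simple once peaks and troughs have been identified. A minimal sketch, assuming beat-wise systolic and diastolic values have already been extracted from the arterial waveform:

    ```python
    # Minimal sketch of the PPV formula; beat-wise systolic/diastolic values
    # are assumed to have been extracted already from the waveform.

    def pulse_pressure_variation(systolic, diastolic):
        """PPV (%) = 100 * (PPmax - PPmin) / mean(PPmax, PPmin) over one cycle."""
        pulse_pressures = [s - d for s, d in zip(systolic, diastolic)]
        pp_max, pp_min = max(pulse_pressures), min(pulse_pressures)
        return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)
    ```

    For instance, beats of 120/80 and 110/80 mmHg give pulse pressures of 40 and 30 mmHg, hence a PPV of about 28.6%.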

  4. A photon dominated region code comparison study

    NARCIS (Netherlands)

    Roellig, M.; Abel, N. P.; Bell, T.; Bensch, F.; Black, J.; Ferland, G. J.; Jonkheid, B.; Kamp, I.; Kaufman, M. J.; Le Bourlot, J.; Le Petit, F.; Meijerink, R.; Morata, O.; Ossenkopf, Volker; Roueff, E.; Shaw, G.; Spaans, M.; Sternberg, A.; Stutzki, J.; Thi, W.-F.; van Dishoeck, E. F.; van Hoof, P. A. M.; Viti, S.; Wolfire, M. G.

    Aims. We present a comparison between independent computer codes, modeling the physics and chemistry of interstellar photon dominated regions (PDRs). Our goal was to understand the mutual differences in the PDR codes and their effects on the physical and chemical structure of the model clouds, and

  5. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
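    An interval-based contingency estimate of the kind compared above can be sketched as the difference between the probability of reinforcement given a response interval and given a no-response interval. This is a generic sketch, not the authors' exact formulation.

    ```python
    # Generic interval-based contingency estimate: P(SR | response interval)
    # minus P(SR | no-response interval). Example streams are hypothetical.

    def contingency_strength(responses, reinforcers):
        """Both arguments are per-interval 0/1 indicators of equal length."""
        with_resp = [sr for r, sr in zip(responses, reinforcers) if r]
        without_resp = [sr for r, sr in zip(responses, reinforcers) if not r]
        p_given_resp = sum(with_resp) / len(with_resp) if with_resp else 0.0
        p_given_none = sum(without_resp) / len(without_resp) if without_resp else 0.0
        return p_given_resp - p_given_none
    ```

    A value near 1 indicates a strong response-dependent contingency, while a value near 0 is what response-independent reinforcement should produce.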

  6. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    Science.gov (United States)

    Morimoto, Emi; Namerikawa, Susumu

    The most characteristic trend in bidding and pricing behavior in recent years is the increasing number of bidders just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bids it is therefore the difference between the low-price bidding investigation criteria price and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets. Estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004; on another front, the accumulated estimation method is one of the general methods in public works, so there are two types of standard estimation methods in Japan. In this study, we carried out a statistical analysis of the bid information for civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. It presents several issues showing that bidding and pricing behavior is related to the estimation method used for public works bids in Japan. The two types of standard estimation methods produce different results in the number of bidders (the bid/no-bid decision) and the distribution of bid prices (the markup decision). The comparison of the distributions of bid prices showed that the percentage of bids concentrated at the criteria for low-price bidding investigations tends to be higher in large-sized public works under the unit-price-type estimation method than under the accumulated estimation method. On the other hand, the number of bidders bidding for public works estimated by unit price tends to increase significantly; unit-price estimation is likely to have been one of the factors in construction companies' decisions to participate in the biddings.

  7. A comparison of multivariate genome-wide association methods

    DEFF Research Database (Denmark)

    Galesloot, Tessel E; Van Steen, Kristel; Kiemeney, Lambertus A L M

    2014-01-01

    Joint association analysis of multiple traits in a genome-wide association study (GWAS), i.e. a multivariate GWAS, offers several advantages over analyzing each trait in a separate GWAS. In this study we directly compared a number of multivariate GWAS methods using simulated data. We focused on six...... methods that are implemented in the software packages PLINK, SNPTEST, MultiPhen, BIMBAM, PCHAT and TATES, and also compared them to standard univariate GWAS, analysis of the first principal component of the traits, and meta-analysis of univariate results. We simulated data (N = 1000) for three...... for scenarios with an opposite sign of genetic and residual correlation. All multivariate analyses resulted in a higher power than univariate analyses, even when only one of the traits was associated with the QTL. Hence, use of multivariate GWAS methods can be recommended, even when genetic correlations between...

  8. Comparison of methods to identify crop productivity constraints in developing countries. A review

    NARCIS (Netherlands)

    Kraaijvanger, R.G.M.; Sonneveld, M.P.W.; Almekinders, C.J.M.; Veldkamp, T.

    2015-01-01

    Selecting a method for identifying actual crop productivity constraints is an important step for triggering innovation processes. Applied methods can be diverse and although such methods have consequences for the design of intervention strategies, documented comparisons between various methods are

  9. How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.

    Science.gov (United States)

    Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo

    2016-01-01

    To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
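    The conversion of ranking data into paired-comparison data used in this study follows directly from the order: a ranking of k alternatives implies k(k-1)/2 pairwise outcomes. A minimal sketch:

    ```python
    # Minimal sketch of converting an ordered ranking (best first) into the
    # paired-comparison records it implies.
    from itertools import combinations

    def ranking_to_pairs(ranking):
        """Return (winner, loser) pairs implied by an ordered ranking."""
        return list(combinations(ranking, 2))
    ```

    For example, the ranking A > B > C yields the three pairs (A, B), (A, C) and (B, C), which can then be pooled with directly elicited paired-comparison responses.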

  10. A contribution to the study of pyrogenic substances in radiopharmaceutical preparations. Comparison between methods using Rabbit and those using Limulus amebocytes lysate

    International Nuclear Information System (INIS)

    Bruneau, Jacky.

    1982-10-01

    We have studied two methods for the detection of pyrogenic substances, using the hyperthermic action of these substances after injection into the rabbit, and the gelation reaction of a Limulus amebocyte lysate. To apply these two detection methods to radiopharmaceutical preparations, we conceived and designed equipment allowing their handling in compliance with radioactive safety norms. We compared the sensitivity, reliability and reproducibility of these methods, one based on the gelation of Limulus amebocyte lysate in the presence of endotoxins, the other on the hyperthermic action of these same endotoxins in the rabbit when injected intravenously or through the suboccipital route. The discussion of the results obtained shows that the method using the Limulus amebocyte lysate is more sensitive, less expensive and less dangerous. This method, particularly well adapted to the control of radiopharmaceutical preparations, brings additional security to the patients for whom these products are destined [fr

  11. Comparison of meaningful learning characteristics in simulated nursing practice after traditional versus computer-based simulation method: a qualitative videography study.

    Science.gov (United States)

    Poikela, Paula; Ruokamo, Heli; Teräs, Marianne

    2015-02-01

    Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods influenced students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practice in two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used: the data were collected using video recordings and analyzed by videography. The students who used a computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students are able to receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. A Comparison of Various Forecasting Methods for Autocorrelated Time Series

    Directory of Open Access Journals (Sweden)

    Karin Kandananond

    2012-07-01

    Full Text Available The accuracy of forecasts significantly affects the overall performance of a whole supply chain system. Sometimes, the nature of consumer products can make future demand difficult to forecast because of its complicated structure. In this study, two machine learning methods, artificial neural network (ANN) and support vector machine (SVM), and a traditional approach, the autoregressive integrated moving average (ARIMA) model, were utilized to predict the demand for consumer products. The training data used were the actual demand for six different products from a consumer product company in Thailand. Initially, each set of data was analysed using Ljung-Box-Q statistics to test for autocorrelation. Afterwards, each method was applied to the different sets of data. The results indicated that the SVM method had a better forecast quality (in terms of MAPE) than ANN and ARIMA in every category of products.
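    The forecast-quality criterion used in the comparison, MAPE, is itself a one-liner. A minimal sketch; the demand figures in the usage example are hypothetical:

    ```python
    # Mean absolute percentage error over paired actual/forecast values, the
    # forecast-quality criterion used in the comparison above.

    def mape(actual, forecast):
        """MAPE in percent; assumes no zero actual demand values."""
        return 100.0 / len(actual) * sum(abs((a - f) / a)
                                         for a, f in zip(actual, forecast))
    ```

    For instance, actual demands of 100 and 200 units forecast as 110 and 180 give a MAPE of 10%.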

  13. A comparison theorem for the SOR iterative method

    Science.gov (United States)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than those of the SOR and Gauss-Seidel methods if the relaxation parameter ω ∈ (0,1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
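    For readers unfamiliar with SOR, the method under discussion can be sketched as a generic iteration for a small dense system; the 2×2 test system is hypothetical.

    ```python
    # Generic successive over-relaxation (SOR) for A x = b on a small dense
    # system; with omega = 1 the update reduces to Gauss-Seidel.

    def sor_solve(A, b, omega, iterations=100):
        """A is a list of rows; omega in (0, 2) for convergence on SPD systems."""
        n = len(b)
        x = [0.0] * n
        for _ in range(iterations):
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
        return x
    ```

    The comparison theorem above concerns the spectral radius of this iteration's error-propagation matrix for ω ∈ (0, 1].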

  14. A Comparison of Card-sorting Analysis Methods

    DEFF Research Database (Denmark)

    Nawaz, Ather

    2012-01-01

    This study investigates how the choice of analysis method for card sorting studies affects the suggested information structure for websites. In the card sorting technique, a variety of methods are used to analyse the resulting data. The analysis of card sorting data helps user experience (UX......) designers to discover the patterns in how users make classifications and thus to develop an optimal, user-centred website structure. During analysis, the recurrence of patterns of classification between users influences the resulting website structure. However, the algorithm used in the analysis influences the recurrent patterns found and thus has consequences for the resulting website design. This paper draws attention to the choice of card sorting analysis techniques and shows how it impacts the results. The research focuses on how the same card sorting data can lead to different website structures
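    One common first step in card sorting analysis, on which clustering algorithms then diverge, is the item co-occurrence matrix. A minimal sketch; the example sorts are hypothetical.

    ```python
    # A common first step in card sorting analysis: count, for each pair of
    # items, how often participants placed both in the same group.

    def cooccurrence_matrix(sorts, items):
        """`sorts` is a list of card sorts; each sort is a list of item groups.

        The diagonal entry (a, a) simply counts the sorts containing item a.
        """
        counts = {(a, b): 0 for a in items for b in items}
        for groups in sorts:
            for group in groups:
                for a in group:
                    for b in group:
                        counts[(a, b)] += 1
        return counts
    ```

    Different analysis methods then cluster this same matrix differently, which is precisely how identical card sorting data can yield different website structures.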

  15. Comparison of longitudinal excursion of a nerve-phantom model using quantitative ultrasound imaging and motion analysis system methods: A convergent validity study.

    Science.gov (United States)

    Paquette, Philippe; El Khamlichi, Youssef; Lamontagne, Martin; Higgins, Johanne; Gagnon, Dany H

    2017-08-01

    Quantitative ultrasound imaging is gaining popularity in research and clinical settings to measure the neuromechanical properties of the peripheral nerves such as their capability to glide in response to body segment movement. Increasing evidence suggests that impaired median nerve longitudinal excursion is associated with carpal tunnel syndrome. To date, psychometric properties of longitudinal nerve excursion measurements using quantitative ultrasound imaging have not been extensively investigated. This study investigates the convergent validity of the longitudinal nerve excursion by comparing measures obtained using quantitative ultrasound imaging with those determined with a motion analysis system. A 38-cm long rigid nerve-phantom model was used to assess the longitudinal excursion in a laboratory environment. The nerve-phantom model, immersed in a 20-cm deep container filled with a gelatin-based solution, was moved 20 times using a linear forward and backward motion. Three light-emitting diodes were used to record nerve-phantom excursion with a motion analysis system, while a 5-cm linear transducer allowed simultaneous recording via ultrasound imaging. Both measurement techniques yielded excellent association (r = 0.99) and agreement (mean absolute difference between methods = 0.85 mm; mean relative difference between methods = 7.48%). Small discrepancies were largely found when larger excursions (i.e. > 10 mm) were performed, revealing slight underestimation of the excursion by the ultrasound imaging analysis software. Quantitative ultrasound imaging is an accurate method to assess the longitudinal excursion of an in vitro nerve-phantom model and appears relevant for future research protocols investigating the neuromechanical properties of the peripheral nerves.
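    The agreement statistics reported above (mean bias with limits of agreement, in the style of a Bland-Altman analysis) can be sketched as follows; the sample measurements are hypothetical.

    ```python
    # Sketch of Bland-Altman-style agreement statistics between two measurement
    # methods: mean bias and 95% limits of agreement (bias +/- 1.96 SD).

    def bland_altman(method_a, method_b):
        """Return (bias, lower limit, upper limit) for paired measurements."""
        diffs = [a - b for a, b in zip(method_a, method_b)]
        n = len(diffs)
        bias = sum(diffs) / n
        sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
        return bias, bias - 1.96 * sd, bias + 1.96 * sd
    ```

    A small bias with narrow limits, as reported in this record, supports convergent validity between the ultrasound and motion analysis measurements.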

  16. Calibrations of pocket dosemeters using a comparison method

    International Nuclear Information System (INIS)

    Somarriba V, I.

    1996-01-01

This monograph is dedicated mainly to the calibration of pocket dosemeters. Various types of radiation sources used in hospitals and different radiation detectors, with emphasis on ionization chambers, are briefly presented. Calibration methods based on the use of a reference dosemeter were developed to calibrate all pocket dosemeters existing at the Radiation Physics and Metrology Laboratory. Some of these dosemeters were used in personnel dosimetry at hospitals. Moreover, a study was conducted of factors that affect long-term measurements with pocket dosemeters, such as discharges due to cosmic radiation. A DBASE IV program was developed to store the information included in the hospital's registry.

  17. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data

    DEFF Research Database (Denmark)

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-01-01

...... There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold...... LCA and SNOB LCA). METHODS: The performance of these three methods was compared: (i) quantitatively using the number of subgroups detected, the classification probability of individuals into subgroups, the reproducibility of results, and (ii) qualitatively using subjective judgments about each program...... classify individuals into those subgroups. CONCLUSIONS: Our subjective judgement was that Latent Gold offered the best balance of sensitivity to subgroups, ease of use and presentation of results with these datasets but we recognise that different clustering methods may suit other types of data...

  18. Comparison of the functionality of pelvic floor muscles in women who practice the Pilates method and sedentary women: a pilot study.

    Science.gov (United States)

    Ferla, Lia; Paiva, Luciana Laureano; Darki, Caroline; Vieira, Adriane

    2016-01-01

    The Pilates method is a form of physical exercise that improves the control of the core muscles, improving the conditioning of all the muscle groups that comprise the core, including the pelvic floor muscles (PFM). Thus, this study had the goal of verifying the existence of differences in the functioning of the PFM in women who practice the Pilates method and sedentary women. This was an observational, cross-sectional pilot study. A sample size calculation was performed using preliminary data and it determined that the sample should have at least 24 individuals in each group. The participants were 60 women aged 20 to 40 years; 30 women practiced the Pilates method (PMG) and 30 were sedentary (SG). An anamnesis file was used to collect personal data and assess the knowledge and perception of the PFM. The Perina perineometer and vaginal palpation were used to determine the functionality of the PFM. There was no significant difference between the PMG and the SG in any of the variables analyzed. We concluded that the functionality of the PFM in younger women who practice the Pilates method is not different from that of sedentary women.

  19. Comparison of Address-based Sampling and Random-digit Dialing Methods for Recruiting Young Men as Controls in a Case-Control Study of Testicular Cancer Susceptibility

    OpenAIRE

    Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.

    2013-01-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-...

  20. A new ART iterative method and a comparison of performance among various ART methods

    International Nuclear Information System (INIS)

    Tan, Yufeng; Sato, Shunsuke

    1993-01-01

Many algebraic reconstruction technique (ART) image reconstruction algorithms, for instance the simultaneous iterative reconstruction technique (SIRT), the relaxation method and multiplicative ART (MART), have been proposed and their convergence properties have been studied. SIRT and the underrelaxed relaxation method converge to the least-squares solution, but their convergence is very slow. The Kaczmarz method converges very quickly, but the reconstructed images contain a lot of noise. Comparative studies of these algorithms have been carried out by Gilbert and others, but they are not adequate. In this paper, we (1) propose a new method, a modified Kaczmarz method, and prove its convergence property, and (2) study the performance of seven algorithms, including the one proposed here, by computer simulation for three kinds of typical phantoms. The method proposed here does not give the least-squares solution, but the root mean square errors of its reconstructed images decrease very quickly after a few iterations. The results show that the method proposed here gives a better reconstructed image. (author)
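The Kaczmarz row-action step that underlies ART (and the modification proposed in the paper) can be sketched generically: project the current estimate onto each row's hyperplane a_i·x = b_i in turn. This is a toy illustration of the classical iteration, not the authors' modified algorithm.

```python
def kaczmarz(A, b, sweeps=200, relax=1.0):
    """Solve A x = b by cyclic row projections (relax = 1 is classical Kaczmarz)."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            dot = sum(a * xi for a, xi in zip(ai, x))
            norm2 = sum(a * a for a in ai)
            # move x along ai so that it lands on the hyperplane ai . x = bi
            c = relax * (bi - dot) / norm2
            x = [xi + c * a for xi, a in zip(x, ai)]
    return x

# consistent toy system with exact solution x = (1, 2)
A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = kaczmarz(A, b)
```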

  1. Comparison of mucosal lining fluid sampling methods and influenza-specific IgA detection assays for use in human studies of influenza immunity.

    Science.gov (United States)

    de Silva, Thushan I; Gould, Victoria; Mohammed, Nuredin I; Cope, Alethea; Meijer, Adam; Zutt, Ilse; Reimerink, Johan; Kampmann, Beate; Hoschler, Katja; Zambon, Maria; Tregoning, John S

    2017-10-01

We need greater understanding of the mechanisms underlying protection against influenza virus to develop more effective vaccines. To do this, we need better, more reproducible methods of sampling the nasal mucosa. The aim of the current study was to compare levels of influenza virus A subtype-specific IgA collected using three different methods of nasal sampling. Samples were collected from healthy adult volunteers before and after LAIV immunization by nasal wash, flocked swabs and Synthetic Absorptive Matrix (SAM) strips. Influenza A virus subtype-specific IgA levels were measured by haemagglutinin binding ELISA or haemagglutinin binding microarray and the functional response was assessed by microneutralization. Nasosorption using SAM strips led to the recovery of a more concentrated sample of material, with a significantly higher level of total and influenza H1-specific IgA. However, an equivalent percentage of specific IgA was observed with all sampling methods when normalized to the total IgA. Responses measured using a recently developed antibody microarray platform, which allows evaluation of binding to multiple influenza strains simultaneously with small sample volumes, were compared to ELISA. There was a good correlation between ELISA and microarray values. Material recovered from SAM strips was weakly neutralizing when used in an in vitro assay, with a modest correlation between the level of IgA measured by ELISA and neutralization, but a greater correlation between microarray-measured IgA and neutralizing activity. In conclusion, we tested three different methods of nasal sampling and show that flocked swabs and novel SAM strips are appropriate alternatives to traditional nasal washes for assessment of mucosal influenza humoral immunity. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Comparison study on qualitative and quantitative risk assessment methods for urban natural gas pipeline network.

    Science.gov (United States)

    Han, Z Y; Weng, W G

    2011-05-15

In this paper, a qualitative and a quantitative risk assessment method for urban natural gas pipeline networks are proposed. The qualitative method is comprised of an index system, which includes a causation index, an inherent risk index, a consequence index and their corresponding weights. The quantitative method consists of a probability assessment, a consequence analysis and a risk evaluation. The outcome of the qualitative method is a qualitative risk value; for the quantitative method the outcomes are individual risk and social risk. In comparison with previous research, the qualitative method proposed in this paper is particularly suitable for urban natural gas pipeline networks, and the quantitative method takes different accident consequences into consideration, such as toxic gas diffusion, jet flame, fireball combustion and UVCE. Two sample urban natural gas pipeline networks are used to demonstrate the two methods. Both methods can be applied in practice; the choice between them depends on the basic data actually available for the gas pipelines and the precision requirements of the risk assessment. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
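The qualitative scheme described above combines index scores through weights. A minimal sketch of such a weighted index; the index names and weight values are illustrative assumptions, not the paper's:

```python
def qualitative_risk(indices, weights):
    """Weighted combination of risk indices; weights are assumed to sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(indices[k] * weights[k] for k in weights)

# illustrative scores (0..1) for one pipeline segment
segment = {"causation": 0.6, "inherent": 0.4, "consequence": 0.8}
weights = {"causation": 0.3, "inherent": 0.3, "consequence": 0.4}
risk = qualitative_risk(segment, weights)
```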

  3. A comparison of methods of assessment of scintigraphic colon transit.

    Science.gov (United States)

    Freedman, Patricia Noel; Goldberg, Paul A; Fataar, Abdul Basier; Mann, Michael M

    2006-06-01

    There is no standard method of analysis of scintigraphic colonic transit investigation. This study was designed to compare 4 techniques. Sixteen subjects (median age, 37.5 y; range, 21-61 y), who had sustained a spinal cord injury more than a year before the study, were given a pancake labeled with 10-18 MBq of (111)In bound to resin beads to eat. Anterior and posterior images were acquired with a gamma-camera 3 h after the meal and then 3 times a day for the next 4 d. Seven regions of interest, outlining the ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, rectosigmoid, and total abdominal activity at each time point, were drawn on the anterior and posterior images. The counts were decay corrected and the geometric mean (GM), for each region, at each time point calculated. The GM was used to calculate the percentage of the initial total abdominal activity in each region, at each time point. Colonic transit was assessed in 4 ways: (a) Three independent nuclear medicine physicians visually assessed transit on the analog images and classified subjects into 5 categories of colonic transit (rapid, intermediate, generalized delay, right-sided delay, or left-sided delay). (b) Parametric images were constructed from the percentage activity in each region at each time point. (c) The arrival and clearance times of the activity in the right and left colon were plotted as time-activity curves. (d) The geometric center of the distribution of the activity was calculated and plotted on a graph versus time. The results of these 4 methods were compared using an agreement matrix. Though simple to perform, the visual assessment was unreliable. The best agreement occurred between the parametric images and the arrival and clearance times of the activity in the right and left colon. The different methods of assessment do not produce uniform results. The best option for evaluating colonic transit appears to be a combination of the analog images
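Two of the computations described above lend themselves to a short sketch: the geometric mean of decay-corrected anterior/posterior counts, and the geometric centre of the regional activity distribution. The regional percentages below are illustrative, not the study's data.

```python
import math

def geometric_mean(anterior, posterior):
    """Geometric mean of conjugate-view counts for depth-independent activity."""
    return math.sqrt(anterior * posterior)

def geometric_center(percent_activity):
    """Activity-weighted mean region index; regions numbered 1..n, proximal to distal."""
    total = sum(percent_activity)
    return sum(i * a for i, a in enumerate(percent_activity, start=1)) / total

# percentage of total abdominal activity in 6 colonic regions at one time point
regions = [10.0, 15.0, 30.0, 25.0, 15.0, 5.0]
gc = geometric_center(regions)
```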

  4. Comparison of methods for identifying small-for-gestational-age infants at risk of perinatal mortality among obese mothers: a hospital-based cohort study.

    Science.gov (United States)

    Hinkle, S N; Sjaarda, L A; Albert, P S; Mendola, P; Grantz, K L

    2016-11-01

To assess differences in small-for-gestational-age (SGA) classifications for the detection of neonates with increased perinatal mortality risk among obese women and subsequently assess the association between prepregnancy body mass index (BMI) status and SGA. Hospital-based cohort. Twelve US clinical centres (2002-08). A total of 114 626 singleton, nonanomalous pregnancies. Data were collected using electronic medical record abstraction. Relative risks (RR) with 95% CI were estimated. SGA trends (birthweight < 10th centile) classified using population-based (SGA-POP), intrauterine (SGA-IU) and customised (SGA-CUST) references were assessed. The SGA-associated perinatal mortality risk was estimated among obese women. Using the SGA method most associated with perinatal mortality, the association between prepregnancy BMI and SGA was estimated. The overall perinatal mortality prevalence was 0.55% and this increased significantly with increasing BMI (P < 0.01). Among obese women, SGA-IU detected the highest proportion of perinatal mortality cases (2.49%). Perinatal mortality was 5.32 times (95% CI 3.72-7.60) more likely among SGA-IU neonates than among non-SGA-IU neonates, compared with the 3.71-fold (2.49-5.53) and 4.81-fold (3.41-6.80) increased risks observed when SGA-POP and SGA-CUST were used, respectively. Compared with women of normal weight, overweight women (RR = 0.82, 95% CI 0.78-0.86) and obese women (RR = 0.80; 95% CI 0.75-0.83) had a lower risk of delivering an SGA-IU neonate. Among obese women, the intrauterine reference best identified neonates at risk of perinatal mortality. Based on SGA-IU, SGA is less common among obese women, but these SGA babies are at high risk of death and remain an important group for surveillance. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
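The relative risks quoted above follow the standard 2x2-table estimator with a log-normal confidence interval. A sketch with illustrative counts (not the study's data):

```python
import math

def relative_risk(a, b, c, d):
    """RR with 95% CI from a 2x2 table.
    a/b: events/non-events among exposed; c/d: among unexposed."""
    rr = (a / (a + b)) / (c / (c + d))
    # standard error of log(RR), log-normal approximation
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# hypothetical counts: 30/1000 deaths among SGA vs. 10/1000 among non-SGA
rr, lo, hi = relative_risk(30, 970, 10, 990)
```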

  5. Comparison of three instrumental methods for the objective evaluation of radiotherapy induced erythema in breast cancer patients and a study of the effect of skin lotions

    Energy Technology Data Exchange (ETDEWEB)

Nystroem, Josefina; Lindholm-Sethson, Britta [Dept. of Chemistry, Umeaa Univ., Umeaa (Sweden); Centre for Biomedical Engineering and Physics, Umeaa Univ., Umeaa (Sweden)]; Geladi, Paul [Unit of Biomass Technology and Chemistry, SLU Roebaecksdalen, Umeaa (Sweden)]; Svensk, Ann-Christine; Larson, Johan; Franzen, Lars [Dept. of Oncology, Northern Univ. Hospital, Umeaa (Sweden)]

    2007-10-15

A non-blinded, three-armed study of the effect of Aloe vera, Essex and no lotion on erythema was performed. The erythema is an effect of radiotherapy treatment in breast cancer patients. The study required testing of objective methods for measuring the erythema. The chosen experimental methods were Near Infrared Spectroscopy, Laser Doppler Imaging and Digital Colour Photography. The experimental setup was made in such a way that in parallel with testing the effect of the lotions there was also a test of the sensitivity of the instruments. Fifty women were selected consecutively to participate in the study. They were all subjected to treatment with high-energy electrons (9-20 MeV) after mastectomy, 2 Gy/day to a total dose of 50 Gy. Measurements were performed before the start of radiotherapy and thereafter once a week during the course of treatment. Aloe vera and Essex lotion were applied twice every radiation day in selected sites. The increase in skin redness could be monitored with all techniques, with a detection limit of 8 Gy for Digital Colour Photography and Near Infrared Spectroscopy and 18 Gy for Laser Doppler Imaging. In clinical practice our recommendation is to use Digital Colour Photography. No significant median differences were observed between the pairs no lotion-Essex, no lotion-Aloe vera and Essex-Aloe vera for any of the techniques tested.

  6. Comparison of three instrumental methods for the objective evaluation of radiotherapy induced erythema in breast cancer patients and a study of the effect of skin lotions

    International Nuclear Information System (INIS)

    Nystroem, Josefina; Lindholm-Sethson, Britta; Geladi, Paul; Svensk, Ann-Christine; Larson, Johan; Franzen, Lars

    2007-01-01

A non-blinded, three-armed study of the effect of Aloe vera, Essex and no lotion on erythema was performed. The erythema is an effect of radiotherapy treatment in breast cancer patients. The study required testing of objective methods for measuring the erythema. The chosen experimental methods were Near Infrared Spectroscopy, Laser Doppler Imaging and Digital Colour Photography. The experimental setup was made in such a way that in parallel with testing the effect of the lotions there was also a test of the sensitivity of the instruments. Fifty women were selected consecutively to participate in the study. They were all subjected to treatment with high-energy electrons (9-20 MeV) after mastectomy, 2 Gy/day to a total dose of 50 Gy. Measurements were performed before the start of radiotherapy and thereafter once a week during the course of treatment. Aloe vera and Essex lotion were applied twice every radiation day in selected sites. The increase in skin redness could be monitored with all techniques, with a detection limit of 8 Gy for Digital Colour Photography and Near Infrared Spectroscopy and 18 Gy for Laser Doppler Imaging. In clinical practice our recommendation is to use Digital Colour Photography. No significant median differences were observed between the pairs no lotion-Essex, no lotion-Aloe vera and Essex-Aloe vera for any of the techniques tested.

  7. Sperm Na+, K+-ATPase and Ca2+-ATPase activity: A preliminary study of comparison of swim up and density gradient centrifugation methods for sperm preparation

    Science.gov (United States)

    Lestari, Silvia W.; Larasati, Manggiasih D.; Asmarinah, Mansur, Indra G.

    2018-02-01

As one of the treatments for infertility, intrauterine insemination (IUI) still has a relatively low success rate. Two sperm preparation methods, swim-up (SU) and density-gradient centrifugation (DGC), are frequently used to select sperm of better quality; the choice of method may also contribute to IUI failure. Sperm selection methods mainly separate the motile from the immotile sperm, eliminating the seminal plasma. Sperm motility involves the structure and function of the sperm membrane in maintaining the balance of the ion transport system, which is regulated by the Na+, K+-ATPase and Ca2+-ATPase enzymes. This study aims to re-evaluate the efficiency of these methods in sperm selection before use in IUI, basing the evaluation on sperm Na+, K+-ATPase and Ca2+-ATPase activities. Fourteen infertile men from couples who underwent IUI were involved in this study. The SU and DGC methods were used for sperm preparation. Semen analysis was performed according to the World Health Organization (WHO) 2010 reference values. After isolating the membrane fraction of the sperm, Na+, K+-ATPase activity was defined as the difference in released inorganic phosphate (Pi) with and without 10 mM ouabain in the reaction, while Ca2+-ATPase activity was determined as the difference in Pi content with and without 55 µM CaCl2. The prepared sperm demonstrated a higher percentage of motile sperm than sperm from whole semen, and the percentage of motile sperm post-DGC was higher than post-SU. Sperm velocity showed a similar pattern: the velocity of prepared sperm was higher than that of sperm from whole semen, and post-DGC velocity was higher than post-SU. The Na+, K+-ATPase activity of prepared sperm was higher than that of whole semen, and Na+, K+-ATPase activity post-DGC was higher than post-SU. The Ca2+…
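The inhibitor-difference calculation described above reduces to a subtraction: pump activity is the total Pi released minus the Pi released when the pump is blocked (by ouabain for Na+, K+-ATPase). The Pi values below are illustrative, not the study's measurements.

```python
def inhibitor_sensitive_activity(pi_total, pi_with_inhibitor):
    """ATPase activity attributable to the pump: total Pi release minus the
    Pi release remaining when the pump is specifically inhibited."""
    return pi_total - pi_with_inhibitor

# hypothetical Pi amounts (e.g. nmol Pi per mg protein per hour)
na_k_atpase = inhibitor_sensitive_activity(85.0, 30.0)
```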

  8. A Comparison of Multiple Methods for Estimating Parasitemia of Hemogregarine Hemoparasites (Apicomplexa: Adeleorina) and Its Application for Studying Infection in Natural Populations

    Science.gov (United States)

    Maia, João P.; Harris, D. James; Carranza, Salvador; Gómez-Díaz, Elena

    2014-01-01

Identifying factors influencing infection patterns among hosts is critical for our understanding of the evolution and impact of parasitism in natural populations. However, the correct estimation of infection parameters depends on the performance of detection and quantification methods. In this study, we designed a quantitative PCR (qPCR) assay targeting the 18S rRNA gene to estimate prevalence and intensity of Hepatozoon infection and compared its performance with microscopy and PCR. Using qPCR, we also compared various protocols that differ in the biological source and the extraction methods. Our results show that the qPCR approach on DNA extracted from blood samples, regardless of the extraction protocol, provided the most sensitive estimates of Hepatozoon infection parameters, while allowing us to differentiate between mixed infections of Adeleorinid (Hepatozoon) and Eimeriorinid (Schellackia and Lankesterella), based on the analysis of melting curves. We also show that tissue and saline methods can be used as low-cost alternatives in parasitological studies. The next step was to test our qPCR assay in a biological context, and for this purpose we investigated infection patterns between two sympatric lacertid species, which are naturally infected with apicomplexan hemoparasites, such as the genera Schellackia (Eimeriorina) and Hepatozoon (Adeleorina). From a biological standpoint, we found a positive correlation between Hepatozoon intensity of infection and host body size within each host species, being significantly higher in males, and higher in the smaller sized host species. These variations can be associated with a number of host intrinsic factors, like hormonal and immunological traits, that require further investigation. Our findings are relevant as they pinpoint the importance of accounting for methodological issues to better estimate infection in parasitological studies, and illustrate how between-host factors can influence parasite distributions in

9. A comparison of multiple methods for estimating parasitemia of hemogregarine hemoparasites (apicomplexa: adeleorina) and its application for studying infection in natural populations.

    Directory of Open Access Journals (Sweden)

    João P Maia

Full Text Available Identifying factors influencing infection patterns among hosts is critical for our understanding of the evolution and impact of parasitism in natural populations. However, the correct estimation of infection parameters depends on the performance of detection and quantification methods. In this study, we designed a quantitative PCR (qPCR) assay targeting the 18S rRNA gene to estimate prevalence and intensity of Hepatozoon infection and compared its performance with microscopy and PCR. Using qPCR, we also compared various protocols that differ in the biological source and the extraction methods. Our results show that the qPCR approach on DNA extracted from blood samples, regardless of the extraction protocol, provided the most sensitive estimates of Hepatozoon infection parameters, while allowing us to differentiate between mixed infections of Adeleorinid (Hepatozoon) and Eimeriorinid (Schellackia and Lankesterella), based on the analysis of melting curves. We also show that tissue and saline methods can be used as low-cost alternatives in parasitological studies. The next step was to test our qPCR assay in a biological context, and for this purpose we investigated infection patterns between two sympatric lacertid species, which are naturally infected with apicomplexan hemoparasites, such as the genera Schellackia (Eimeriorina) and Hepatozoon (Adeleorina). From a biological standpoint, we found a positive correlation between Hepatozoon intensity of infection and host body size within each host species, being significantly higher in males, and higher in the smaller sized host species. These variations can be associated with a number of host intrinsic factors, like hormonal and immunological traits, that require further investigation. Our findings are relevant as they pinpoint the importance of accounting for methodological issues to better estimate infection in parasitological studies, and illustrate how between-host factors can influence parasite

  10. A comparison of multiple methods for estimating parasitemia of hemogregarine hemoparasites (apicomplexa: adeleorina) and its application for studying infection in natural populations.

    Science.gov (United States)

    Maia, João P; Harris, D James; Carranza, Salvador; Gómez-Díaz, Elena

    2014-01-01

Identifying factors influencing infection patterns among hosts is critical for our understanding of the evolution and impact of parasitism in natural populations. However, the correct estimation of infection parameters depends on the performance of detection and quantification methods. In this study, we designed a quantitative PCR (qPCR) assay targeting the 18S rRNA gene to estimate prevalence and intensity of Hepatozoon infection and compared its performance with microscopy and PCR. Using qPCR, we also compared various protocols that differ in the biological source and the extraction methods. Our results show that the qPCR approach on DNA extracted from blood samples, regardless of the extraction protocol, provided the most sensitive estimates of Hepatozoon infection parameters, while allowing us to differentiate between mixed infections of Adeleorinid (Hepatozoon) and Eimeriorinid (Schellackia and Lankesterella), based on the analysis of melting curves. We also show that tissue and saline methods can be used as low-cost alternatives in parasitological studies. The next step was to test our qPCR assay in a biological context, and for this purpose we investigated infection patterns between two sympatric lacertid species, which are naturally infected with apicomplexan hemoparasites, such as the genera Schellackia (Eimeriorina) and Hepatozoon (Adeleorina). From a biological standpoint, we found a positive correlation between Hepatozoon intensity of infection and host body size within each host species, being significantly higher in males, and higher in the smaller sized host species. These variations can be associated with a number of host intrinsic factors, like hormonal and immunological traits, that require further investigation. Our findings are relevant as they pinpoint the importance of accounting for methodological issues to better estimate infection in parasitological studies, and illustrate how between-host factors can influence parasite distributions in

  11. "A Comparison of Several Methods in a Rock Slope Stability ...

    African Journals Online (AJOL)

This research uses the mentioned methods and principles in the stability analysis of some rock slopes in an open pit mine in Syria, that is, the Khneifees phosphate mine. The importance of this research is that it shows the role of kinematical analysis in minimizing efforts when verifying the safety of rock slopes in site, and when ...

  12. A comparison of different methods for predicting coal devolatilisation kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Arenillas, A.; Rubiera, F.; Pevida, C.; Pis, J.J. [Instituto Nacional del Carbon, CSIC, Apartado 73, 33080 Oviedo (Spain)

    2001-04-01

    Knowledge of the coal devolatilisation rate is of great importance because it exerts a marked effect on the overall combustion behaviour. Different approaches can be used to obtain the kinetics of the complex devolatilisation process. The simplest are empirical and employ global kinetics, where the Arrhenius expression is used to correlate rates of mass loss with temperature. In this study a high volatile bituminous coal was devolatilised at four different heating rates in a thermogravimetric analyser (TG) linked to a mass spectrometer (MS). As a first approach, the Arrhenius kinetic parameters (k and A) were calculated from the experimental results, assuming a single step process. Another approach is the distributed-activation energy model, which is more complex due to the assumption that devolatilisation occurs through several first-order reactions, which occur simultaneously. Recent advances in the understanding of coal structure have led to more fundamental approaches for modelling devolatilisation behaviour, such as network models. These are based on a physico-chemical description of coal structure. In the present study the FG-DVC (Functional Group-Depolymerisation, Vaporisation and Crosslinking) computer code was used as the network model and the FG-DVC predicted evolution of volatile compounds was compared with the experimental results. In addition, the predicted rate of mass loss from the FG-DVC model was used to obtain a third devolatilisation kinetic approach. The three methods were compared and discussed, with the experimental results as a reference.
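The simplest global-kinetics approach mentioned above fits the Arrhenius expression ln k = ln A - E/(RT) by linear least squares. A sketch on synthetic data generated from known parameters (so the fit can be checked); the numbers are illustrative, not the coal's actual kinetics:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps_K, rates):
    """Least-squares fit of ln k vs. 1/T; returns (A, E)."""
    x = [1.0 / t for t in temps_K]
    y = [math.log(k) for k in rates]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    # ln k = ln A - (E/R) * (1/T)  =>  A = exp(intercept), E = -slope * R
    return math.exp(intercept), -slope * R

# synthetic rate data from assumed A = 1e8 s^-1, E = 120 kJ/mol
A_true, E_true = 1e8, 120e3
T = [700.0, 750.0, 800.0, 850.0, 900.0]
k = [A_true * math.exp(-E_true / (R * t)) for t in T]
A_fit, E_fit = fit_arrhenius(T, k)
```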

  13. A Comparison of Assessment Methods and Raters in Product Creativity

    Science.gov (United States)

    Lu, Chia-Chen; Luh, Ding-Bang

    2012-01-01

Although previous studies have attempted to use raters with different levels of experience to rate product creativity by adopting the Consensual Assessment Technique (CAT), the validity of replacing CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert vs.…

  14. Soil structure interaction calculations: a comparison of methods

    International Nuclear Information System (INIS)

    Wight, L.; Zaslawsky, M.

    1976-01-01

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes

  15. Soil structure interaction calculations: a comparison of methods

    Energy Technology Data Exchange (ETDEWEB)

    Wight, L.; Zaslawsky, M.

    1976-07-22

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes.

  16. The electronic states of 1,2,3-triazole studied by vacuum ultraviolet photoabsorption and ultraviolet photoelectron spectroscopy, and a comparison with ab initio configuration interaction methods

    DEFF Research Database (Denmark)

    Palmer, Michael H.; Hoffmann, Søren Vrønning; Jones, Nykola C.

    2011-01-01

    The Rydberg states in the vacuum ultraviolet photoabsorption spectrum of 1,2,3-triazole have been measured and analyzed with the aid of comparison to the UV valence photoelectron ionizations and the results of ab initio configuration interaction (CI) calculations. Calculated electronic ionization...... and excitation energies for singlet, triplet valence, and Rydberg states were obtained using multireference multiroot CI procedures with an aug-cc-pVTZ [5s3p3d1f] basis set and a set of Rydberg [4s3p3d3f] functions. Adiabatic excitation energies obtained for several electronic states using coupled...... are the excitations consistent with an f-series....

  17. Comparison of address-based sampling and random-digit dialing methods for recruiting young men as controls in a case-control study of testicular cancer susceptibility.

    Science.gov (United States)

    Clagett, Bartholt; Nathanson, Katherine L; Ciosek, Stephanie L; McDermoth, Monique; Vaughn, David J; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A

    2013-12-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18-55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be a comparably effective method of recruiting young males compared with landline RDD, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS.

  18. Matrix diffusion studies by electrical conductivity methods. Comparison between laboratory and in-situ measurements

    International Nuclear Information System (INIS)

    Ohlsson, Y.; Neretnieks, I.

    1998-01-01

    Traditional laboratory diffusion experiments in rock material are time consuming, and quite small samples are generally used. Electrical conductivity measurements, on the other hand, provide a fast means for examining transport properties in rock and allow measurements on larger samples as well. Laboratory measurements using electrical conductivity give results that compare well to those from traditional diffusion experiments. The measurement of the electrical resistivity in the rock surrounding a borehole is a standard method for the detection of water conducting fractures. If these data could be correlated to matrix diffusion properties, in-situ diffusion data from large areas could be obtained. This would be valuable because it would make it possible to obtain data very early in future investigations of potentially suitable sites for a repository. This study compares laboratory electrical conductivity measurements with in-situ resistivity measurements from a borehole at Aespoe. The laboratory samples consist mainly of Aespoe diorite and fine-grained granite and the rock surrounding the borehole of Aespoe diorite, Smaaland granite and fine-grained granite. The comparison shows good agreement between laboratory measurements and in-situ data.

  19. Comparison of association mapping methods in a complex pedigreed population

    DEFF Research Database (Denmark)

    Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc

    2010-01-01

    to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse with a high false-positive rate, probably due...... to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure...

  20. Task exposures in an office environment: a comparison of methods.

    Science.gov (United States)

    Van Eerd, Dwayne; Hogg-Johnson, Sheilah; Mazumder, Anjali; Cole, Donald; Wells, Richard; Moore, Anne

    2009-10-01

    Task-related factors such as frequency and duration are associated with musculoskeletal disorders in office settings. The primary objective was to compare various task recording methods as measures of exposure in an office workplace. A total of 41 workers from different jobs were recruited from a large urban newspaper (71% female; mean age 41 years, SD 9.6). Questionnaire, task diary, direct observation and video methods were used to record tasks, with a common set of task codes used across methods. The different methods yielded different estimates of task duration, number of tasks and task transitions. Self-report methods did not consistently result in longer task duration estimates. Methodological issues could explain some of the differences in estimates observed between methods. It was concluded that different task recording methods result in different estimates of exposure, likely because they capture different exposure constructs. This work addresses issues of exposure measurement in office environments and is of relevance to ergonomists and researchers interested in how best to assess the risk of injury among office workers. The paper discusses the trade-offs between precision, accuracy and burden in the collection of computer task-based exposure measures and the different underlying constructs captured by each method.

  1. A comparison of confirmatory factor analysis methods : Oblique multiple group method versus confirmatory common factor method

    NARCIS (Netherlands)

    Stuive, Ilse

    2007-01-01

    Confirmatory Factor Analysis (CFA) is a frequently used method when researchers have a specific hypothesis about the assignment of items to one or more subtests and want to examine whether this assignment is also supported by the collected research data. The most commonly used

  2. Structured Feedback Training for Time-Out: Efficacy and Efficiency in Comparison to a Didactic Method.

    Science.gov (United States)

    Jensen, Scott A; Blumberg, Sean; Browning, Megan

    2017-09-01

    Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though the findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. The findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.

  3. From a tree to a stand in Finnish boreal forests: biomass estimation and comparison of methods

    OpenAIRE

    Liu, Chunjiang

    2009-01-01

    There is an increasing need to compare the results obtained with different methods of estimation of tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)-Scots pine stand (Mature-Stand) located in southern Finland (61º50' N, 24º22' E). In particular, a comparison of the results of different estimati...

  4. A comparison of non-integrating reprogramming methods

    Science.gov (United States)

    Schlaeger, Thorsten M; Daheron, Laurence; Brickler, Thomas R; Entwisle, Samuel; Chan, Karrie; Cianci, Amelia; DeVine, Alexander; Ettenger, Andrew; Fitzgerald, Kelly; Godfrey, Michelle; Gupta, Dipti; McPherson, Jade; Malwadkar, Prerana; Gupta, Manav; Bell, Blair; Doi, Akiko; Jung, Namyoung; Li, Xin; Lynes, Maureen S; Brookes, Emily; Cherry, Anne B C; Demirbas, Didem; Tsankov, Alexander M; Zon, Leonard I; Rubin, Lee L; Feinberg, Andrew P; Meissner, Alexander; Cowan, Chad A; Daley, George Q

    2015-01-01

    Human induced pluripotent stem cells (hiPSCs) are useful in disease modeling and drug discovery, and they promise to provide a new generation of cell-based therapeutics. To date there has been no systematic evaluation of the most widely used techniques for generating integration-free hiPSCs. Here we compare Sendai-viral (SeV), episomal (Epi) and mRNA transfection methods using a number of criteria. All methods generated high-quality hiPSCs, but significant differences existed in aneuploidy rates, reprogramming efficiency, reliability and workload. We discuss the advantages and shortcomings of each approach, and present and review the results of a survey of a large number of human reprogramming laboratories on their independent experiences and preferences. Our analysis provides a valuable resource to inform the use of specific reprogramming methods for different laboratories and different applications, including clinical translation. PMID:25437882

  5. Determination of chloride in water. A comparison of three methods

    International Nuclear Information System (INIS)

    Steele, P.J.

    1978-09-01

    The presence of chloride in the water circuits of nuclear reactors, power stations and experimental rigs is undesirable because of the possibility of corrosion. Three methods are considered for the determination of chloride in water in the 0 to 10 μg ml⁻¹ range. The potentiometric method, using a silver-silver chloride electrode, is capable of determining chloride above the 0.1 μg ml⁻¹ level, with a standard deviation of 0.03 to 0.12 μg ml⁻¹ in the range 0.1 to 6.0 μg ml⁻¹ chloride. Bromide, iodide and strong reducing agents interfere but none of the cations likely to be present has an effect. The method is very susceptible to variations in temperature. The turbidimetric method involves the production of suspended silver chloride by the addition of silver nitrate solution to the sample. The method is somewhat unreliable and is more useful as a rapid, routine limit-testing technique. In the third method, chloride in the sample is pre-concentrated by co-precipitation on lead phosphate, redissolved in acidified ferric nitrate solution and determined colorimetrically by the addition of mercuric thiocyanate solution. It is suitable for determining chloride in the range 0 to 50 μg, using a sample volume of 100 to 500 ml. None of the chemical species likely to be present interferes. In all three methods, chloride contamination can occur at any point in the determination. Analyses should be carried out in conditions where airborne contamination is minimised and a high degree of cleanliness must be maintained. (author)

  6. A comparison of four gravimetric fine particle sampling methods.

    Science.gov (United States)

    Yanosky, J D; MacIntosh, D L

    2001-06-01

    A study was conducted to compare four gravimetric methods of measuring fine particle (PM2.5) concentrations in air: the BGI, Inc. PQ200 Federal Reference Method PM2.5 (FRM) sampler; the Harvard-Marple Impactor (HI); the BGI, Inc. GK2.05 KTL Respirable/Thoracic Cyclone (KTL); and the AirMetrics MiniVol (MiniVol). Pairs of FRM, HI, and KTL samplers and one MiniVol sampler were collocated and 24-hr integrated PM2.5 samples were collected on 21 days from January 6 through April 9, 2000. The mean and standard deviation of PM2.5 levels from the FRM samplers were 13.6 and 6.8 microg/m3, respectively. Significant systematic bias was found between mean concentrations from the FRM and the MiniVol (1.14 microg/m3, p = 0.0007), the HI and the MiniVol (0.85 microg/m3, p = 0.0048), and the KTL and the MiniVol (1.23 microg/m3, p = 0.0078) according to paired t test analyses. Linear regression on all pairwise combinations of the sampler types was used to evaluate measurements made by the samplers. None of the regression intercepts was significantly different from 0, and only two of the regression slopes were significantly different from 1, that for the FRM and the MiniVol [beta1 = 0.91, 95% CI (0.83-0.99)] and that for the KTL and the MiniVol [beta1 = 0.88, 95% CI (0.78-0.98)]. Regression R2 terms were 0.96 or greater between all pairs of samplers, and regression root mean square error terms (RMSE) were 1.65 microg/m3 or less. These results suggest that the MiniVol will underestimate measurements made by the FRM, the HI, and the KTL by an amount proportional to PM2.5 concentration. Nonetheless, these results indicate that all of the sampler types are comparable if approximately 10% variation on the mean levels and on individual measurement levels is considered acceptable and the actual concentration is within the range of this study (5-35 microg/m3).
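    The paired t-test and pairwise regression analysis described in this abstract can be sketched in a few lines. The data below are synthetic stand-ins for the 21 collocated sampling days (the proportional-bias factor 0.91 echoes the reported FRM/MiniVol slope); none of the numbers are the study's measurements.

```python
# Sketch of collocated-sampler comparison statistics: paired t-test for
# systematic bias, then regression of one sampler on the other with the
# slope judged against 1. Synthetic data, illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_pm25 = rng.uniform(5, 35, size=21)              # 21 days, 5-35 ug/m3 range
frm = true_pm25 + rng.normal(0, 1.0, 21)             # reference-type sampler
minivol = 0.91 * true_pm25 + rng.normal(0, 1.0, 21)  # proportionally low sampler

# Paired t-test for systematic bias between the collocated samplers
t_stat, p_val = stats.ttest_rel(frm, minivol)

# Linear regression; a slope different from 1 indicates proportional bias
res = stats.linregress(minivol, frm)
print(f"mean bias = {np.mean(frm - minivol):.2f} ug/m3, p = {p_val:.4f}")
print(f"slope = {res.slope:.2f}, intercept = {res.intercept:.2f}, "
      f"R2 = {res.rvalue**2:.2f}")
```

With real collocated data, confidence intervals on the slope (as reported in the abstract) would come from `res.stderr`.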

  7. Comparison of Localization Methods for a Robot Soccer Team

    Directory of Open Access Journals (Sweden)

    H. Levent Akın

    2008-11-01

    Full Text Available In this work, several localization algorithms designed and implemented for the Cerberus'05 robot soccer team are analyzed and compared. These algorithms are used for global localization of autonomous mobile agents in the robotic soccer domain, to overcome uncertainty in the sensors, the environment and the motion model. The algorithms are Reverse Monte Carlo Localization (R-MCL), Simple Localization (S-Loc) and Sensor Resetting Localization (SRL). R-MCL is a hybrid method based on both Markov Localization (ML) and Monte Carlo Localization (MCL), in which the ML module finds the region where the robot should be and MCL predicts the geometrical location with high precision by selecting samples in this region. S-Loc is another localization method in which just one sample per percept is drawn for global localization. Within this method, a further novel method, My Environment (ME), is designed to hold the history and compensate for the lack of information caused by the drastic decrease in the number of samples in S-Loc. ME together with S-Loc was used in the Technical Challenges at RoboCup 2005 and played an important role in achieving first place in the Challenges. In this work, these methods, together with SRL, a widely used and successful localization algorithm, are tested both offline and in real time. First they are tested on a challenging data set used by many researchers and compared in terms of error rate against different levels of noise and sparsity. The time required to recover from kidnapping and the speed of the methods are also tested and compared. Their performance is then evaluated in real-time tests with scenarios like those in the RoboCup Technical Challenges. The main aim is to find the method that is robust and fast, requires less computational power and memory than similar approaches, and is accurate enough for the high-level decision making that is vital for robot soccer.
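    A minimal 1-D particle filter conveys the Monte Carlo Localization idea that R-MCL and SRL build on: spread samples over the pose space, shift them with the motion model, reweight them by sensor agreement, and resample. The field length, beacon landmark, and noise levels below are invented for illustration and are not taken from the Cerberus system.

```python
# Minimal 1-D Monte Carlo Localization sketch: motion update, sensor
# weighting, resampling. Geometry and noise are hypothetical.
import random

random.seed(1)
FIELD = 10.0   # field length (m), hypothetical
BEACON = 7.0   # known landmark position, hypothetical

def sense(true_pos):
    """Noisy distance to the beacon."""
    return abs(BEACON - true_pos) + random.gauss(0, 0.1)

def mcl_step(particles, move, measurement):
    # Motion update: shift each particle, adding motion noise
    particles = [p + move + random.gauss(0, 0.05) for p in particles]
    # Sensor update: weight particles by agreement with the measurement
    weights = [1.0 / (1e-6 + abs(abs(BEACON - p) - measurement))
               for p in particles]
    # Resampling: draw a new particle set proportionally to weight
    return random.choices(particles, weights=weights, k=len(particles))

true_pos = 2.0
particles = [random.uniform(0, FIELD) for _ in range(500)]
for _ in range(30):
    true_pos += 0.1                      # robot creeps toward the beacon
    particles = mcl_step(particles, 0.1, sense(true_pos))

estimate = sum(particles) / len(particles)
print(f"true={true_pos:.2f}  estimate={estimate:.2f}")
```

The sample-starvation problem that S-Loc's My Environment history mitigates shows up here too: if the motion noise term is removed, resampling quickly collapses all particles onto a single hypothesis.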

  8. Comparison of Localization Methods for a Robot Soccer Team

    Directory of Open Access Journals (Sweden)

    Hatice Kose

    2006-12-01

    Full Text Available In this work, several localization algorithms designed and implemented for the Cerberus'05 robot soccer team are analyzed and compared. These algorithms are used for global localization of autonomous mobile agents in the robotic soccer domain, to overcome uncertainty in the sensors, the environment and the motion model. The algorithms are Reverse Monte Carlo Localization (R-MCL), Simple Localization (S-Loc) and Sensor Resetting Localization (SRL). R-MCL is a hybrid method based on both Markov Localization (ML) and Monte Carlo Localization (MCL), in which the ML module finds the region where the robot should be and MCL predicts the geometrical location with high precision by selecting samples in this region. S-Loc is another localization method in which just one sample per percept is drawn for global localization. Within this method, a further novel method, My Environment (ME), is designed to hold the history and compensate for the lack of information caused by the drastic decrease in the number of samples in S-Loc. ME together with S-Loc was used in the Technical Challenges at RoboCup 2005 and played an important role in achieving first place in the Challenges. In this work, these methods, together with SRL, a widely used and successful localization algorithm, are tested both offline and in real time. First they are tested on a challenging data set used by many researchers and compared in terms of error rate against different levels of noise and sparsity. The time required to recover from kidnapping and the speed of the methods are also tested and compared. Their performance is then evaluated in real-time tests with scenarios like those in the RoboCup Technical Challenges. The main aim is to find the method that is robust and fast, requires less computational power and memory than similar approaches, and is accurate enough for the high-level decision making that is vital for robot soccer.

  9. A comparison of methods for teaching receptive language to toddlers with autism.

    Science.gov (United States)

    Vedora, Joseph; Grandelski, Katrina

    2015-01-01

    The use of a simple-conditional discrimination training procedure, in which stimuli are initially taught in isolation with no other comparison stimuli, is common in early intensive behavioral intervention programs. Researchers have suggested that this procedure may encourage the development of faulty stimulus control during training. The current study replicated previous work that compared the simple-conditional and the conditional-only methods to teach receptive labeling of pictures to young children with autism spectrum disorder. Both methods were effective, but the conditional-only method required fewer sessions to mastery. © Society for the Experimental Analysis of Behavior.

  10. A comparison of cosegregation analysis methods for the clinical setting.

    Science.gov (United States)

    Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H

    2018-04-01

    Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform counting meioses, which cannot generate evidence in favor of a benign classification. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either could be combined in multifactorial calculations. Combining quantitative information will be important, as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website ( http://www.analyze.myvariant.org ) which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user-supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
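    The simplest of the three methods, counting meioses, has a one-line core: if a variant travels with disease through m informative meioses, the chance of that pattern for an unlinked variant is (1/2)^m, giving a likelihood ratio of 2^m in favor of cosegregation. This generic sketch is not the CoSeg package's code.

```python
# Counting meioses as a likelihood ratio: 2 ** (number of informative
# meioses in which variant and phenotype cosegregate).
def counting_meioses_lr(informative_meioses: int) -> float:
    """LR that variant and phenotype cosegregate vs. assort independently."""
    return 2.0 ** informative_meioses

for m in (3, 5, 10):
    print(f"{m} informative meioses -> LR = {counting_meioses_lr(m):.0f}")
```

Because observed cosegregation only multiplies the ratio upward, this tally accumulates evidence for pathogenicity but, as the abstract notes, cannot build evidence that a variant is benign; the FLB and CSLR likelihood models can.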

  11. Demographic analysis, a comparison of the jackknife and bootstrap methods, and predation projection: a case study of Chrysopa pallens (Neuroptera: Chrysopidae).

    Science.gov (United States)

    Yu, Ling-Yuan; Chen, Zhen-Zhen; Zheng, Fang-Qiang; Shi, Ai-Ju; Guo, Ting-Ting; Yeh, Bao-Hua; Chi, Hsin; Xu, Yong-Yu

    2013-02-01

    The life table of the green lacewing, Chrysopa pallens (Rambur), was studied at 22°C, a photoperiod of 15:9 (L:D) h, and 80% relative humidity in the laboratory. The raw data were analyzed using the age-stage, two-sex life table. The intrinsic rate of increase (r), the finite rate of increase (λ), the net reproduction rate (R0), and the mean generation time (T) of Ch. pallens were 0.1258 d⁻¹, 1.1340 d⁻¹, 241.4 offspring and 43.6 d, respectively. For the estimation of the means, variances, and SEs of the population parameters, we compared the jackknife and bootstrap techniques. Although similar values of the means and SEs were obtained with both techniques, significant differences were observed in the frequency distribution and variances of all parameters. The jackknife technique will result in a zero net reproductive rate upon the omission of a male, an immature death, or a nonreproductive female. This result represents, however, a contradiction because an intrinsic rate of increase exists in this situation. Therefore, we suggest that the jackknife technique should not be used for the estimation of population parameters. In predator-prey interactions, the nonpredatory egg and pupal stages of the predator are time refuges for the prey, and the pest population can grow during these times. In this study, a population projection based on the age-stage, two-sex life table is used to determine the optimal interval between releases to fill the predation gaps and maintain the predatory capacity of the control agent.
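    The two resampling schemes being compared differ only in how pseudo-samples are formed: the jackknife recomputes the statistic with each observation left out once, while the bootstrap resamples with replacement. The sketch below uses the log of a sample mean as a generic stand-in for a life-table rate; the data are synthetic, not the lacewing observations.

```python
# Jackknife vs. bootstrap variance estimation for a derived statistic.
import math
import random

random.seed(2)
data = [random.gauss(10.0, 2.0) for _ in range(50)]

def stat(xs):
    """Derived statistic: log of the sample mean (stand-in for a rate)."""
    return math.log(sum(xs) / len(xs))

# Jackknife: leave each observation out once
n = len(data)
jack = [stat(data[:i] + data[i + 1:]) for i in range(n)]
jack_var = (n - 1) / n * sum((j - sum(jack) / n) ** 2 for j in jack)

# Bootstrap: resample with replacement many times
boot = [stat(random.choices(data, k=n)) for _ in range(2000)]
bmean = sum(boot) / len(boot)
boot_var = sum((b - bmean) ** 2 for b in boot) / (len(boot) - 1)

print(f"jackknife SE = {jack_var**0.5:.4f}, bootstrap SE = {boot_var**0.5:.4f}")
```

The abstract's objection to the jackknife can be seen in this structure: a single left-out individual (e.g. the only reproducing female) can force a degenerate pseudo-value such as a zero net reproductive rate, distorting the variance, whereas bootstrap resamples rarely hinge on one observation.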

  12. Comparison of support vector machine and neural network classification methods in hyperspectral mapping of ophiolite mélanges – A case study of east of Iran

    Directory of Open Access Journals (Sweden)

    Bahram Bahrambeygi

    2017-06-01

    Full Text Available Ophiolitic regions are among the most complex geological settings. Mapping these areas requires broad and precise studies and tools because of the mixed rocks and confused units. Data from the Hyperion hyperspectral sensor are an advanced tool for earth-surface mapping, containing rich information on shallow electromagnetic reflection in 242 continuous bands. Because tens of these bands are contaminated by noise, we removed the 87 noisiest bands and focused the study on the remaining 155 low-noise bands. In the present study, two spectrum-based classification algorithms, support vector machine and neural network, are compared on a Hyperion image for the classification of cluttered units in an ophiolite suite. The study area is the Mesina region in the collisional ophiolitic belt of southeastern Iran. To validate the processing results, numerous random locations and control points in this region were studied at field scale and sampled for laboratory surveys. Samples were investigated in microscopic section and by an electron microprobe system. Based on the laboratory and field studies, the lithology of this area can be divided into five general groups: melange series, metamorphic units, Oligocene-Miocene to Quaternary volcanic units, limestone units, and flysch units. Standard points defined from this field and laboratory work were used to validate the accuracy of the processing results. The support vector machine and neural network methods, as advanced hyperspectral image-processing methods, achieved overall accuracies of 52% and 65%, respectively. Consequently, the neural-network-based method for hyperspectral classification achieves an acceptable rate in separating blended, complicated units.
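    The comparison workflow in this record, train an SVM and a neural network on per-pixel spectra and compare overall accuracy, can be sketched with scikit-learn. The data below are random stand-ins for Hyperion spectra (155 features per "pixel", five classes), so the printed accuracies illustrate the procedure only, not the study's 52%/65% result.

```python
# SVM vs. neural network (MLP) classification of synthetic multiband pixels,
# compared by overall accuracy on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# 155 "bands" per pixel, 5 lithology-like classes, mirroring the setup above
X, y = make_classification(n_samples=1000, n_features=155, n_informative=30,
                           n_classes=5, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

svm_acc = SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)
mlp_acc = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0).fit(Xtr, ytr).score(Xte, yte)
print(f"SVM overall accuracy: {svm_acc:.2f}")
print(f"MLP overall accuracy: {mlp_acc:.2f}")
```

In a real study the ground-truth labels would come from the field and microprobe control points, and a per-class confusion matrix would be reported alongside overall accuracy.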

  13. A Comparison of Fully Automated Methods of Data Analysis and Computer Assisted Heuristic Methods in an Electrode Kinetic Study of the Pathologically Variable [Fe(CN)6]3–/4– Process by AC Voltammetry

    KAUST Repository

    Morris, Graham P.

    2013-12-17

    Fully automated and computer assisted heuristic data analysis approaches have been applied to a series of AC voltammetric experiments undertaken on the [Fe(CN)6]3-/4- process at a glassy carbon electrode in 3 M KCl aqueous electrolyte. The recovered parameters in all forms of data analysis encompass E0 (reversible potential), k0 (heterogeneous charge transfer rate constant at E0), α (charge transfer coefficient), Ru (uncompensated resistance), and Cdl (double layer capacitance). The automated method of analysis employed time domain optimization and Bayesian statistics. This and all other methods assumed the Butler-Volmer model applies for electron transfer kinetics, planar diffusion for mass transport, Ohm's Law for Ru, and a potential-independent Cdl model. Heuristic approaches utilize combinations of Fourier Transform filtering, sensitivity analysis, and simplex-based forms of optimization applied to resolved AC harmonics and rely on experimenter experience to assist in experiment-theory comparisons. Remarkable consistency of parameter evaluation was achieved, although the fully automated time domain method provided consistently higher α values than those based on frequency domain data analysis. The origin of this difference is that the implemented fully automated method requires a perfect model for the double layer capacitance. In contrast, the importance of imperfections in the double layer model is minimized when analysis is performed in the frequency domain. Substantial variation in k0 values was found by analysis of the 10 data sets for this highly surface-sensitive pathologically variable [Fe(CN)6]3-/4- process, but remarkably, all fit the quasi-reversible model satisfactorily. © 2013 American Chemical Society.

  14. Soybean phospholipase D activity determination. A comparison of two methods

    Directory of Open Access Journals (Sweden)

    Ré, E.

    2007-09-01

    Full Text Available Due to a discrepancy between previously published results, two methods to determine soybean phospholipase D activity were evaluated. One method is based on extracting the enzyme from whole soybean flour and quantifying the enzyme activity in the extract. The other method quantifies the enzymatic activity in whole soybean flour without enzyme extraction. In the extraction-based method, both the extraction time and the number of extractions were optimized. The highest phospholipase D activity values were obtained with the method without enzyme extraction. This method is less complex, requires less running time, and the conditions of the medium in which phospholipase D acts resemble the conditions found in the oil industry.

  15. Comparison the diagnostic value of serological and molecular methods for screening and detecting Chlamydia trachomatis in semen of infertile men: A cross-sectional study

    Directory of Open Access Journals (Sweden)

    Amin Khoshakhlagh

    2017-12-01

    Full Text Available Background: Chlamydia trachomatis (CT), with damaging effects on sperm quality parameters, can often cause infertility in men. Objective: The main objective of this study was to determine the diagnostic value of polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) for screening and detecting CT in semen samples of infertile men. Materials and Methods: In this cross-sectional study, 465 men referring to the clinical laboratory of Royan Institute were chosen for primary screening and detection of the presence of CT. 93 samples were normozoospermic, with normal sperm parameters, i.e. sperm number, motility and morphology (asymptomatic), and 372 had abnormal sperm parameters (symptomatic) in semen analysis. The ELISA test was performed as the screening test. Samples with optical density (OD) >0.200 were selected as the case and asymptomatic samples with OD 0.400 as the ELISA positive, the diagnostic values of CT-ELISA positivity in symptomatic and asymptomatic infertile patients were 0.019 (7 of 372) and 0.021 (2 of 93), respectively. There was no relationship between the presence of CT infection and different sperm abnormalities. Conclusion: The anti-CT IgA ELISA test may be introduced as an appropriate tool for screening in seminal plasma to select suspicious samples for PCR confirmatory tests.

  16. A comparison of multidimensional scaling methods for perceptual mapping

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare

  17. Aggregation Methods in International Comparisons

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2001-01-01

    This paper reviews the progress that has been made over the past decade in understanding the nature of the various multilateral international comparison methods. Fifteen methods are discussed and subjected to a system of ten tests. In addition, attention is paid to recently developed

  18. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
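    The arithmetic the abstract describes, blur the attenuation-corrected image with a scatter response function, scale by a scatter fraction, and subtract, can be sketched numerically. The Gaussian kernel width and the constant scatter fraction below are invented placeholders, not the paper's fitted functions.

```python
# Sketch of image-based scatter correction: scatter estimate = fraction *
# (image convolved with a scatter kernel); corrected = image - estimate.
import numpy as np

def ibsc_correct(img_ac, sigma=3.0, scatter_fraction=0.3):
    # Separable Gaussian blur as a stand-in scatter response function
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 0, img_ac)
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, blurred)
    scatter = scatter_fraction * blurred
    return np.clip(img_ac - scatter, 0, None)  # no negative counts

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 100.0  # uniform hot region in a cold background
corrected = ibsc_correct(phantom)
print(f"center before/after: {phantom[32, 32]:.1f} / {corrected[32, 32]:.1f}")
```

In the actual method the scatter fraction is an image-based function rather than a constant, and the input is the attenuation-corrected reconstruction, but the convolve-scale-subtract structure is the same.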

  19. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  20. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  1. A comparison of published methods of calculation of defect significance

    International Nuclear Information System (INIS)

    Ingham, T.; Harrison, R.P.

    1982-01-01

    This paper presents some of the results obtained in a round-robin calculational exercise organised by the OECD Committee on the Safety of Nuclear Installations (CSNI). The exercise was initiated to examine practical aspects of using documented elastic-plastic fracture mechanics methods to calculate defect significance. The extent to which the objectives of the exercise were met is illustrated using solutions to 'standard' problems produced by UKAEA and CEGB using the methods given in ASME XI, Appendix A, BSI PD6493, and the CEGB R/H/R6 Document. Differences in critical or tolerable defect size defined using these procedures are examined in terms of their different treatments and reasons for discrepancies are discussed. (author)

  2. Comparison of Force Reconstruction Methods for a Lumped Mass Beam

    Directory of Open Access Journals (Sweden)

    Vesta I. Bateman

    1997-01-01

    Full Text Available Two extensions of the force reconstruction method, the sum of weighted accelerations technique (SWAT, are presented in this article. SWAT requires the use of the structure’s elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a calibrated force input. The second technique uses the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using time eliminated elastic modes. All three methods are used to reconstruct forces for a simple structure.
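The shared theory behind the SWAT variants is a weight vector that annihilates the elastic mode shapes while preserving the rigid-body content, so that the weighted sum of measured accelerations equals the applied force. A minimal sketch under illustrative assumptions (a single translational rigid-body mode of ones, weights from a least-squares solve; function names are invented):

```python
import numpy as np

def swat_weights(phi_rigid, phi_elastic, total_mass):
    """Weight vector w with phi_elastic^T w = 0 (elastic modes cancelled)
    and phi_rigid^T w = total_mass, so w @ accels reconstructs the force."""
    A = np.column_stack([phi_rigid, phi_elastic]).T
    b = np.concatenate([[total_mass], np.zeros(phi_elastic.shape[1])])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution
    return w

def reconstruct_force(w, accels):
    """accels: (n_sensors, n_samples) measured accelerations."""
    return w @ accels
```

With more sensors than constrained modes the system is underdetermined, and the minimum-norm weights spread the reconstruction across all sensors, which is one plausible choice among many.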

  3. The Value of Mixed Methods Research: A Mixed Methods Study

    Science.gov (United States)

    McKim, Courtney A.

    2017-01-01

    The purpose of this explanatory mixed methods study was to examine the perceived value of mixed methods research for graduate students. The quantitative phase was an experiment examining the effect of a passage's methodology on students' perceived value. Results indicated students scored the mixed methods passage as more valuable than those who…

  4. Comparison Among Methods of Retinopathy Assessment (CAMRA) Study: Smartphone, Nonmydriatic, and Mydriatic Photography.

    Science.gov (United States)

    Ryan, Martha E; Rajalakshmi, Ramachandran; Prathiba, Vijayaraghavan; Anjana, Ranjit Mohan; Ranjani, Harish; Narayan, K M Venkat; Olsen, Timothy W; Mohan, Viswanathan; Ward, Laura A; Lynn, Michael J; Hendrick, Andrew M

    2015-10-01

    We compared smartphone fundus photography, nonmydriatic fundus photography, and 7-field mydriatic fundus photography for their abilities to detect and grade diabetic retinopathy (DR). This was a prospective, comparative study of 3 photography modalities. Diabetic patients (n = 300) were recruited at the ophthalmology clinic of a tertiary diabetes care center in Chennai, India. Patients underwent photography by all 3 modalities, and photographs were evaluated by 2 retina specialists. The sensitivity and specificity in the detection of DR for both smartphone and nonmydriatic photography were determined by comparison with the standard method, 7-field mydriatic fundus photography. The sensitivity and specificity of smartphone fundus photography, compared with 7-field mydriatic fundus photography, for the detection of any DR were 50% (95% confidence interval [CI], 43-56) and 94% (95% CI, 92-97), respectively, and of nonmydriatic fundus photography were 81% (95% CI, 75-86) and 94% (95% CI, 92-96), respectively. The sensitivity and specificity of smartphone fundus photography for the detection of vision-threatening DR were 59% (95% CI, 46-72) and 100% (95% CI, 99-100), respectively, and of nonmydriatic fundus photography were 54% (95% CI, 40-67) and 99% (95% CI, 98-100), respectively. Smartphone and nonmydriatic fundus photography are each able to detect DR and sight-threatening disease. However, the nonmydriatic camera is more sensitive at detecting DR than the smartphone. At this time, the benefits of the smartphone (connectivity, portability, and reduced cost) are not offset by the lack of sufficient sensitivity for detection of DR in most clinical circumstances. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
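Sensitivity and specificity with confidence intervals of the kind reported above come directly from the 2x2 agreement table against the reference standard. A sketch using the Wilson score interval (an assumption; the paper does not state which CI method it used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity versus a reference standard,
    each with a 95% Wilson confidence interval."""
    sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
    spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
    return (sens, sens_ci), (spec, spec_ci)
```

The counts in the test below are invented for illustration, not taken from the study.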

  5. A Comparison of Reading Response Methods to Increase Student Learning

    Directory of Open Access Journals (Sweden)

    Cheryl J. Davis

    2016-01-01

    Full Text Available It is common in college courses to test students on the required readings for that course. With a rise in online education it is often the case that students are required to provide evidence of reading the material. However, there is little empirical research stating the best written means to assess that students read the materials. This study experimentally compared the effect of assigned reading summaries or study questions on student test performance. The results revealed that study questions produced higher quiz scores and higher preparation for the quiz, based on student feedback. Limitations of the study included a small sample size and extraneous activities that may have affected general knowledge on a topic. Results suggest that study questions focusing students on critical information in the required readings improve student learning.

  6. Cell synchrony techniques. I. A comparison of methods

    Energy Technology Data Exchange (ETDEWEB)

    Grdina, D.J.; Meistrich, M.L.; Meyn, R.E.; Johnson, T.S.; White, R.A.

    1984-01-01

    Selected cell synchrony techniques, as applied to asynchronous populations of Chinese hamster ovary (CHO) cells, have been compared. Aliquots from the same culture of exponentially growing cells were synchronized using mitotic selection, mitotic selection and hydroxyurea block, centrifugal elutriation, or an EPICS V cell sorter. Sorting of cells was achieved after staining with Hoechst 33258. After synchronization by the various methods, the relative distribution of cells in G1, S, or G2 + M phases of the cell cycle was determined by flow cytometry. Fractions of synchronized cells obtained from each method were replated and allowed to progress through a second cell cycle. Mitotic selection gave rise to relatively pure and unperturbed early G1 phase cells. While cell synchrony rapidly dispersed with time, cells progressed through the cell cycle in 12 hr. Sorting with the EPICS V on the modal G1 peak yielded a relatively pure but heterogeneous G1 population (i.e., early to late G1). Again, synchrony dispersed with time, but cell-cycle progression required 14 hr. With centrifugal elutriation, several different cell populations synchronized throughout the cell cycle could be rapidly obtained with a purity comparable to mitotic selection and cell sorting. It was concluded that, either alone or in combination with blocking agents such as hydroxyurea, elutriation and mitotic selection were both excellent methods for synchronizing CHO cells. Cell sorting exhibited limitations in sample size and time required for synchronizing CHO cells. Its major advantage would be its ability to isolate cell populations unique with respect to selected cellular parameters. 19 references, 9 figures.

  7. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    Science.gov (United States)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  8. A Comparison of Methods for Player Clustering via Behavioral Telemetry

    DEFF Research Database (Denmark)

    Drachen, Anders; Thurau, C.; Sifa, R.

    2013-01-01

    The analysis of user behavior in digital games has been aided by the introduction of user telemetry in game development, which provides unprecedented access to quantitative data on user behavior from the installed game clients of the entire population of players. Player behavior telemetry datasets ... patterns in the behavioral data, and developing profiles that are actionable to game developers. There are numerous methods for unsupervised clustering of user behavior, e.g. k-means/c-means, Nonnegative Matrix Factorization, or Principal Component Analysis. Although all yield behavior categorizations, interpretation of the resulting categories in terms of actual play behavior can be difficult if not impossible. In this paper, a range of unsupervised techniques are applied together with Archetypal Analysis to develop behavioral clusters from playtime data of 70,014 World of Warcraft players, covering a five...

  9. A statistical comparison of accelerated concrete testing methods

    OpenAIRE

    Denny Meyer

    1997-01-01

    Accelerated curing results, obtained after only 24 hours, are used to predict the 28 day strength of concrete. Various accelerated curing methods are available. Two of these methods are compared in relation to the accuracy of their predictions and the stability of the relationship between their 24 hour and 28 day concrete strength. The results suggest that Warm Water accelerated curing is preferable to Hot Water accelerated curing of concrete. In addition, some other methods for improving the...

  10. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions (12 for monthly means and 12 for monthly standard deviations). NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB).
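The QPPQ idea of transferring flows through flow duration curves can be sketched in a few lines: each donor-gage daily flow is mapped to its exceedance probability on the donor's empirical FDC, and the ungaged site's flow is read off its estimated FDC at the same probability. The Weibull plotting position and the function names are illustrative assumptions, not the report's exact implementation:

```python
import numpy as np

def qppq_transfer(donor_flows, target_fdc_probs, target_fdc_flows):
    """QPPQ sketch: donor daily flow -> exceedance probability on the
    donor's empirical FDC -> flow at that probability on the target FDC."""
    n = len(donor_flows)
    ranks = donor_flows.argsort().argsort()        # 0 = smallest flow
    exceed_p = 1.0 - (ranks + 1) / (n + 1.0)       # Weibull plotting position
    order = np.argsort(target_fdc_probs)           # np.interp needs increasing x
    return np.interp(exceed_p, target_fdc_probs[order], target_fdc_flows[order])
```

Because the mapping goes through probabilities, the transferred series keeps the donor's day-to-day sequencing while adopting the target site's flow magnitudes.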

  11. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate of VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk, run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...

  12. A comparison of five extraction methods for extracellular polymeric ...

    African Journals Online (AJOL)

    Two physical methods (centrifugation and ultrasonication) and 3 chemical methods (extraction with EDTA, extraction with formaldehyde, and extraction with formaldehyde plus NaOH) for extraction of EPS from alga-bacteria biofilm were assessed. Pretreatment with ultrasound at low intensity doubled the EPS yield without ...

  13. Physics Exam Preparation: A Comparison of Three Methods

    Science.gov (United States)

    Fakcharoenphol, Witat; Stelzer, Timothy

    2014-01-01

    In this clinical study on helping students prepare for an exam, we compared three different treatments. All students were asked to take a practice exam. One group was then given worked-out solutions for that exam, another group was given the solutions and targeted exercises to do as homework based on the result of their practice exam, and the…

  14. Trauma outcome analysis of a Jakarta University Hospital using the TRISS method: validation and limitation in comparison with the major trauma outcome study. Trauma and Injury Severity Score

    NARCIS (Netherlands)

    Joosse, P.; Soedarmo, S.; Luitse, J. S.; Ponsen, K. J.

    2001-01-01

    In this prospective study, the TRISS methodology is used to compare trauma care at a University Hospital in Jakarta, Indonesia, with the standards reported in the Major Trauma Outcome Study (MTOS). Between February 24, 1999, and July 1, 1999, all consecutive patients with multiple and severe trauma

  15. Three looks at users: a comparison of methods for studying digital library use. Keywords: User studies, Digital libraries, Digital music libraries, Music, Information use, Information science, Contextual inquiry, Contextual design, User research, Questionnaires, Log file analysis

    Directory of Open Access Journals (Sweden)

    Mark Notess

    2004-01-01

    Full Text Available Compares three user research methods of studying real-world digital library usage within the context of the Variations and Variations2 digital music libraries at Indiana University. After a brief description of both digital libraries, each method is described and illustrated with findings from the studies. User satisfaction questionnaires were used in two studies, one of Variations (n=30) and the other of Variations2 (n=12). Second, session activity log files were examined for 175 Variations2 sessions using both quantitative and qualitative methods. The third method, contextual inquiry, is illustrated with results from field observations of four voice students' information usage patterns. The three methods are compared in terms of expertise required; time required to set up, conduct, and analyse resulting data; and the benefits derived. Further benefits are achieved with a mixed-methods approach, combining the strengths of the methods to answer questions lingering as a result of other methods.

  16. A statistical comparison of accelerated concrete testing methods

    Directory of Open Access Journals (Sweden)

    Denny Meyer

    1997-01-01

    Full Text Available Accelerated curing results, obtained after only 24 hours, are used to predict the 28 day strength of concrete. Various accelerated curing methods are available. Two of these methods are compared in relation to the accuracy of their predictions and the stability of the relationship between their 24 hour and 28 day concrete strength. The results suggest that Warm Water accelerated curing is preferable to Hot Water accelerated curing of concrete. In addition, some other methods for improving the accuracy of predictions of 28 day strengths are suggested. In particular the frequency at which it is necessary to recalibrate the prediction equation is considered.
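The calibration described above (predicting 28-day strength from a 24-hour accelerated result, and periodically recalibrating the prediction equation) can be sketched as a simple linear fit; the linear form and the function names are illustrative assumptions:

```python
import numpy as np

def calibrate(accel_24h, strength_28d):
    """Fit the prediction line s28 = a + b * s24 from paired results."""
    b, a = np.polyfit(accel_24h, strength_28d, 1)  # [slope, intercept]
    return a, b

def predict(a, b, accel_24h):
    """Predicted 28-day strength from a 24-hour accelerated result."""
    return a + b * np.asarray(accel_24h)
```

The recalibration question raised in the abstract corresponds to how often `calibrate` should be refit on fresh paired 24-hour/28-day results as materials and conditions drift.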

  17. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  18. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  19. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  20. A prospective randomized comparison between two MRI studies of the small bowel in Crohn's disease, the oral contrast method and MR enteroclysis

    International Nuclear Information System (INIS)

    Negaard, Anne; Paulsen, Vemund; Lygren, Idar; Sandvik, Leiv; Berstad, Audun E.; Borthne, Arne; Try, Kirsti; Storaas, Tryggve; Klow, Nils-Einar

    2007-01-01

    The aim was to compare bowel distension and diagnostic properties of magnetic resonance imaging of the small bowel with oral contrast (MRI per OS) with magnetic resonance enteroclysis (MRE). Forty patients with suspected Crohn's disease (CD) were examined with both MRI methods. MRI per OS was performed with a 6% mannitol solution and MRE with nasojejunal intubation and a polyethylene glycol solution. MRI protocol consisted of balanced fast field echo (B-FFE), T2 and T1 sequences with and without gadolinium. Two experienced radiologists individually evaluated bowel distension and pathological findings including wall thickness (BWT), contrast enhancement (BWE), ulcer (BWU), stenosis (BWS) and edema (EDM). The diameter of the small bowel was smaller with MRI per OS than with MRE (difference jejunum: 0.55 cm, p < 0.001; ileum: 0.35 cm, p < 0.001; terminal ileum: 0.09 cm, p = 0.08). However, CD was diagnosed with high diagnostic accuracy (sensitivity, specificity, positive and negative predictive values: MRI per OS 88%, 89%, 89%, 89%; MRE 88%, 84%, 82%, 89%) and inter-observer agreement (MRI per OS k = 0.95; MRE k = 1). In conclusion, bowel distension was inferior in MRI per OS compared to MRE. However, both methods diagnosed CD with a high diagnostic accuracy and reproducibility. (orig.)

  1. Teaching Idiomatic Expressions: A Comparison of Two Instructional Methods.

    Science.gov (United States)

    Rittenhouse, Robert K.; Kenyon, Patricia L.

    1990-01-01

    Twenty hearing-impaired adolescents were taught idiomatic expressions using captioned videotape presentations followed by classroom discussion, or by extended classroom discussions. Improvement in understanding idioms was significantly greater under the videotape method. (Author/JDD)

  2. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires the parameterization and execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study recreated these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
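The two compared methods can be sketched in a few lines: WSM scores each alternative as a weighted sum of its criterion values, while AHP derives the criterion weights from a pairwise comparison matrix (here via the standard principal-eigenvector construction). The function names and the example data are invented for illustration:

```python
import numpy as np

def wsm_scores(values, weights):
    """Weighted Sum Model: each alternative's score is the weighted sum
    of its (already normalised) criterion values."""
    return np.asarray(values) @ np.asarray(weights)

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise comparison matrix,
    taken as the normalised principal eigenvector."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()
```

For a perfectly consistent pairwise matrix (entries w_i / w_j) the eigenvector recovers the underlying weights exactly, which makes the comparison with directly supplied WSM weights well defined.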

  3. A comparison of alternative methods for measuring cigarette prices.

    Science.gov (United States)

    Chaloupka, Frank J; Tauras, John A; Strasser, Julia H; Willis, Gordon; Gibson, James T; Hartman, Anne M

    2015-05-01

    Government agencies, public health organisations and tobacco control researchers rely on accurate estimates of cigarette prices for a variety of purposes. Since the 1950s, the Tax Burden on Tobacco (TBOT) has served as the most widely used source of this price data despite its limitations. This paper compares the prices and collection methods of the TBOT retail-based data and the 2003 and 2006/2007 waves of the population-based Tobacco Use Supplement to the Current Population Survey (TUS-CPS). From the TUS-CPS, we constructed multiple state-level measures of cigarette prices, including weighted average prices per pack (based on average prices for single-pack purchases and average prices for carton purchases) and compared these with the weighted average price data reported in the TBOT. We also constructed several measures of tax avoidance from the TUS-CPS self-reported data. For the 2003 wave, the average TUS-CPS price was 71 cents per pack less than the average TBOT price; for the 2006/2007 wave, the difference was 47 cents. TUS-CPS and TBOT prices were also significantly different at the state level. However, these differences varied widely by state due to tax avoidance opportunities, such as cross-border purchasing. The TUS-CPS can be used to construct valid measures of cigarette prices. Unlike the TBOT, the TUS-CPS captures the effect of price-reducing marketing strategies, as well as tax avoidance practices and non-traditional types of purchasing. Thus, self-reported data like TUS-CPS appear to have advantages over TBOT in estimating the 'real' price that smokers face. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  4. THE FORMATION OF A MILKY WAY-SIZED DISK GALAXY. I. A COMPARISON OF NUMERICAL METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Qirong; Li, Yuexing, E-mail: qxz125@psu.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States)

    2016-11-01

    The long-standing challenge of creating a Milky Way- (MW-) like disk galaxy from cosmological simulations has motivated significant developments in both numerical methods and physical models. We investigate these two fundamental aspects in a new comparison project using a set of cosmological hydrodynamic simulations of an MW-sized galaxy. In this study, we focus on the comparison of two particle-based hydrodynamics methods: an improved smoothed particle hydrodynamics (SPH) code Gadget, and a Lagrangian Meshless Finite-Mass (MFM) code Gizmo. All the simulations in this paper use the same initial conditions and physical models, which include star formation, “energy-driven” outflows, metal-dependent cooling, stellar evolution, and metal enrichment. We find that both numerical schemes produce a late-type galaxy with extended gaseous and stellar disks. However, notable differences are present in a wide range of galaxy properties and their evolution, including star-formation history, gas content, disk structure, and kinematics. Compared to Gizmo, the Gadget simulation produced a larger fraction of cold, dense gas at high redshift which fuels rapid star formation and results in a higher stellar mass by 20% and a lower gas fraction by 10% at z = 0, and the resulting gas disk is smoother and more coherent in rotation due to damping of turbulent motion by the numerical viscosity in SPH, in contrast to the Gizmo simulation, which shows a more prominent spiral structure. Given its better convergence properties and lower computational cost, we argue that the MFM method is a promising alternative to SPH in cosmological hydrodynamic simulations.

  5. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    Directory of Open Access Journals (Sweden)

    Min Hye Jang

    Full Text Available In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (Area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.

  6. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    Science.gov (United States)

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (Area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.
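    The two counting schemes in this record reduce to simple arithmetic once per-field counts are available. A minimal sketch (hypothetical counts and helper names, not the study's code; it assumes the "hot spot" value is the highest-scoring field):

```python
from statistics import mean

def ki67_indices(field_counts):
    """Ki-67 labeling indices (%) from per-field (positive, total) counts.

    The study counted three representative areas per tumor; here the
    'average method' is the mean over fields and the 'hot spot method'
    is assumed to be the highest-scoring field.
    """
    per_field = [100.0 * pos / total for pos, total in field_counts]
    return mean(per_field), max(per_field)

# Hypothetical counts for three fields of one tumor
fields = [(120, 1000), (180, 1000), (150, 1000)]
avg, hot = ki67_indices(fields)
delta = hot - avg   # the abstract's ΔKi-67
ratio = hot / avg   # the abstract's H/A ratio
```

    On these counts ΔKi-67 = 3.0 and H/A = 1.2; since the hottest field can never score below the field average, the H/A ratio is bounded below by 1.0, consistent with the study's observed range.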

  7. Formal Methods for Abstract Specifications – A Comparison of Concepts

    DEFF Research Database (Denmark)

    Instenberg, Martin; Schneider, Axel; Schnetter, Sabine

    2006-01-01

    In industry, formal methods are becoming increasingly important for the verification of hardware and software designs. However, current practice is to specify system and protocol functionality at a high level of abstraction using textual descriptions, and to verify system behavior by manual inspections and tests. To facilitate the introduction of formal methods into the development process of complex systems and protocols, two different tools that evolved from research activities – UPPAAL and SpecEdit – have been investigated and compared regarding their concepts and functionality...

  8. Physics exam preparation: A comparison of three methods

    OpenAIRE

    Witat Fakcharoenphol; Timothy Stelzer

    2014-01-01

    In this clinical study on helping students prepare for an exam, we compared three different treatments. All students were asked to take a practice exam. One group was then given worked-out solutions for that exam, another group was given the solutions and targeted exercises to do as homework based on the result of their practice exam, and the third group was given the solutions, homework, and also an hour of one-on-one tutoring. Participants from all three conditions significantly outperforme...

  9. Physics exam preparation: A comparison of three methods

    Directory of Open Access Journals (Sweden)

    Witat Fakcharoenphol

    2014-03-01

    Full Text Available In this clinical study on helping students prepare for an exam, we compared three different treatments. All students were asked to take a practice exam. One group was then given worked-out solutions for that exam, another group was given the solutions and targeted exercises to do as homework based on the result of their practice exam, and the third group was given the solutions, homework, and also an hour of one-on-one tutoring. Participants from all three conditions significantly outperformed the control group on the midterm exam. However, participants that had one-on-one tutoring did not outperform the other two participant groups.

  10. A Comparison between the Effect of Cooperative Learning Teaching Method and Lecture Teaching Method on Students' Learning and Satisfaction Level

    Science.gov (United States)

    Mohammadjani, Farzad; Tonkaboni, Forouzan

    2015-01-01

    The aim of the present research is to investigate a comparison between the effect of cooperative learning teaching method and lecture teaching method on students' learning and satisfaction level. The research population consisted of all the fourth grade elementary school students of educational district 4 in Shiraz. The statistical population…

  11. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    Science.gov (United States)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance (FOQA)-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, the data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.

  12. Predicting proteasomal cleavage sites: a comparison of available methods

    DEFF Research Database (Denmark)

    Saxova, P.; Buus, S.; Brunak, Søren

    2003-01-01

    The C-terminal, in particular, of CTL epitopes is cleaved precisely by the proteasome, whereas the N-terminal is produced with an extension, and later trimmed by peptidases in the cytoplasm and in the endoplasmic reticulum. Recently, three publicly available methods have been developed for prediction of the specificity...

  13. A comparison of three methods of Nitrogen analysis for feedstuffs

    African Journals Online (AJOL)

    Unknown

    Introduction. The Kjeldahl method for determining crude protein is very widely used for the analysis of feed samples. However, it has drawbacks, and new techniques that avoid some of these disadvantages are therefore desirable. One such modification was developed by Hach et al. (1987). This promising ...

  14. The energetic cost of walking: a comparison of predictive methods.

    Directory of Open Access Journals (Sweden)

    Patricia Ann Kramer

    Full Text Available BACKGROUND: The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. METHODOLOGY/PRINCIPAL FINDINGS: We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches was assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. CONCLUSION: Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we ...

  15. The energetic cost of walking: a comparison of predictive methods.

    Science.gov (United States)

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches was assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended ...

  16. A comparison of three methods to assess body composition.

    Science.gov (United States)

    Tewari, Nilanjana; Awad, Sherif; Macdonald, Ian A; Lobo, Dileep N

    2018-03-01

    The aim of this study was to compare the accuracy of measurements of body composition made using dual x-ray absorptiometry (DXA), analysis of computed tomography (CT) scans at the L3 vertebral level, and bioelectrical impedance analysis (BIA). DXA, CT, and BIA were performed in 47 patients recruited from two clinical trials investigating metabolic changes associated with major abdominal surgery or neoadjuvant chemotherapy for esophagogastric cancer. DXA was performed the week before surgery and before and after commencement of neoadjuvant chemotherapy. BIA was performed at the same time points and used with standard equations to calculate fat-free mass (FFM). Analysis of CT scans performed within 3 mo of the study was used to estimate FFM and fat mass (FM). There was good correlation between FM on DXA and CT (r² = 0.6632; P < …), between FFM on DXA and CT (r² = 0.7634; P < …), and between FFM on DXA and BIA (r² = 0.6275; P < …); the correlation between FFM on CT and BIA also was significant (r² = 0.2742; P < …). For FFM on DXA and CT, average bias was -0.1477, with limits of agreement (LOA) of -8.621 to 8.325. For FFM on DXA and BIA, average bias was -3.792, with LOA of -15.52 to 7.936. For FFM on CT and BIA, average bias was -2.661, with LOA of -22.71 to 17.39. Although a systematic error underestimating FFM was demonstrated with BIA, it may be a useful modality to quantify body composition in the clinical situation. Copyright © 2017 Elsevier Inc. All rights reserved.
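    The bias and LOA figures in this record follow the usual Bland–Altman recipe: bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 SD of those differences. A minimal sketch with hypothetical FFM values (not the study's data):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement (LOA) between paired methods.

    bias = mean(a - b); LOA = bias +/- 1.96 * SD of the differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired fat-free mass estimates (kg), not the study's data
dxa = [52.1, 48.3, 60.2, 45.0, 55.5]
bia = [50.0, 47.1, 57.9, 44.2, 52.8]
bias, (loa_low, loa_high) = bland_altman(dxa, bia)
```

    A positive bias here means the first method reads systematically higher, mirroring the systematic FFM underestimation the study reports for BIA.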

  17. Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach: The Methodology.

    Science.gov (United States)

    Jaciw, Andrew P

    2016-06-01

    Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. The subjects were students in Grades K-3 in 79 schools in Tennessee and students in Grades 4-8 in 82 schools in Alabama; the outcome measures were Grades K-3 Stanford Achievement Test (SAT) reading and math scores and Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.

  18. A method comparison study between two hemoglobinometer models (Hemocue Hb 301 and Hb 201+) to measure hemoglobin concentrations and estimate anemia prevalence among women in Preah Vihear, Cambodia.

    Science.gov (United States)

    Rappaport, A I; Karakochuk, C D; Whitfield, K C; Kheang, K M; Green, T J

    2017-02-01

    Hemoglobin (Hb) concentration is often measured in global health and nutrition surveys to determine anemia prevalence using a portable hemoglobinometer such as the Hemocue® Hb 201+. More recently, a newer model was released (Hemocue Hb 301) utilizing slightly different methods to measure Hb as compared to the older model. The objective was to measure bias and concordance between Hb concentrations using the Hemocue Hb 301 and Hb 201+ models in a rural field setting. Hemoglobin (Hb) concentration was measured using one finger prick of blood (approximately 10 μL) from 175 Cambodian women (18-49 years) using three Hemocue Hb 201+ and three Hb 301 machines. Bias and concordance were measured and plotted. Overall, mean ± SD Hb concentration was 116 ± 13 g/L using the Hb 201+ and 118 ± 12 g/L using the Hb 301; and anemia prevalence (Hb < 120 g/L) was 58% (n = 102) and 58% (n = 101), respectively. Overall bias ± SD was 2.0 ± 10.5 g/L and concordance (95% CI) was 0.63 (0.54, 0.72). Despite the 2 g/L bias detected between models, anemia prevalence was very similar in both models. The two models measured anemia prevalence comparably in this population of women in rural Cambodia. © 2016 John Wiley & Sons Ltd.
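    The bias and concordance statistics in this record can be reproduced from paired readings with a mean difference and a concordance correlation coefficient. The sketch below uses hypothetical Hb values, not the study's data, and assumes the concordance measure was Lin's CCC, which penalizes both random scatter and a systematic shift between the two models:

```python
from statistics import mean

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for paired readings."""
    mx, my = mean(x), mean(y)
    sx2 = mean((v - mx) ** 2 for v in x)
    sy2 = mean((v - my) ** 2 for v in y)
    sxy = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical paired Hb readings (g/L), not the study's data
hb_201 = [110, 118, 125, 102, 131, 116]  # Hemocue Hb 201+
hb_301 = [112, 121, 126, 105, 132, 119]  # Hemocue Hb 301
bias = mean(b - a for a, b in zip(hb_201, hb_301))  # Hb 301 minus Hb 201+
ccc = lins_ccc(hb_201, hb_301)
```

    A CCC of 1.0 requires the points to lie exactly on the identity line, so even a small constant offset (like the 2 g/L bias reported above) pulls the coefficient below the ordinary correlation.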

  19. Serological diagnosis of syphilis: a comparison of different diagnostic methods.

    Science.gov (United States)

    Simčič, Saša; Potočnik, Marko

    2015-01-01

    Serological tests' limitations in syphilis diagnosis as well as numerous test interpretations mean that patients with discordant serology results can present diagnostic and treatment challenges for clinicians. We analyzed three common diagnostic algorithms for detecting suspected syphilis in high-prevalence populations in Slovenia. The prospective study included a total of 437 clinical serum samples from adults throughout Slovenia tested with Rapid Plasma Reagin (RPR), Treponema pallidum hemagglutination (TPHA), and an automated chemiluminescence immunoassay (CIA) according to the manufacturer's instructions. In addition to percent agreement, kappa coefficients were calculated as a secondary measure of agreement between the three algorithms. Overall, of 183 subjects that had seroreactive results, 180 were seroreactive in both the reverse sequence and the European Centre for Disease Prevention and Control (ECDC) algorithm. The traditional algorithm had a missed serodiagnosis rate of 30.0%, the overall percent agreement between the traditional and the reverse algorithm (or the ECDC algorithm) was 87.6%, and the kappa value was 0.733. However, the reverse and ECDC algorithm failed to detect three subjects with positive serodiagnosis determined by additional confirmative treponemal assays. Our results supported the ECDC algorithm in the serodiagnosis of syphilis in high-prevalence populations and the use of nontreponemal serology to monitor the response to treatment.
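    Percent agreement and the kappa coefficient used in this record are straightforward to compute from paired algorithm calls. A minimal sketch (hypothetical seroreactive/nonreactive calls, not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters corrected for chance."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical seroreactive (1) / nonreactive (0) calls from two algorithms
trad    = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # traditional algorithm
reverse = [1, 1, 0, 1, 1, 0, 0, 1, 1, 0]  # reverse-sequence algorithm
agreement = sum(x == y for x, y in zip(trad, reverse)) / len(trad)
kappa = cohens_kappa(trad, reverse)
```

    Kappa is always lower than raw percent agreement (as in the abstract's 87.6% vs. 0.733) because it discounts the agreement expected by chance alone.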

  20. Complementary biomarker-based methods for characterising Arctic sea ice conditions: A case study comparison between multivariate analysis and the PIP25 index

    Science.gov (United States)

    Köseoğlu, Denizcan; Belt, Simon T.; Smik, Lukas; Yao, Haoyi; Panieri, Giuliana; Knies, Jochen

    2018-02-01

    The discovery of IP25 as a qualitative biomarker proxy for Arctic sea ice and subsequent introduction of the so-called PIP25 index for semi-quantitative descriptions of sea ice conditions has significantly advanced our understanding of long-term paleo Arctic sea ice conditions over the past decade. We investigated the potential for classification tree (CT) models to provide a further approach to paleo Arctic sea ice reconstruction through analysis of a suite of highly branched isoprenoid (HBI) biomarkers in ca. 200 surface sediments from the Barents Sea. Four CT models constructed using different HBI assemblages revealed IP25 and an HBI triene as the most appropriate classifiers of sea ice conditions, achieving a >90% cross-validated classification rate. Additionally, lower model performance for locations in the Marginal Ice Zone (MIZ) highlighted difficulties in characterisation of this climatically-sensitive region. CT model classification and semi-quantitative PIP25-derived estimates of spring sea ice concentration (SpSIC) for four downcore records from the region were consistent, although agreement between proxy and satellite/observational records was weaker for a core from the west Svalbard margin, likely due to the highly variable sea ice conditions. The automatic selection of appropriate biomarkers for description of sea ice conditions, quantitative model assessment, and insensitivity to the c-factor used in the calculation of the PIP25 index are key attributes of the CT approach, and we provide an initial comparative assessment between these potentially complementary methods. The CT model should be capable of generating longer-term temporal shifts in sea ice conditions for the climatically sensitive Barents Sea.
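    For readers unfamiliar with the index, PIP25 is conventionally computed by balancing IP25 against a phytoplankton biomarker using a concentration-balance factor c (the c-factor the abstract refers to). A minimal sketch of one common formulation, with hypothetical concentrations; consult the original PIP25 literature for the exact variant used here:

```python
from statistics import mean

def pip25(ip25, phyto):
    """PIP25 sea-ice index per sample: IP25 / (IP25 + c * P).

    The balance factor c = mean(IP25) / mean(P) is computed over the
    whole dataset; this is the c-factor mentioned in the abstract.
    """
    c = mean(ip25) / mean(phyto)
    return [i / (i + c * p) for i, p in zip(ip25, phyto)]

# Hypothetical biomarker concentrations (arbitrary units)
ip25_vals  = [0.0, 0.5, 1.2, 2.0]
phyto_vals = [3.0, 2.0, 1.0, 0.4]
indices = pip25(ip25_vals, phyto_vals)  # ~0 = ice-free, toward 1 = heavy ice
```

    Because c depends on the dataset over which the means are taken, PIP25 values are not directly comparable across studies; this is one motivation for the c-insensitive classification tree approach the record describes.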

  1. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    Science.gov (United States)

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.

  2. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    Science.gov (United States)

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610

  3. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    International Nuclear Information System (INIS)

    Deeley, M A; Cmelak, A J; Malcolm, A W; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Ding, G X; Chen, A; Datteri, R; Noble, J H; Dawant, B M; Donnelly, E F; Yei, F; Koyama, T

    2011-01-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms.
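    The Dice similarity coefficient (DSC) reported throughout this record has a direct set-based definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch with hypothetical voxel masks (not the study's data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A & B| / (|A| + |B|).

    Masks are sets of voxel coordinates; 1.0 means perfect overlap.
    """
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Hypothetical voxel sets for an expert and an automatic delineation
expert = {(x, y) for x in range(10) for y in range(10)}     # 100 voxels
auto   = {(x, y) for x in range(2, 12) for y in range(10)}  # shifted by 2
score = dice(expert, auto)
```

    A fixed boundary error costs thin, tubular structures (chiasm, nerves) far more DSC than bulky ones (brainstem, eyes), which is why the 0.4-0.5 values above need not indicate a worse delineation in absolute distance terms.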

  4. Treatment of radiation enteritis: a comparison study

    International Nuclear Information System (INIS)

    Loiudice, T.A.; Lang, J.A.

    1983-01-01

    Twenty-four patients with severe radiation injury to the small bowel seen over a 4-year period were randomized to four treatment groups: 1) methylprednisolone 80 mg intravenously plus Vivonex-HN, 2 L/day po; 2) methylprednisolone 80 mg intravenously plus total parenteral nutrition, 2.5 L/day; 3) total parenteral nutrition, 2.5 L/day; and 4) Vivonex-HN, 2 L/day po. Patients received nothing by mouth except water in groups 2 and 3, and only Vivonex-HN in groups 1 and 4. Patients were treated for 8-wk periods. Improvement was gauged by overall nutritional assessment measurements, nitrogen balance data, and radiological and clinical parameters. No significant difference among groups 1, 2, 3, and 4 could be found for age, sex, mean radiation dosage, time of onset after radiation therapy, or initial nutritional assessment data. Statistically significant differences could be found between groups 2 and 3 and groups 1 and 4 regarding nutritional assessment data, nitrogen balance, and radiographic and clinical parameters after therapy, with marked improvement noted in groups 2 and 3. We conclude that a treatment regimen consisting of total parenteral nutrition and bowel rest is beneficial in the treatment of radiation enteritis. Methylprednisolone appears to enhance this effect and, indeed, may be responsible for a longer lasting response

  5. A Comparison of Methods and Results in Recruiting White and Black Women into Reproductive Studies: The MMC-PSU Cooperative Center on Reproduction Experience

    Science.gov (United States)

    Sweet, Stephanie; Legro, Richard S; Coney, PonJola

    2008-01-01

    Establishing a holistic approach to the enrollment of subjects into clinical trials, one that includes strategies for recruiting non-traditional and minority populations, has been an elusive task. A design that is understood and embraced by investigators and the target communities would streamline the commitment of time, energy, and resources needed to successfully encourage individual and community participation in research studies. The Center for Research in Reproduction at Meharry set out to recruit a large number of African American women volunteers of reproductive age into clinical trials. The experience of recruiting volunteers from the African American community for clinical trials in the Meharry Medical College/Pennsylvania State University (MMC/PSU) Cooperative Center for Research in Reproduction at Meharry is presented. PMID:18082470

  6. A comparison of two popular statistical methods for estimating the ...

    Indian Academy of Sciences (India)

    Unknown

    The numerical size of the succeeding generations is controlled after the founding population is created. In this study we have considered two demographic scenarios: (i) constancy of population size over generations and (ii) exponential growth in size, allowing for variability in the growth parameter over generations; that is ...

  7. Comparison of conventional straight and swan-neck straight catheters inserted by percutaneous method for continuous ambulatory peritoneal dialysis: a single-center study.

    Science.gov (United States)

    Singh, Shivendra; Prakash, Jai; Singh, R G; Dole, P K; Pant, Pragya

    2015-10-01

    To evaluate the incidence of mechanical and infectious complications of a conventional straight catheter (SC) versus a swan-neck straight catheter (SNSC) implanted by the percutaneous method, we retrospectively analyzed 45 catheter insertions performed percutaneously from January 1, 2011, to May 31, 2014. An SC was inserted in 24 patients, and an SNSC was inserted in 21 patients. Baseline characteristics of the two groups were similar with respect to age, sex, and diabetic nephropathy as the cause of end-stage renal disease. The incidence of mechanical and infectious complications in the SNSC group was lower than in the SC group, and the difference was statistically significant (1 in 11.6 patient-months vs. 1 in 14.4 patient-months; p = 0.02). Catheter migration was the most common mechanical complication (20%), and peritonitis was the most common infectious complication in the conventional SC group (27 episodes in 420 patient-months vs. 11 episodes in 333 patient-months; p = 0.03). The incidence of exit-site and tunnel infection rates revealed no difference between the groups. SNSC insertion by the percutaneous method is associated with fewer mechanical and infectious complications.
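    The complication rates above are expressed as one episode per N patient-months, i.e. total patient-months of follow-up divided by the episode count. A quick check using the peritonitis figures quoted in the abstract:

```python
def months_per_episode(episodes, patient_months):
    """Complication incidence as 'one episode per N patient-months'."""
    return patient_months / episodes

# Peritonitis figures quoted in the abstract
sc   = months_per_episode(27, 420)  # conventional straight catheter group
snsc = months_per_episode(11, 333)  # swan-neck straight catheter group
```

    A larger value means fewer episodes per unit of follow-up; here the SNSC group's roughly 30.3 patient-months per peritonitis episode compares favorably with the SC group's roughly 15.6.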

  8. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829
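
    One precision metric used in the zero-change (test-retest) designs this review describes is the repeatability coefficient, RC = 1.96*sqrt(2)*wSD ≈ 2.77*wSD. A minimal sketch with hypothetical repeat measurements (a standard agreement statistic, not code from the paper):

```python
import math

def repeatability_coefficient(test, retest):
    """Repeatability coefficient from paired test-retest measurements.

    wSD^2 is half the mean squared difference between replicates;
    two repeats of the same true quantity are expected to differ by
    less than RC about 95% of the time.
    """
    wsd2 = sum((a - b) ** 2 for a, b in zip(test, retest)) / (2 * len(test))
    return 2.77 * math.sqrt(wsd2)

# Hypothetical tumor volumes (mL) measured twice by one algorithm
scan1 = [10.2, 15.1, 8.7, 21.4, 12.9]
scan2 = [10.6, 14.8, 9.1, 20.9, 13.3]
rc = repeatability_coefficient(scan1, scan2)
print(f"RC = {rc:.2f} mL")
```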

  9. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  10. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    Science.gov (United States)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]

  11. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-01-01

    Full Text Available OBJECTIVES: A common method for conducting a quantitative systematic review (QSR for observational studies related to nutritional epidemiology is the “highest versus lowest intake” method (HLM, in which only the information concerning the effect size (ES of the highest category of a food item is collected on the basis of its lowest category. However, in the interval collapsing method (ICM, a method suggested to enable a maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES between the HLM and ICM. METHODS: A QSR for evaluating the citrus fruit intake and risk of pancreatic cancer and calculating the SES by using the HLM was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. RESULTS: No significant differences were observed in the directionality of SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more-consistent ES and SES, and narrower confidence intervals than the HLM. CONCLUSIONS: The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSR on nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
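
    The fixed-effect pooling used in this record is standard inverse-variance weighting. A minimal sketch with made-up study effects (not the paper's citrus-fruit data):

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance fixed-effect pooling of log effect sizes.

    effects: per-study log relative risks; ses: their standard errors.
    Returns the pooled log estimate and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Hypothetical log relative risks from three studies
log_rr = [math.log(0.80), math.log(0.70), math.log(0.95)]
se = [0.15, 0.20, 0.10]
pooled, (lo, hi) = fixed_effect_meta(log_rr, se)
print(f"pooled RR = {math.exp(pooled):.2f}, "
      f"95% CI ({math.exp(lo):.2f}, {math.exp(hi):.2f})")
```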

  12. Statistical inference methods for two crossing survival curves: a comparison of methods.

    Science.gov (United States)

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
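
    For reference, the log-rank statistic that the cited survey found still dominates practice can be computed directly. A minimal pure-Python sketch on toy survival data (illustrative only; compare the result against the chi-square 1-df critical value 3.84):

```python
def logrank(times1, events1, times2, events2):
    """Two-sample log-rank chi-square statistic.

    times*: follow-up times; events*: 1 = event observed, 0 = censored.
    Tied event times are pooled within each distinct time.
    """
    data = sorted([(t, e, 0) for t, e in zip(times1, events1)] +
                  [(t, e, 1) for t, e in zip(times2, events2)])
    at_risk1, at_risk = len(times1), len(data)
    o_minus_e = 0.0  # observed minus expected events in group 1
    var = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = d1 = drop = drop1 = 0
        while i < len(data) and data[i][0] == t:  # pool tied times
            _, e, g = data[i]
            d += e
            d1 += e * (g == 0)
            drop += 1
            drop1 += (g == 0)
            i += 1
        if d > 0 and at_risk > 1:
            p1 = at_risk1 / at_risk
            o_minus_e += d1 - d * p1
            var += d * p1 * (1 - p1) * (at_risk - d) / (at_risk - 1)
        at_risk -= drop
        at_risk1 -= drop1
    return o_minus_e ** 2 / var

# Toy data: group 1 fails strictly earlier than group 2
chi2 = logrank([1, 2, 3], [1, 1, 1], [4, 5, 6], [1, 1, 1])
print(f"log-rank chi-square = {chi2:.2f}")
```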

  13. A Comparison of Reading Response Methods to Increase Student Learning

    Science.gov (United States)

    Davis, Cheryl J.; Zane, Thomas

    2016-01-01

    It is common in college courses to test students on the required readings for that course. With a rise in online education it is often the case that students are required to provide evidence of reading the material. However, there is little empirical research stating the best written means to assess that students read the materials. This study…

  14. A comparison of DNA barcode clustering methods applied to geography-based vs clade-based sampling of amphibians

    Indian Academy of Sciences (India)

    2012-10-15

  15. Improving Assessment of Lifetime Solar Ultraviolet Radiation Exposure in Epidemiologic Studies: Comparison of Ultraviolet Exposure Assessment Methods in a Nationwide United States Occupational Cohort.

    Science.gov (United States)

    Little, Mark P; Tatalovich, Zaria; Linet, Martha S; Fang, Michelle; Kendall, Gerald M; Kimlin, Michael G

    2018-06-13

    Solar ultraviolet radiation is the primary risk factor for skin cancers and sun-related eye disorders. Estimates of individual ambient ultraviolet irradiance derived from ground-based solar measurements and from satellite measurements have rarely been compared. Using self-reported residential history from 67,189 persons in a nationwide occupational US radiologic technologists cohort, we estimated ambient solar irradiance using data from ground-based meters and noontime satellite measurements. The mean distance moved from the city of longest residence in childhood increased from 137.6 km at ages 13-19 to 870.3 km at ages ≥65, with corresponding increases in the absolute latitude difference moved. At ages 20/40/60/80, the Pearson/Spearman correlation coefficients of ground-based and satellite-derived solar potential ultraviolet exposure, using irradiance and cumulative radiant-exposure metrics, were high (0.87-0.92). There was also moderate correlation (Pearson/Spearman correlation coefficients 0.51-0.60) between irradiance at birth and at the last-known address, for ground-based and satellite data. Satellite-based lifetime estimates of ultraviolet radiation were generally 14-15% lower than ground-based estimates, albeit with substantial uncertainties, possibly because ground-based estimates incorporate fluctuations in cloud and ozone, which are incompletely incorporated in the single noontime satellite-overpass ultraviolet value. If confirmed elsewhere, the findings suggest that ground-based estimates may improve exposure-assessment accuracy and potentially provide new insights into ultraviolet-radiation-disease relationships in epidemiologic studies. This article is protected by copyright. All rights reserved.

  16. A New Method to Study Analytic Inequalities

    Directory of Open Access Journals (Sweden)

    Xiao-Ming Zhang

    2010-01-01

    Full Text Available We present a new method to study analytic inequalities involving n variables. As applications, we prove some well-known inequalities and improve Carleman's inequality.

  17. ABCD Matrix Method a Case Study

    CERN Document Server

    Seidov, Zakir F; Yahalom, Asher

    2004-01-01

    In the Israeli Electrostatic Accelerator FEL, the distance between the accelerator's end and the wiggler's entrance is about 2.1 m, and a 1.4 MeV electron beam is transported through this space using four similar quadrupoles (FODO channel). The transfer matrix method (ABCD matrix method) was used for simulating the beam transport; a set of programs was written in several programming languages (MATHEMATICA, MATLAB, MathCAD, MAPLE) and reasonable agreement is demonstrated between experimental results and simulations. Comparison of the ABCD matrix method with direct "numerical experiments" using the EGUN, ELOP, and GPT programs, with and without taking into account space-charge effects, showed the agreement to be good as well. Also the inverse problem of finding the emittance of the electron beam at the S1 screen position (before the FODO channel), by using the spot image at the S2 screen position (after the FODO channel) as a function of quad currents, is considered. Spot and beam at both screens are described as tilted ellipses.

  18. A method optimization study for atomic absorption ...

    African Journals Online (AJOL)

    A sensitive, reliable and relatively fast method has been developed for the determination of total zinc in insulin by atomic absorption spectrophotometry. This study was designed to optimize the procedures of the existing methods. Spectrograms of both standard and sample solutions of zinc were recorded by measuring ...

  19. A comparison of in vivo and in vitro methods for determining availability of iron from meals

    International Nuclear Information System (INIS)

    Schricker, B.R.; Miller, D.D.; Rasmussen, R.R.; Van Campen, D.

    1981-01-01

    A comparison is made between in vitro and human and rat in vivo methods for estimating food iron availability. Complex meals formulated to replicate meals used by Cook and Monsen (Am J Clin Nutr 1976;29:859) in human iron availability trials were used in the comparison. The meals were prepared by substituting pork, fish, cheese, egg, liver, or chicken for beef in two basic test meals and were evaluated for iron availability using in vitro and rat in vivo methods. When the criterion for comparison was the ability to show statistically significant differences between iron availability in the various meals, there was substantial agreement between the in vitro and human in vivo methods. There was less agreement between the human in vivo and the rat in vivo methods, and between the in vitro and the rat in vivo methods. Correlation analysis indicated significant agreement between the in vitro and human in vivo methods. Correlations between the rat in vivo and human in vivo methods were also significant, but correlations between the in vitro and rat in vivo methods were less significant and, in some cases, not significant. The comparison supports the contention that the in vitro method allows a rapid, inexpensive, and accurate estimation of nonheme iron availability in complex meals.
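
    The correlation analysis described in this record is ordinary Pearson and Spearman correlation between methods. A self-contained sketch with hypothetical availability values (the record does not give the raw data; the rank routine below does not handle ties):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation (no tie correction)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical iron availability estimates for six test meals
in_vitro = [2.1, 3.5, 1.2, 4.0, 2.8, 3.1]
human_in_vivo = [2.4, 3.9, 1.0, 4.4, 2.5, 3.3]
print(f"Pearson r = {pearson(in_vitro, human_in_vivo):.3f}")
print(f"Spearman rho = {spearman(in_vitro, human_in_vivo):.3f}")
```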

  20. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    Science.gov (United States)

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using three methods, simultaneously and independently by two raters. The analysis of interrater reliability and viability was performed using the intraclass correlation coefficient and Bland Altman Plot Analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis were 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland Altman Plot Analysis showed agreement between the comparisons of the methods and the presence of systematic bias was not observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently from the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flaps' viability.
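
    The Bland-Altman analysis named in this record is straightforward to sketch. A minimal version with hypothetical rater scores (not the study's data): the mean difference estimates systematic bias, and the 1.96-SD band gives the 95% limits of agreement:

```python
import statistics

def bland_altman(rater1, rater2):
    """Mean bias and 95% limits of agreement between two raters.

    A systematic bias shows up as a mean difference far from zero.
    """
    diffs = [a - b for a, b in zip(rater1, rater2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical % viable tissue scored by two raters on eight flaps
r1 = [62.0, 55.5, 71.2, 48.9, 66.0, 58.3, 70.1, 52.4]
r2 = [61.4, 56.0, 70.5, 49.8, 65.2, 59.0, 69.6, 53.0]
bias, (lower, upper) = bland_altman(r1, r2)
print(f"bias = {bias:.2f}, limits of agreement ({lower:.2f}, {upper:.2f})")
```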

  1. Comparison of the effectiveness of sterilizing endodontic files by 4 different methods: An in vitro study

    Directory of Open Access Journals (Sweden)

    Venkatasubramanian R

    2010-03-01

    Full Text Available Sterilization is the best method to counter the threats of microorganisms. The purpose of sterilization in the field of health care is to prevent the spread of infectious diseases. In dentistry, it primarily relates to processing reusable instruments to prevent cross-infection. The aim of this study was to investigate the efficacy of 4 methods of sterilizing endodontic instruments: autoclaving, carbon dioxide laser sterilization, chemical sterilization (with glutaraldehyde), and glass-bead sterilization. The endodontic files were sterilized by the 4 different methods after being contaminated with Bacillus stearothermophilus and then checked for sterility by incubation in test tubes containing thioglycollate medium. The study showed that the files sterilized by autoclave and laser were completely sterile. Those sterilized by glass bead were 90% sterile and those with glutaraldehyde were 80% sterile. The study concluded that the autoclave or laser could be used as a method of sterilization in clinical practice and in advanced clinics; the laser can also be used as a chair-side method of sterilization.

  2. Numerical simulation and comparison of two ventilation methods for a restaurant - displacement vs mixed flow ventilation

    Science.gov (United States)

    Chitaru, George; Berville, Charles; Dogeanu, Angel

    2018-02-01

    This paper presents a comparison between a displacement ventilation method and a mixed flow ventilation method using a computational fluid dynamics (CFD) approach. The paper analyses different aspects of the two systems, like the draft effect in certain areas and the air temperature and velocity distribution in the occupied zone. The results highlighted that the displacement ventilation system presents an advantage for the current scenario, due to the increased buoyancy driven flows caused by the interior heat sources. For the displacement ventilation case the draft effect was less prone to appear in the occupied zone, but the high heat emissions from the interior sources have increased the temperature gradient in the occupied zone. Both systems have been studied in similar conditions, concentrating only on the flow patterns for each case.

  3. Assessment of the eye irritation potential of chemicals: A comparison study between two test methods based on human 3D hemi-cornea models.

    Science.gov (United States)

    Tandon, R; Bartok, M; Zorn-Kruppa, M; Brandner, J M; Gabel, D; Engelke, M

    2015-12-25

    We have recently developed two hemi-cornea models (Bartok et al., Toxicol in Vitro 29, 72, 2015; Zorn-Kruppa et al. PLoS One 9, e114181, 2014), which allow the correct prediction of eye irritation potential of chemicals according to the United Nations globally harmonized system of classification and labeling of chemicals (UN GHS). Both models comprise a multilayered epithelium and a stroma with embedded keratocytes in a collagenous matrix. These two models were compared, using a set of fourteen test chemicals. Their effects after 10 and 60 minutes (min) exposure were assessed from the quantification of cell viability using the MTT reduction assay. The first approach separately quantifies the damage inflicted to the epithelium and the stroma. The second approach quantifies the depth of injury by recording cell death as a function of depth. The classification obtained by the two models was compared to the Draize rabbit eye test and an ex vivo model using rabbit cornea (Jester et al. Toxicol in Vitro. 24, 597-604, 2010). With a 60 min exposure, both of our models are able to clearly differentiate UN GHS Category 1 and UN GHS Category 2 test chemicals. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. From a tree to a stand in Finnish boreal forests - biomass estimation and comparison of methods

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chunjiang

    2009-07-01

    There is an increasing need to compare the results obtained with different methods of estimation of tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) stand (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)-Scots pine stand (Mature-Stand) located in southern Finland (61°50' N, 24°22' E). In particular, a comparison of the results of different estimation methods was conducted to assess the reliability and suitability of their applications. For the trees in Mature-Stand, the annual stem biomass increment fluctuated following a sigmoid equation, and the fitted curves reached a maximum level (from about 1 kg yr-1 for understorey spruce to 7 kg yr-1 for dominant pine) when the trees were 100 years old. Tree biomass was estimated to be about 70 Mg ha-1 in Young-Stand and about 220 Mg ha-1 in Mature-Stand. In the region (58.00-62.13° N, 14-34° E, ≤300 m a.s.l.) surrounding the study stands, the tree biomass accumulation in Norway spruce and Scots pine stands followed a sigmoid equation with stand age, with a maximum of 230 Mg ha-1 at the age of 140 years. In Mature-Stand, lichen biomass on the trees was 1.63 Mg ha-1, with more than half of the biomass occurring on dead branches, and the standing crop of litter lichen on the ground was about 0.09 Mg ha-1. There were substantial differences among the results estimated by the different methods in the stands. These results imply that a possible estimation error should be taken into account when calculating tree biomass in a stand with an indirect approach. (orig.)

  5. A comparison of visual and quantitative methods to identify interstitial lung abnormalities

    OpenAIRE

    Kliment, Corrine R.; Araki, Tetsuro; Doyle, Tracy J.; Gao, Wei; Dupuis, Josée; Latourelle, Jeanne C.; Zazueta, Oscar E.; Fernandez, Isis E.; Nishino, Mizuki; Okajima, Yuka; Ross, James C.; Estépar, Raúl San José; Diaz, Alejandro A.; Lederer, David J.; Schwartz, David A.

    2015-01-01

    Background: Evidence suggests that individuals with interstitial lung abnormalities (ILA) on a chest computed tomogram (CT) may have an increased risk to develop a clinically significant interstitial lung disease (ILD). Although methods used to identify individuals with ILA on chest CT have included both automated quantitative and qualitative visual inspection methods, there has been no direct comparison between these two methods. To investigate this relationship, we created lung density met...

  6. A Comparison of Central Composite Design and Taguchi Method for Optimizing Fenton Process

    Directory of Open Access Journals (Sweden)

    Anam Asghar

    2014-01-01

    Full Text Available In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. Initial dye concentration, the Dye : Fe+2 ratio, the H2O2 : Fe+2 ratio, and pH were identified as control variables, while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and face-centered CCD were used for the experimental design. Maximum 99% decolorization and 80% COD removal efficiency were obtained under optimum conditions. R-squared values of 0.97 and 0.95 for CCD and the Taguchi method, respectively, indicate that both models are statistically significant and are in good agreement with each other. Furthermore, Prob > F less than 0.0500 and the ANOVA results indicate the good fit of the selected model to the experimental results. Nevertheless, the possibility of ranking the input variables in terms of percent contribution to the response value makes the Taguchi method a suitable approach for scrutinizing the operating parameters. For the present case, pH, with a percent contribution of 87.62% and 66.2%, was ranked as the most contributing and significant factor. This finding of the Taguchi method was also verified by the 3D contour plots of CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications.
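
    Taguchi's percent contribution is a ratio of sums of squares. An illustrative sketch with made-up responses for an L9-style array (the factor levels and response values below are hypothetical, not the paper's measurements):

```python
def percent_contribution(results, levels):
    """Percent contribution of one factor, main effects only.

    results: response value of each run; levels: that factor's level
    (0, 1 or 2) in each run. Returns 100 * SS_factor / SS_total.
    """
    grand = sum(results) / len(results)
    ss_total = sum((y - grand) ** 2 for y in results)
    ss_factor = 0.0
    for lv in set(levels):
        ys = [y for y, l in zip(results, levels) if l == lv]
        ss_factor += len(ys) * (sum(ys) / len(ys) - grand) ** 2
    return 100.0 * ss_factor / ss_total

# Hypothetical decolorization efficiencies for 9 runs, and the pH
# level (0/1/2) used in each run of an L9 orthogonal array
y = [62.0, 68.0, 95.0, 60.0, 70.0, 97.0, 64.0, 66.0, 99.0]
ph_level = [0, 1, 2, 0, 1, 2, 0, 1, 2]
pc = percent_contribution(y, ph_level)
print(f"pH contribution = {pc:.1f}%")
```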

  7. A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies

    Directory of Open Access Journals (Sweden)

    Víctor Villena-Martínez

    2017-01-01

    Full Text Available RGB-D (Red Green Blue and Depth) sensors are devices that can provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions due to their commercial growth from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). In the research community, these devices have had good uptake due to their acceptable level of accuracy for many applications and their low cost, but in some cases, they work at the limit of their sensitivity, near to the minimum feature size that can be perceived. For this reason, calibration processes are critical in order to increase their accuracy and enable them to meet the requirements of such kinds of applications. To the best of our knowledge, there is no comparative study of calibration algorithms evaluating their results on multiple RGB-D sensors. Specifically, in this paper, the three most used calibration methods have been applied to three different RGB-D sensors based on structured light and time-of-flight. The comparison of methods has been carried out by a set of experiments to evaluate the accuracy of depth measurements. Additionally, an object reconstruction application has been used as an example of an application for which the sensor works at the limit of its sensitivity. The obtained reconstruction results have been evaluated through visual inspection and quantitative measurements.

  8. Gastroesophageal reflux diagnosis by scintigraphic method - a comparison among different methods

    International Nuclear Information System (INIS)

    Cruz, Maria das Gracas de Almedia; Penas, Maria Exposito; Maliska, Carmelindo; Fonseca, Lea Miriam B. da; Lemme, Eponina M.; Campos, Deisy Guacyaba S.; Rebelo, Ana Maria de O.; Martinho, Maria Jose R.; Dias, Vera Maria

    1996-01-01

    Gastroesophageal reflux disease (GERD) is characterized by the return of gastric contents to the esophagus, which can give rise to reflux esophagitis. The aim was to evaluate the contribution of dynamic scintigraphy of reflux (CTR) to the diagnosis of GERD. We studied a group of patients with typical symptoms and compared the results with those of digestive endoscopy (EDA), histopathology and 24-hour pHmetry. We evaluated 24 healthy individuals and 97 patients; the former were submitted to CTR. In 20 controls and 34 patients, the reflux index (IR) was determined as the percentage of refluxed material relative to the basal activity. All patients underwent CTR and EDA, which classified them into group A, with esophagitis, 45 patients (46.4%), of whom 14 underwent biopsy of the distal third of the esophagus and 16 underwent prolonged pHmetry; and group B, without esophagitis, 52 patients (53.6%), of whom 26 underwent biopsy and 36 pHmetry. In group A, CTR was positive in 38 (84%), biopsy showed esophagitis in 11 (78.6%), and pHmetry was positive in 14 (87.5%). The positivity of the same exams in group B was 42 (81%), 11 (42.3%) and 19 (53%), respectively. The IR was 59% for controls and 106% for patients (p<0.0001, Mann-Whitney test). The correlation of CTR with the other methods showed a sensitivity of 84.1%, specificity of 95.8%, positive predictive value of 98.3% and negative predictive value of 67.7%. The authors concluded that the scintigraphic method can confirm the diagnosis of GERD in patients with typical symptoms and can be recommended as the initial investigation method in these conditions. (author)

  9. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

    A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  10. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sébastien; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  11. A method comparison of photovoice and content analysis: research examining challenges and supports of family caregivers.

    Science.gov (United States)

    Faucher, Mary Ann; Garner, Shelby L

    2015-11-01

    The purpose of this manuscript is to compare the methods and thematic representations of the challenges and supports of family caregivers identified with photovoice methodology, contrasted with content analysis, a more traditional qualitative approach. Results from a photovoice study utilizing a participatory action research framework were compared to an analysis of the audio transcripts from that study utilizing content analysis methodology. Major similarities between the results are identified, along with some notable differences. Content analysis provides a more in-depth and abstract elucidation of the nature of the challenges and supports of the family caregiver. The comparison provides evidence to support the trustworthiness of photovoice methodology, with limitations identified. The enhanced elaboration of themes and categories with content analysis may have some advantages relevant to the utilization of this knowledge by health care professionals. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. A novel model approach for esophageal burns in rats: A comparison of three methods.

    Science.gov (United States)

    Kalkan, Yildiray; Tumkaya, Levent; Akdogan, Remzi Adnan; Yucel, Ahmet Fikret; Tomak, Yakup; Sehitoglu, İbrahim; Pergel, Ahmet; Kurt, Aysel

    2015-07-01

    Corrosive esophageal injury causes serious clinical problems. We aimed to create a new experimental esophageal burn model using a single catheter without a surgical procedure. We conducted the study with two groups of 12 male rats that fasted for 12 h before the application. A modified Foley balloon catheter was inserted into the esophageal lumen. The control group was given 0.9% sodium chloride, while the experimental group was given 37.5% sodium hydroxide through the other port of the catheter. After 60 s, the esophagus was washed with distilled water. After 28 days, the rats were killed and examined using histopathological methods. In contrast to the histopathological changes seen in the study group, the control group showed no pathological changes. Basal cell degeneration, dermal edema, and a slight increase in the keratin layer and in the collagen density of the submucosa due to stenosis were all observed in the group subjected to esophageal corrosion. We believe a new burn model can thus be created without invasive laparoscopic surgery and general anesthesia. The burn in our experiment was formed in both the distal and proximal esophagus, as in other models; it can also be formed optionally in the entire esophagus. © The Author(s) 2013.

  13. Comparison of Data Fusion Methods Using Crowdsourced Data in Creating a Hybrid Forest Cover Map

    Directory of Open Access Journals (Sweden)

    Myroslava Lesiv

    2016-03-01

    Full Text Available Data fusion represents a powerful way of integrating individual sources of information to produce a better output than could be achieved by any of the individual sources on their own. This paper focuses on the data fusion of different land cover products derived from remote sensing. In the past, many different methods have been applied, without regard to their relative merit. In this study, we compared some of the most commonly-used methods to develop a hybrid forest cover map by combining available land cover/forest products and crowdsourced data on forest cover obtained through the Geo-Wiki project. The methods include: nearest neighbour, naive Bayes, logistic regression and geographically-weighted logistic regression (GWR), as well as classification and regression trees (CART). We ran the comparison experiments using two data types: presence/absence of forest in a grid cell; percentage of forest cover in a grid cell. In general, there was little difference between the methods. However, GWR was found to perform better than the other tested methods in areas with high disagreement between the inputs.
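As a toy illustration of the fusion idea (not the paper's implementation; the grid-cell data and product accuracies below are invented), a naive Bayes fusion of two binary forest/non-forest products can weight each product's vote by its accuracy against crowdsourced training cells:

```python
# Hypothetical toy data: two land-cover products and crowdsourced truth
# for a handful of grid cells (1 = forest, 0 = non-forest).
product_a = [1, 1, 0, 0, 1, 0, 1, 0]
product_b = [1, 0, 0, 1, 1, 0, 0, 1]
truth     = [1, 1, 0, 0, 1, 0, 1, 1]  # crowdsourced labels (training)

def accuracy(pred, ref):
    return sum(p == t for p, t in zip(pred, ref)) / len(ref)

acc_a = accuracy(product_a, truth)   # per-product accuracy on training cells
acc_b = accuracy(product_b, truth)

def naive_bayes_fuse(a, b, prior=0.5):
    """Fuse two binary votes assuming conditional independence given the class."""
    def lik(vote, acc, cls):
        # Probability of observing this vote if the true class is `cls`.
        return acc if vote == cls else 1.0 - acc
    p_forest = prior * lik(a, acc_a, 1) * lik(b, acc_b, 1)
    p_nonfor = (1 - prior) * lik(a, acc_a, 0) * lik(b, acc_b, 0)
    return p_forest / (p_forest + p_nonfor)

# Fused probability of forest for a new cell where the products disagree:
p = naive_bayes_fuse(1, 0)
```

Because product A scores higher on the training cells, its "forest" vote dominates and the fused probability lands well above 0.5; a logistic-regression or GWR fusion would instead learn the weights (and, for GWR, let them vary in space).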

  14. Comparison of three methods of calculating strain in the mouse ulna in exogenous loading studies.

    Science.gov (United States)

    Norman, Stephanie C; Wagner, David W; Beaupre, Gary S; Castillo, Alesha B

    2015-01-02

    Axial compression of mouse limbs is commonly used to induce bone formation in a controlled, non-invasive manner. Determination of peak strains caused by loading is central to interpreting results. Load-strain calibration is typically performed using uniaxial strain gauges attached to the diaphyseal, periosteal surface of a small number of sacrificed animals. Strain is measured as the limb is loaded to a range of physiological loads known to be anabolic to bone. The load-strain relationship determined by this subgroup is then extrapolated to a larger group of experimental mice. This method of strain calculation requires the challenging process of strain gauging very small bones, which is subject to variability in the placement of the strain gauge. We previously developed a method to estimate animal-specific periosteal strain during axial ulnar loading using an image-based computational approach that does not require strain gauges. The purpose of this study was to compare the relationship between load-induced bone formation rates and periosteal strain at ulnar midshaft using three different methods to estimate strain: (A) Nominal strain values based solely on load-strain calibration; (B) Strains calculated from load-strain calibration, but scaled for differences in mid-shaft cross-sectional geometry among animals; and (C) An alternative image-based computational method for calculating strains based on beam theory and animal-specific bone geometry. Our results show that the alternative method (C) provides comparable correlation between strain and bone formation rates in the mouse ulna relative to the strain gauge-dependent methods (A and B), while avoiding the need to use strain gauges. Published by Elsevier Ltd.
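A beam-theory strain estimate of the kind used in method (C) can be sketched as follows; every load, material property, and section property below is an illustrative placeholder, not a value from the study. The periosteal strain under an eccentric axial load combines a uniform axial term F/(EA) with a bending term Mc/(EI), where M = F·e:

```python
# Hypothetical sketch of a beam-theory strain estimate at the ulnar midshaft
# periosteal surface. All numbers are illustrative assumptions.
F = 2.0          # axial load (N)
E = 20e9         # elastic modulus of cortical bone (Pa), assumed
A = 0.30e-6      # cross-sectional area (m^2), assumed
I = 0.010e-12    # second moment of area about the bending axis (m^4), assumed
e = 0.25e-3      # eccentricity of the load relative to the centroid (m), assumed
c = 0.60e-3      # distance from neutral axis to periosteal surface (m), assumed

axial_strain   = F / (E * A)             # uniform compression component
bending_strain = (F * e) * c / (E * I)   # M*c/(E*I), with bending moment M = F*e
peak_strain = axial_strain + bending_strain
microstrain = peak_strain * 1e6          # conventional units for bone loading
```

With these placeholder values the bending term dominates, which is consistent with the general observation that curved long bones loaded axially experience strain mostly through bending; animal-specific A, I, e, and c are exactly what the image-based method supplies per mouse.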

  15. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

    Full Text Available Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models in addition to the model predictive control strategy appear as a promising solution to energy management in microgrids. The controller has the task of performing the management of electricity purchase and sale to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed with different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.
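As a minimal illustration of the kind of short-term predictors being compared (a sketch on synthetic demand data, not the paper's time-series or neural-network models), even two naive baselines, persistence and seasonal-naive, can be scored against each other before reaching for anything more elaborate:

```python
import math
import random

random.seed(0)
# Synthetic hourly demand: a daily sinusoid plus Gaussian noise (illustrative only).
demand = [10 + 4 * math.sin(2 * math.pi * h / 24) + random.gauss(0, 0.5)
          for h in range(96)]

def persistence(series, t):
    """Naive one-step forecast: the next value equals the current value."""
    return series[t]

def seasonal_naive(series, t, period=24):
    """One-step forecast: the value one period (one day) earlier."""
    return series[t - period]

def rmse(forecast):
    # Score forecasts of demand[t+1], skipping the first day of history.
    errs = [(forecast(demand, t) - demand[t + 1]) ** 2
            for t in range(24, len(demand) - 1)]
    return math.sqrt(sum(errs) / len(errs))

rmse_persist = rmse(persistence)
rmse_seasonal = rmse(lambda s, t: seasonal_naive(s, t + 1))
```

On data with a strong daily cycle the seasonal-naive baseline beats persistence; an MPC energy-management layer would consume such forecasts of demand and renewable production as inputs to its optimization horizon.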

  16. Comparison of radioimmunological and enzyme-immunological methods of determination in thyroid function studies

    International Nuclear Information System (INIS)

    Arnold, L.P.

    1984-01-01

    During the study described here, parallel investigations were carried out using radio-immuno-assays (RIA) and enzyme-immuno-assays (EIA) in order to assess the quality and diagnostic value of both these techniques as well as to weigh up their relative advantages and disadvantages in connection with thyroid function studies. The following comparisons were made: T4 RIA versus T4 EIA, T3 RIA versus T3 EIA, T3 uptake versus thyroxine binding capacity, total balance of free hormone bonds. Correlation coefficients of 0.982 for T4 and 0.989 for T3 proved that the test results obtained using either RIA or EIA were in fairly good agreement. Less satisfactory correlation was suggested by a coefficient of 0.807 between the tests for thyroxine binding capacity and T3 uptake, which was even seen to decrease with increasing binding capacity, so that the greatest deviations were observed in hyperthyroidism. The same holds true for the free T4 index and total balance, where the correlation coefficient is altered in a similar way by the inclusion of a thyroxine binding capacity EIA. Better agreement was achieved by optimal adjustment of the defined diagnostic ranges for hypothyroidism, euthyroidism and hyperthyroidism. The EIA results were biased in the presence of hyperbilirubinemia. Intra-assay and inter-assay variations were analysed and found to range from 8.1 to 16.9% and 13.6 to 28.1%, respectively. EIA offers advantages over RIA inasmuch as it is free from radioactivity and does not require any special safety measures, although it was judged considerably less favourable than RIA in terms of sensitivity. (TRV) [de
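Assay agreement of the kind reported here (e.g., r = 0.982 for T4) is a Pearson correlation over paired measurements; a self-contained sketch with hypothetical paired RIA/EIA values:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired T4 values measured on the same sera by RIA and EIA:
ria = [4.1, 6.8, 9.5, 12.2, 5.3, 7.7, 10.9]
eia = [4.4, 6.5, 9.9, 11.8, 5.1, 8.0, 10.5]
r = pearson_r(ria, eia)
```

Worth noting: a high r shows the two assays rank samples the same way, but not that they agree in absolute terms; a constant bias (as the EIA showed under hyperbilirubinemia) leaves r untouched, which is why the study also compared diagnostic classification ranges.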

  17. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    NARCIS (Netherlands)

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, J.G.P.W.; Camps-Valls, Gustau; Moreno, José

    2015-01-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC),

  18. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems grows, determining whether different IGRT methods may be used interchangeably becomes a challenge, and there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with a Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and inter-method difference between CBCT and KVX in the Z-axis (both p-value<0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-value<0.01). Using pre-specified criteria, based on intra-method agreement, CBCT was deemed
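Roy's LME model requires a full mixed-effects fit (in the paper, via SAS PROC MIXED). As a greatly simplified sketch of the quantities COM3PARE targets, inter-method bias and intra-method (within-subject) variation, here is a plain variance decomposition on hypothetical replicated shift data, not the COM3PARE algorithm itself:

```python
# Hypothetical Z-axis shifts (mm): 4 subjects, 3 replicates each, measured
# by two methods (a CBCT-like and a KVX-like device). Illustrative only.
method1 = [[1.2, 1.4, 1.1], [0.8, 0.9, 1.0], [1.6, 1.5, 1.7], [0.5, 0.6, 0.4]]
method2 = [[1.5, 1.7, 1.6], [1.1, 1.2, 1.3], [1.9, 1.8, 2.0], [0.9, 0.8, 0.7]]

def subject_means(data):
    return [sum(reps) / len(reps) for reps in data]

def within_subject_var(data):
    """Average replicate variance around each subject's mean (intra-method)."""
    vs = []
    for reps in data:
        m = sum(reps) / len(reps)
        vs.append(sum((x - m) ** 2 for x in reps) / (len(reps) - 1))
    return sum(vs) / len(vs)

m1, m2 = subject_means(method1), subject_means(method2)
bias = sum(b - a for a, b in zip(m1, m2)) / len(m1)  # mean inter-method difference
intra1 = within_subject_var(method1)                 # repeatability of method 1
intra2 = within_subject_var(method2)                 # repeatability of method 2
```

The LME formulation does this jointly across axes with a proper covariance structure and significance tests; the sketch only conveys what "bias" and "intra-method variation" mean for replicated, non-paired data.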

  19. Descriptive Qualitative Method of Evaluation from the Viewpoint of Math Teachers and Its Comparison with the Quantitative Evaluation (Giving scores) Method (A Case Study on the Primary Schools for Girls in Zone 1 of Tehran City)

    OpenAIRE

    Farnaz Ostad-Ali; Mohammad Hasan Behzadi; Ahmad Shahvarani

    2015-01-01

    In recent years, one of the most important developments which have been taking place in the primary school education system is the development of the qualitative-descriptive method of evaluating the students' achievements. The main goals of the qualitative-descriptive evaluation are improving the quality of learning and promoting the level of mental health in teaching-learning environments. Therefore, based on the raised hypothesis, the purpose of this study is to investigate the teachers...

  20. A method for comparison of animal and human alveolar dose and toxic effect of inhaled ozone

    International Nuclear Information System (INIS)

    Hatch, G.E.; Koren, H.; Aissa, M.

    1989-01-01

    Present models for predicting the pulmonary toxicity of O3 in humans from the toxic effects observed in animals rely on dosimetric measurements of O3 mass balance and species comparisons of mechanisms that protect tissue against O3. The goal of the study described was to identify a method to directly compare O3 dose and effect in animals and humans using bronchoalveolar lavage fluid markers. The feasibility of estimating O3 dose to alveoli of animals and humans was demonstrated through assay of reaction products of 18O-labeled O3 in lung surfactant and macrophage pellets of rabbits. The feasibility of using lung lavage fluid protein measurements to quantify the O3 toxic response in humans was demonstrated by the finding of significantly increased lung lavage protein in 10 subjects exposed to 0.4 ppm O3 for 2 h with intermittent periods of heavy exercise. The validity of using the lavage protein marker to quantify the response in animals has already been established. The positive results obtained in both the 18O3 and the lavage protein studies reported here suggest that it should be possible to obtain a direct comparison of both alveolar dose and toxic effect of O3 to alveoli of animals or humans.

  1. Methods for cost management during product development: A review and comparison of different literatures

    NARCIS (Netherlands)

    Wouters, M.; Morales, S.; Grollmuss, S.; Scheer, M.

    2016-01-01

    Purpose The paper provides an overview of research published in the innovation and operations management (IOM) literature on 15 methods for cost management in new product development, and it provides a comparison to an earlier review of the management accounting (MA) literature (Wouters & Morales,

  2. Comments on the paper 'Optical properties of bovine muscle tissue in vitro; a comparison of methods'

    International Nuclear Information System (INIS)

    Marchesini, R.

    1999-01-01

    In reply to R. Marchesini's comments that optical values derived by himself and other authors given in the paper entitled 'Optical properties of bovine muscle tissue in vitro; a comparison of methods' were incorrectly cited, the author, J.R. Zijp, apologizes for this mistake and explains the reasons for this misinterpretation. Letter-to-the-editor

  3. Comparison between Two Radiological Methods for Assessment of Tooth Root Resorption: An In Vitro Study

    Directory of Open Access Journals (Sweden)

    Sabina Saccomanno

    2018-01-01

    Full Text Available Purpose. This study aims to verify the validity of the radiographic image and the most effective radiological techniques for the diagnosis of root resorption to prevent, cure, and reduce it and to verify if radiological images can be helpful in medical and legal situations. Methods. 19 dental elements without root resorption extracted from several patients were examined: endooral and panoramic radiographs were performed, with traditional and digital methods. Then the root of each tooth was dipped into 3-4 mm of 10% nitric acid for 24 hours to simulate the resorption of the root and later submitted again to radiological examinations and measurements using the same criteria and methods. Results. For teeth with root resorption the real measurements and the values obtained with endooral techniques and digital sensors are almost the same, while image values obtained by panoramic radiographs are more distorted than the real ones. Conclusions. Panoramic radiographs are not useful for the diagnosis of root resorption. The endooral examination is, in medical and legal fields, the most valid and objective instrument to detect root resorption. Although the literature suggests that CBCT is a reliable tool in detecting root resorption defects, the increased radiation dosage and expense and the limited availability of CBCT in most clinical settings accentuate the outcome of this study.

  4. A Method for the Comparison of Item Selection Rules in Computerized Adaptive Testing

    Science.gov (United States)

    Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose

    2010-01-01

    In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…

  5. Bioanalysis works in the IAA AMS facility: Comparison of AMS analytical method with LSC method in human mass balance study

    International Nuclear Information System (INIS)

    Miyaoka, Teiji; Isono, Yoshimi; Setani, Kaoru; Sakai, Kumiko; Yamada, Ichimaro; Sato, Yoshiaki; Gunji, Shinobu; Matsui, Takao

    2007-01-01

    Institute of Accelerator Analysis Ltd. (IAA) is the first Contract Research Organization in Japan providing Accelerator Mass Spectrometry (AMS) analysis services for carbon dating and bioanalysis work. The 3 MV AMS machines are maintained by validated analysis methods using multiple control compounds. It is confirmed that these AMS systems have sufficient reliability and sensitivity for each objective. The graphitization of samples for bioanalysis is performed by our own purification lines, which include automatic measurement of the total carbon content of the sample. In this paper, we present the use of AMS analysis in human mass balance and metabolism profiling studies with the IAA 3 MV AMS, comparing results obtained from the same samples with liquid scintillation counting (LSC). Human samples such as plasma, urine and feces were obtained from four healthy volunteers orally administered a 14C-labeled drug Y-700, a novel xanthine oxidase inhibitor, of which the radioactivity was about 3 MBq (85 μCi). For AMS measurement, these samples were diluted 100-10,000-fold with pure water or blank samples. The results indicated that the AMS method had a good correlation with the LSC method (e.g. plasma: r = 0.998, urine: r = 0.997, feces: r = 0.997), and that the drug recovery in the excreta exceeded 92%. The metabolite profiles of plasma, urine and feces obtained with HPLC-AMS corresponded to radio-HPLC results measured at a much higher radioactivity level. These results revealed that AMS analysis at IAA is useful for measuring 14C concentration in bioanalysis studies at very low radioactivity levels.

  6. Comparison of Three Methods of Reducing Test Anxiety: Systematic Desensitization, Implosive Therapy, and Study Counseling

    Science.gov (United States)

    Cornish, Richard D.; Dilley, Josiah S.

    1973-01-01

    Systematic desensitization, implosive therapy, and study counseling have all been effective in reducing test anxiety. In addition, systematic desensitization has been compared to study counseling for effectiveness. This study compares all three methods and suggests that systematic desensitization is more effective than the others, and that implosive…

  7. A comparison of different interpolation methods for wind data in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2017-04-01

    For the assessment of the global climate change and its consequences, the results of computer based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high resolution data are not available. This is particularly true for many regions in Central Asia, where the network of climatological stations often has to be described as sparse. Due to this insufficient database, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial data base for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments for the particular region. The study presented here shows a comparison of different interpolation methods for the wind components u and v for a region in Central Asia with a pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method which can equally be applied for all pressure levels or if different interpolation methods have to be applied for each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. In order to improve the input data, two different interpolation procedures were applied: on the one hand, pure interpolation methods such as inverse distance weighting and ordinary kriging; on the other hand, machine learning algorithms, generalized additive models and regression kriging, considering additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machine, neural networks and ordinary kriging. Inverse distance weighting showed the worst
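Of the pure interpolation methods compared, inverse distance weighting is the simplest to sketch; a minimal implementation for one wind component at hypothetical station locations (coordinates and values are invented):

```python
import math

def idw(known, query, power=2):
    """Inverse distance weighted interpolation of scattered values.
    known: list of ((x, y), value) pairs; query: (x, y) target point."""
    num = den = 0.0
    for (x, y), v in known:
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0:
            return v                 # exact hit at a station: return its value
        w = 1.0 / d ** power         # closer stations get larger weights
        num += w * v
        den += w
    return num / den

# Hypothetical u-wind component (m/s) at four station locations:
stations = [((0.0, 0.0), 3.0), ((1.0, 0.0), 5.0),
            ((0.0, 1.0), 4.0), ((1.0, 1.0), 6.0)]
u = idw(stations, (0.5, 0.5))   # centre point: all weights equal, so the mean
```

Ordinary kriging replaces the fixed 1/d² weights with weights derived from a fitted variogram, and regression kriging (the study's best performer) first regresses on covariates such as geopotential and topography, then kriges the residuals.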

  8. A study of densitometry comparison among three radiographic processing solutions

    International Nuclear Information System (INIS)

    Changizi, V.; Jazayeri, E.; Talaeepour, A.

    2006-01-01

    The radiographic image accuracy depends on the X-ray film information visibility. Good visibility is found by good contrast. Radiation exposure parameters (kVp, mAs) and film processing conditions have impact on contrast. In dentistry radiography machines, exposure time and processing procedure are set by the radiographer. Non-optimized exposure time and processing conditions may lead to incorrect diagnosis and re-exposure of the patient. Therefore, we studied the performance of three different available processing solutions with dental X-ray film. Materials and Methods: Dental intraoral E-speed films, size 2 (Kodak company, USA) were used in this study. These films were developed in a manual processor using three different brands of processing solution: 1) Taifsaz (Iran), 2) Darutasvir (Iran) and 3) Agfa (Germany) at temperatures of 25 °C, 28 °C and 30 °C and at three different exposure times, 0.2 s, 0.25 s and 0.35 s. Performance was evaluated with respect to base plus fog, relative contrast and relative speed. Results: Darutasvir processing solution, the cheapest one, showed higher base plus fog density at 25 °C and 30 °C than that of Taifsaz and Agfa solutions. Also, Darutasvir solution was found to have better relative contrast than that of the others, except for 30 °C at 0.25 s. Relative speed was higher in Darutasvir solution than Agfa for 25 °C at the three exposure times used in this study, for 28 °C at 0.2 s and for 30 °C at 0.35 s. Taifsaz processing solution was in the second order with respect to tested conditions. Conclusion: Comparison among available X-ray film processing solutions for different temperatures at different exposure times can help to maintain image quality while patient exposure and film cost are kept considerably low

  9. A real case study on transportation scenario comparison

    Directory of Open Access Journals (Sweden)

    Tsoukiás A.

    2002-01-01

    Full Text Available This paper presents a real case study dealing with the comparison of transport scenarios. The study is conducted within a larger project concerning the establishment of the maritime traffic policy in Greece. The paper presents the problem situation and an appropriate problem formulation. Moreover a detailed version of the evaluation model is presented in the paper. The model consists of a complex hierarchy of evaluation models enabling us to take into account the multiple dimensions and points of view of the actors involved in the evaluations.

  10. Comparison of matrix method and ray tracing in the study of complex optical systems

    Science.gov (United States)

    Anterrieu, Eric; Perez, Jose-Philippe

    2000-06-01

    In the context of the classical study of optical systems within the geometrical Gauss approximation, the cardinal elements are efficiently obtained with the aid of the transfer matrix between the input and output planes of the system. In order to take into account the geometrical aberrations, a ray tracing approach, using the Snell- Descartes laws, has been implemented in an interactive software. Both methods are applied for measuring the correction to be done to a human eye suffering from ametropia. This software may be used by optometrists and ophthalmologists for solving the problems encountered when considering this pathology. The ray tracing approach gives a significant improvement and could be very helpful for a better understanding of an eventual surgical act.
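The transfer-matrix calculation described above can be sketched with 2×2 ray-transfer (ABCD) matrices acting on a ray state (height y, angle u); the system below (free space, thin lens, free space) is an illustrative example, not the paper's eye model:

```python
# Paraxial (Gaussian) optics with 2x2 ray-transfer (ABCD) matrices.
# A ray is (height y, angle u); propagation and thin-lens refraction
# are matrix multiplications. All dimensions below are illustrative (mm).

def matmul(m, n):
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def translation(d):
    """Free-space propagation over distance d."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """Refraction by a thin lens of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

# System: 50 mm of free space, a thin lens with f = 100 mm, then 100 mm
# of free space. Matrices compose right-to-left along the light path.
system = matmul(translation(100.0), matmul(thin_lens(100.0), translation(50.0)))

# A ray entering parallel to the axis at height 2 mm crosses the axis at
# the back focal plane, so the output height is zero.
y_in, u_in = 2.0, 0.0
y_out = system[0][0] * y_in + system[0][1] * u_in
u_out = system[1][0] * y_in + system[1][1] * u_in
```

The cardinal elements fall out of the composed matrix (e.g., the effective focal length is -1/C); the ray-tracing approach in the paper goes beyond this by applying the exact Snell-Descartes laws, capturing the aberrations the paraxial matrices ignore.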

  11. A method for statistical comparison of data sets and its uses in analysis of nuclear physics data

    International Nuclear Information System (INIS)

    Bityukov, S.I.; Smirnova, V.V.; Krasnikov, N.V.; Maksimushkina, A.V.; Nikitenko, A.N.

    2014-01-01

    The authors propose a method for statistical comparison of two data sets, based on the statistical comparison of histograms. As an estimator of the quality of the decision made, it is proposed to use a value that can be interpreted as the probability that the decision (that the data sets differ) is correct. [ru

  12. A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes

    CSIR Research Space (South Africa)

    Haelterman, R

    2018-02-01


  13. Regional Attenuation in Northern California: A Comparison of Five 1-D Q Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ford, S R; Dreger, D S; Mayeda, K; Walter, W R; Malagnini, L; Phillips, W S

    2007-08-03

    The determination of regional attenuation Q^-1 can depend upon the analysis method employed. The discrepancies between methods are due to differing parameterizations (e.g., geometrical spreading rates), employed datasets (e.g., choice of path lengths and sources), and the methodologies themselves (e.g., measurement in the frequency or time domain). Here we apply five different attenuation methodologies to a Northern California dataset. The methods are: (1) coda normalization (CN), (2) two-station (TS), (3) reverse two-station (RTS), (4) source-pair/receiver-pair (SPRP), and (5) coda-source normalization (CS). The methods are used to measure Q of the regional phase, Lg (Q_Lg), and its power-law dependence on frequency of the form Q0 f^η with controlled parameterization in the well-studied region of Northern California using a high-quality dataset from the Berkeley Digital Seismic Network. We investigate the difference in power-law Q calculated among the methods by focusing on the San Francisco Bay Area, where knowledge of attenuation is an important part of seismic hazard mitigation. This approximately homogeneous subset of our data lies in a small region along the Franciscan block. All methods return similar power-law parameters, though the range of the joint 95% confidence regions is large (Q0 = 85 ± 40; η = 0.65 ± 0.35). The RTS and TS methods differ the most from the other methods and from each other. This may be due to the removal of the site term in the RTS method, which is shown to be significant in the San Francisco Bay Area. In order to completely understand the range of power-law Q in a region, it is advisable to use several methods to calculate the model. We also test the sensitivity of each method to changes in geometrical spreading, Lg frequency bandwidth, the distance range of data, and the Lg measurement window. For a given method, there are significant differences in the power-law parameters, Q0 and η.
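The power-law form Q(f) = Q0·f^η that all five methods estimate can be fit by ordinary least squares after a log transform; a minimal sketch on noise-free synthetic values chosen to echo the reported central estimates (Q0 = 85, η = 0.65):

```python
import math

# Fit Q(f) = Q0 * f**eta by least squares in log-log space.
# Synthetic, illustrative "measurements" generated with Q0 = 85, eta = 0.65:
freqs = [0.5, 1.0, 2.0, 4.0, 8.0]          # Hz
q_obs = [85.0 * f ** 0.65 for f in freqs]

logf = [math.log(f) for f in freqs]
logq = [math.log(q) for q in q_obs]
n = len(freqs)
mf = sum(logf) / n
mq = sum(logq) / n

# Slope of the log-log regression line is eta; intercept gives log(Q0).
eta = (sum((a - mf) * (b - mq) for a, b in zip(logf, logq))
       / sum((a - mf) ** 2 for a in logf))
q0 = math.exp(mq - eta * mf)
```

Real Q_Lg estimates carry correlated errors in Q0 and η (hence the paper's joint confidence regions: a higher η trades off against a lower Q0), which a simple independent fit like this does not convey.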

  14. Comparison of manual versus automated data collection method for an evidence-based nursing practice study.

    Science.gov (United States)

    Byrne, M D; Jordan, T R; Welle, T

    2013-01-01

    The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 "false negative" patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Automated data collection for analysis of nursing-specific phenomena is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare.

  15. Comparison of Three Popular Methods for Recruiting Young Persons Who Inject Drugs for Interventional Studies.

    Science.gov (United States)

    Collier, Melissa G; Garfein, Richard S; Cuevas-Mota, Jazmine; Teshale, Eyasu H

    2017-08-01

    Persons who inject drugs (PWID) are at risk for adverse health outcomes as a result of their drug use, and the resulting social stigma makes this a difficult population to reach for interventions aimed at reducing morbidity and mortality. During our study of adult PWID aged ≤40 years living in San Diego during 2009 and 2010, we compared three different sampling methods: respondent-driven sampling (RDS), venue-based sampling at one syringe exchange program (SEP), and street-based outreach. We compared demographic, socioeconomic, health, and behavioral factors, tested participants for HIV, hepatitis B virus (HBV), and hepatitis C virus (HCV), and compared results across the three methods. Overall, 561 (74.8%) of the targeted 750 PWID were enrolled. Venue-based convenience sampling enrolled 96% (242/250) of the targeted participants, followed closely by street-based outreach with 92% (232/250) recruited. While RDS yielded the fewest recruits, producing only 35% (87/250) of the expected participants, those recruited through RDS were more likely to be female, more racially diverse, and younger.

  16. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
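Of the four sampling methods compared, Lexicase Selection is easy to sketch: a parent is chosen by filtering candidates case-by-case in random order, keeping only those with the best error on each fitness case. A minimal illustration with invented error values:

```python
import random

random.seed(1)

# errors[i][j] = error of individual i on fitness case j (illustrative data).
errors = [
    [0.0, 3.0, 1.0, 0.0],
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 3.0, 2.0, 0.0],
    [2.0, 1.0, 1.0, 1.0],
]

def lexicase_select(errors):
    """Pick one parent by lexicase selection over randomly ordered cases."""
    candidates = list(range(len(errors)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)                 # case order is randomized each call
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:          # a single survivor wins outright
            break
    return random.choice(candidates)      # tie-break among survivors

parent = lexicase_select(errors)
```

Note how individual 3, mediocre everywhere, can never be selected, while specialists that excel on some cases survive the filtering; this pressure toward case-level elites is what distinguishes lexicase from aggregate-fitness selection, whereas the interleaved-sampling variants instead change *which* cases are evaluated each generation.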

  17. Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.

    Science.gov (United States)

    Auerbach, Benjamin M; Ruff, Christopher B

    2004-12-01

    In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.

  18. A Comparison of Didactic and Inquiry Teaching Methods in a Rural Community College Earth Science Course

    Science.gov (United States)

    Beam, Margery Elizabeth

    The combination of increasing enrollment and the importance of providing transfer students a solid foundation in science calls for science faculty to evaluate teaching methods in rural community colleges. The purpose of this study was to examine and compare the effectiveness of two teaching methods, inquiry teaching methods and didactic teaching methods, applied in a rural community college earth science course. Two groups of students were taught the same content via inquiry and didactic teaching methods. Analysis of the quantitative data used a non-parametric ranking statistical test in which the difference between the rankings and the median of the post-test scores was analyzed for significance. Results indicated no statistically significant difference between the teaching methods for the group of students participating in the research. The practical and educational significance of this study lies in the valuable perspectives it provides on teaching methods and student learning styles in rural community colleges.

  19. The continual reassessment method: comparison of Bayesian stopping rules for dose-ranging studies.

    Science.gov (United States)

    Zohar, S; Chevret, S

    2001-10-15

    The continual reassessment method (CRM) provides a Bayesian estimation of the maximum tolerated dose (MTD) in phase I clinical trials and is also used to estimate the minimal efficacy dose (MED) in phase II clinical trials. In this paper we propose Bayesian stopping rules for the CRM, based on either posterior or predictive probability distributions, that can be applied sequentially during the trial. These rules aim at early detection of either a mis-choice of dose range or a prefixed gain in the point estimate or accuracy of the estimated probability of response associated with the MTD (or MED). They were compared through a simulation study under six situations that could represent the underlying unknown dose-response (either toxicity or failure) relationship, in terms of sample size, probability of correct selection and bias of the response probability associated with the MTD (or MED). Our results show that the stopping rules act correctly, with early stopping by the first two rules, based on the posterior distribution, when the actual underlying dose-response relationship is far from that initially supposed, while the rules based on predictive gain functions stop inclusions after 20 patients on average whatever the actual dose-response curve, that is, depending mostly on the accumulated data. The stopping rules were then applied to a data set from a dose-ranging phase II clinical trial aiming to estimate the MED of midazolam in the sedation of infants during cardiac catheterization. All these findings suggest the early use of the first two rules to detect a mis-choice of dose range, while they confirm the requirement of including at least 20 patients at the same dose to reach an accurate estimate of the MTD (or MED). A two-stage design is under study. Copyright 2001 John Wiley & Sons, Ltd.
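
    Independent of the stopping rules proposed in this record, the CRM dose recommendation itself can be sketched with a grid-approximated posterior for a one-parameter power model. The skeleton, prior variance and grid below are illustrative assumptions, not the authors' exact setup:

```python
import math

def crm_recommend(skeleton, data, target=0.25):
    """Grid-approximation CRM sketch: power model p_i = s_i ** exp(a) with a
    normal(0, sigma^2) prior on a; returns the index of the recommended dose.

    data: list of (dose_index, toxicity) pairs, toxicity being 0 or 1.
    """
    sigma2 = 1.34                                   # illustrative prior variance
    grid = [-3 + 6 * k / 200 for k in range(201)]   # grid over the parameter a
    post = []
    for a in grid:
        logp = -a * a / (2 * sigma2)                # log prior (up to a constant)
        for dose, tox in data:                      # binomial log likelihood
            p = skeleton[dose] ** math.exp(a)
            logp += math.log(p) if tox else math.log(1 - p)
        post.append(math.exp(logp))
    z = sum(post)
    w = [p / z for p in post]                       # normalised posterior weights
    # posterior-mean toxicity at each dose; recommend the dose closest to target
    est = [sum(wi * (s ** math.exp(a)) for wi, a in zip(w, grid))
           for s in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))
```

    With several toxicities observed at a high dose, the posterior shifts all estimated toxicity probabilities upward and the recommendation de-escalates.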

  20. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2010-01-01

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods

  1. A GIS-based comparison of the Mexican national and IUCN methods for determining extinction risk.

    Science.gov (United States)

    Arroyo, Teresa P Feria; Olson, Mark E; García-Mendoza, Abisaí; Solano, Eloy

    2009-10-01

    The national systems used in the evaluation of extinction risk are often touted as more readily applied and somehow more regionally appropriate than the system of the International Union for Conservation of Nature (IUCN). We compared risk assessments of the Mexican national system (method for evaluation of risk of extinction of wild species [MER]) with the IUCN system for the 16 Polianthes taxa (Agavaceae), a genus of plants with marked variation in distribution sizes. We used a novel combination of herbarium data, geographic information systems (GIS), and species distribution models to provide rapid, repeatable estimates of extinction risk. Our GIS method showed that the MER and the IUCN system use similar data. Our comparison illustrates how the IUCN method can be applied even when all desirable data are not available, and that the MER offers no special regional advantage with respect to the IUCN regional system. Instead, our results coincided, with both systems identifying 14 taxa of conservation concern and the remaining two taxa of low risk, largely because both systems use similar information. An obstacle for the application of the MER is that there are no standards for quantifying the criteria of habitat condition and intrinsic biological vulnerability. If these impossible-to-quantify criteria are left out, what are left are geographical distribution and the impact of human activity, essentially the considerations we were able to assess for the IUCN method. Our method has the advantage of making the IUCN criteria easy to apply, and because each step can be standardized between studies, it ensures greater comparability of extinction risk estimates among taxa.

  2. A comparison of the weights-of-evidence method and probabilistic neural networks

    Science.gov (United States)

    Singer, Donald A.; Kouda, Ryoichi

    1999-01-01

    The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method with probabilistic neural networks is performed here with data from Chisel Lake-Anderson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Although these data contain too few deposits, these tests demonstrate the neural network's ability to make unbiased probability estimates and its lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias, with most errors being barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes' rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake-Anderson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits.
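
    The core weights-of-evidence computation discussed in this record is a direct application of Bayes' rule to a binary evidence layer: the positive weight compares the probability of the evidence among deposit cells with that among non-deposit cells. A sketch with hypothetical cell counts:

```python
import math

def weights_of_evidence(n_b_d, n_b_nd, n_d, n_nd):
    """Weights of evidence for one binary evidence layer B and deposits D.

    n_b_d:  cells with B present among the n_d deposit cells.
    n_b_nd: cells with B present among the n_nd non-deposit cells.
    Returns (W+, W-, contrast C = W+ - W-).
    """
    p_b_d = n_b_d / n_d        # P(B | D)
    p_b_nd = n_b_nd / n_nd     # P(B | not D)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus
```

    Summing the weights of several layers into a posterior log-odds is only valid when the layers are conditionally independent, which is exactly the assumption the record's expected-number-of-deposits test shows to be violated.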

  3. A surrogate method for comparison analysis of salivary concentrations of Xylitol-containing products

    Directory of Open Access Journals (Sweden)

    Zhou Lingmei

    2008-02-01

    g.min/mL, gummy bears – 55.9 μg.min/mL, and syrup – 59.0 μg.min/mL. Conclusion The comparison method demonstrated high reliability and validity. In both studies other xylitol-containing products had time curves and mean xylitol concentration peaks similar to xylitol pellet gum suggesting this test may be a surrogate for longer studies comparing various products.

  4. Comparison of multivariate methods for studying the G×E interaction

    Directory of Open Access Journals (Sweden)

    Deoclécio Domingos Garbuglio

    2015-12-01

    Full Text Available The objective of this work was to evaluate three statistical multivariate methods for analyzing adaptability and environmental stratification simultaneously, using data from maize cultivars indicated for planting in the State of Paraná, Brazil. Under the FGGE and GGE methods, the genotypic effect adjusts the G×E interactions across environments, resulting in a high percentage of explanation associated with a smaller number of axes. Environmental stratification via the FGGE and GGE methods showed similar responses, while the AMMI method did not ensure grouping of environments. The adaptability analysis revealed low divergence among the responses obtained through the three methods. Genotypes P30F35, P30F53, P30R50, P30K64 and AS 1570 showed high yields associated with general adaptability. The FGGE method allowed differences in yield responses in specific regions, and the impact at locations belonging to the same environmental group (through the correlation rE), to be associated with the level of the simple portion of the G×E interaction.
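
    A GGE-style analysis of a genotype-by-environment yield table, as compared in this record, centres the table by environment means (removing the environment main effect, keeping G plus G×E) and decomposes the remainder by singular value decomposition. A minimal sketch; the symmetric square-root scaling is one common biplot convention, not necessarily the authors':

```python
import numpy as np

def gge_biplot(Y, k=2):
    """GGE decomposition sketch for a genotypes-by-environments table Y.

    Centre by environment (column) means, then take a rank-k SVD.
    Returns genotype scores, environment scores and the fraction of the
    centred variation explained by the first k axes.
    """
    G = Y - Y.mean(axis=0, keepdims=True)        # remove environment main effect
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    scores_gen = U[:, :k] * s[:k] ** 0.5         # symmetric singular-value scaling
    scores_env = Vt[:k].T * s[:k] ** 0.5
    return scores_gen, scores_env, explained
```

    With k equal to the full rank, the product of the two score matrices reconstructs the centred table exactly; with k = 2 it gives the usual biplot coordinates.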

  5. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, there are no inverse matrix steps required. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element FE(3D) method. The comparison between the NTM method and the finite element method results shows that the modal percentage deviation is increased when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion in the mode shape.
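
    The record's 4 × 4 Timoshenko formulation is too long to sketch here, but the transfer-matrix idea it relies on, chaining one matrix per uniform segment and forcing a boundary condition on the product, can be shown in its simplest 2 × 2 form for longitudinal vibration of a stepped rod. The segment properties and the fixed-free boundary conditions below are illustrative assumptions:

```python
import math
import numpy as np

def field_matrix(beta, EA, L):
    # Transfer matrix of one uniform rod segment for longitudinal vibration,
    # acting on the state vector (axial displacement u, axial force N).
    return np.array([[math.cos(beta * L), math.sin(beta * L) / (EA * beta)],
                     [-EA * beta * math.sin(beta * L), math.cos(beta * L)]])

def char_value(omega, segments):
    # segments: list of (E, A, rho, L); chain the per-segment matrices.
    T = np.eye(2)
    for E, A, rho, L in segments:
        c = math.sqrt(E / rho)                     # wave speed in the segment
        T = field_matrix(omega / c, E * A, L) @ T  # left-multiply in order
    # Fixed-free rod: u(0) = 0 and N(L) = T[1,1] * N(0) = 0 at a natural frequency.
    return T[1, 1]

def natural_frequency(segments, lo, hi, tol=1e-10):
    # Bisection on the characteristic value over a bracketing interval [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if char_value(lo, segments) * char_value(mid, segments) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    Splitting a uniform rod into two identical segments must reproduce the one-segment answer, which gives a cheap correctness check: for a fixed-free rod of unit length and unit wave speed, the first natural frequency is pi/2.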

  6. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim

    2010-09-17

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.
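
    The approximation spaces compared in this record are defined by sets of polynomial multi-indices. A sketch of the standard isotropic definitions of the TP, TD and HC sets at level w (these thresholds are the usual convention; anisotropic weights are omitted):

```python
from itertools import product

def multi_index_set(space, N, w):
    """Isotropic polynomial multi-index sets in N random dimensions, level w.

    TP: max_i alpha_i <= w
    TD: sum_i alpha_i <= w
    HC: prod_i (alpha_i + 1) <= w + 1
    """
    idx = []
    for alpha in product(range(w + 1), repeat=N):
        if space == "TP":
            ok = max(alpha) <= w
        elif space == "TD":
            ok = sum(alpha) <= w
        elif space == "HC":
            prod_val = 1
            for a in alpha:
                prod_val *= a + 1
            ok = prod_val <= w + 1
        else:
            raise ValueError(space)
        if ok:
            idx.append(alpha)
    return idx
```

    The nesting HC ⊆ TD ⊆ TP explains the work comparison: HC spaces grow far more slowly with dimension than TP spaces at the same level.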

  7. Activity optimization method in nuclear medicine: a comparison with ROC analysis

    International Nuclear Information System (INIS)

    Perez Diaz, M.; Diaz Rizo, O.; Lopez, A.; Estevez Aparicio, E.; Roque Diaz, R.

    2006-01-01

    Full text of publication follows: A discriminant method for optimizing the administered activity is validated by comparison with ROC curves. The method is tested in 21 SPECT studies performed with a cardiac phantom. Three different cold lesions (L1, L2 and L3) are placed in the myocardium wall by pairs for each SPECT. Three activities (84 MBq, 37 MBq or 18.5 MBq) of 99mTc diluted in water are used as background. The discriminant analysis is used to select, among the variables measured in the obtained tomographic images, the parameters that characterize image quality: a group of lesion-to-background (L/B) and signal-to-noise (S/N) ratios. Two clusters with different image quality (p=0.021) are obtained from the measured variables. The first one contains the studies performed with 37 MBq and 84 MBq, and the second one the studies made with 18.5 MBq. The cluster classification constitutes the dependent variable in the discriminant function. The ratios B/L1, B/L2 and B/L3 are the parameters able to construct the function, with 100 % of cases correctly classified into the clusters. The value of 37 MBq is the lowest tested activity that yields good results for the L/B variables, with no significant differences with respect to 84 MBq (p>0.05). This result is consistent with the ROC analysis, in which 37 MBq gives the highest area under the curve and low false-positive and false-negative rates, with significant differences with respect to 18.5 MBq (p=0.008).
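
    The area under the ROC curve used for the comparison in this record equals, for raw scores, the normalised Mann-Whitney statistic: the probability that a randomly drawn positive case scores higher than a randomly drawn negative case. A minimal sketch with hypothetical scores:

```python
def roc_auc(scores_pos, scores_neg):
    """Empirical AUC via pairwise comparison (Mann-Whitney form).

    Counts each positive/negative pair once, with ties worth 1/2.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 1.0 means perfect separation of the two groups; 0.5 means the scores carry no discriminating information.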

  8. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    2011-03-01

    Full Text Available Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005) and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI 0, 1.3] for Kenya (2007). Incidence trends were similar for all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR to guide whether incidence assays can produce reliable estimates of national HIV incidence.

  9. Comparison of methods for generating typical meteorological year using meteorological data from a tropical environment

    Energy Technology Data Exchange (ETDEWEB)

    Janjai, S.; Deeyai, P. [Laboratory of Tropical Atmospheric Physics, Department of Physics, Faculty of Science, Silpakorn University, Nakhon Pathom 73000 (Thailand)

    2009-04-15

    This paper presents a comparison of methods for generating typical meteorological year (TMY) data sets using a 10-year period of meteorological data from four stations in a tropical environment of Thailand. These methods are the Sandia National Laboratories method, the Danish method and the Festa and Ratto method. In investigating their performance, these methods were employed to generate TMYs for each station. For all parameters of the TMYs and all stations, statistical tests indicate that there is no significant difference between the 10-year average values of these parameters and the corresponding average values from the TMY generated by each method. The TMY obtained from each method was also used as input data to simulate two solar water heating systems and two photovoltaic systems of different sizes at the four stations, using the TRNSYS simulation program. Solar fractions and electrical output calculated using TMYs are in good agreement with those computed employing the 10-year period of hourly meteorological data. It is concluded that the performance of the three methods shows no significant difference for any station under this investigation. Due to its simplicity, the Sandia National Laboratories method is recommended for the generation of TMYs for this tropical environment. The TMYs developed in this work can be used for solar energy and energy conservation applications at the four locations in Thailand. (author)
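
    The Sandia-type TMY selection compared in this record ranks candidate months by a Finkelstein-Schafer statistic, the mean absolute difference between a candidate month's empirical CDF and the long-term CDF for that calendar month. A sketch that evaluates the difference on the pooled sample values; the parameter weighting used in full TMY procedures is omitted:

```python
def fs_statistic(candidate, long_term):
    """Finkelstein-Schafer statistic sketch for one weather parameter.

    candidate: daily values of one candidate month.
    long_term: daily values of the same calendar month over all years.
    """
    def ecdf(sample, x):
        # empirical CDF: fraction of the sample at or below x
        return sum(1 for v in sample if v <= x) / len(sample)

    xs = sorted(set(candidate) | set(long_term))
    return sum(abs(ecdf(candidate, x) - ecdf(long_term, x)) for x in xs) / len(xs)
```

    The candidate month with the smallest (weighted) statistic across parameters is taken as the "typical" month; identical distributions give a statistic of zero.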

  10. A comparison of two methods for assessing awareness of antitobacco television advertisements.

    Science.gov (United States)

    Luxenberg, Michael G; Greenseid, Lija O; Depue, Jacob; Mowery, Andrea; Dreher, Marietta; Larsen, Lindsay S; Schillo, Barbara

    2016-05-01

    This study uses an online survey panel to compare two approaches for assessing ad awareness. The first uses a screenshot of a television ad and the second shows participants a full-length video of the ad. We randomly assigned 1034 Minnesota respondents to view a screenshot or a streaming video from two antitobacco ads. The study used one ad from ClearWay Minnesota's "We All Pay the Price" campaign, and one from the Centers for Disease Control's "Tips" campaign. The key measure used to assess ad awareness was aided ad recall. Multivariate analyses of recall with cessation behaviour and attitudinal beliefs assessed the validity of these approaches. The respondents who saw the video reported significantly higher recall than those who saw the screenshot. Associations of recall with cessation behaviour and attitudinal beliefs were stronger and in the anticipated direction using the screenshot method. Over 20% of the respondents assigned to the video group could not see the ad. Respondents under 45 years old, those with incomes greater than $35,000, and women were reportedly less able to access the video. The methodology used to assess recall matters. Campaigns may exaggerate the successes or failures of their media campaigns depending on the approach they employ and how they compare it to other media campaign evaluations. When incorporating streaming video, researchers should consider accessibility and report possible response bias. Researchers should fully define the measures they use, specify any viewing accessibility issues, and make ad comparisons only when using comparable methods. Published by the BMJ Publishing Group Limited.

  11. Consumer preferences for hearing aid attributes: a comparison of rating and conjoint analysis methods.

    Science.gov (United States)

    Bridges, John F P; Lataille, Angela T; Buttorff, Christine; White, Sharon; Niparko, John K

    2012-03-01

    Low utilization of hearing aids has drawn increased attention to the study of consumer preferences using both simple ratings (e.g., Likert scale) and conjoint analyses, but these two approaches often produce inconsistent results. The study aims to directly compare Likert scales and conjoint analysis in identifying important attributes associated with hearing aids among those with hearing loss. Seven attributes of hearing aids were identified through qualitative research: performance in quiet settings, comfort, feedback, frequency of battery replacement, purchase price, water and sweat resistance, and performance in noisy settings. The preferences of 75 outpatients with hearing loss were measured with both a 5-point Likert scale and with 8 paired-comparison conjoint tasks (the latter being analyzed using OLS [ordinary least squares] and logistic regression). Results were compared by examining implied willingness-to-pay and Pearson's rho. A total of 56 respondents (75%) provided complete responses. Two thirds of respondents were male, most had sensorineural hearing loss, and most were older than 50; 44% of respondents had never used a hearing aid. Both methods identified improved performance in noisy settings as the most valued attribute. Respondents were twice as likely to buy a hearing aid with better functionality in noisy environments (p < .001), and willingness to pay for this attribute ranged from US$2674 with the Likert method to US$9000 in the conjoint analysis. The authors find a high level of concordance between the methods, a result in stark contrast with previous research. The authors conclude that their result stems from constraining the levels on the Likert scale.

  12. A comparison between workshop and DVD methods of training for physiotherapists in diagnostic ultrasound

    International Nuclear Information System (INIS)

    McKiernan, Sharmaine; Chiarelli, Pauline; Warren-Forward, Helen

    2012-01-01

    Diagnostic ultrasound has expanded into physiotherapy, though physiotherapists report that training in the modality is limited. To address this, a training package was specifically developed for physiotherapists within Australia. The aim of this study was to evaluate the training package for improved educational outcome and to ascertain if there was a difference in outcome between two forms of delivery. The training package was delivered either during a workshop, where it was presented face to face, or via a self-paced DVD, which was mailed to participants. Both participant groups completed a web-based assessment prior to and at the completion of the training, covering their knowledge of ultrasound physics, scanning technique and anatomy. Pre- and post-training assessment scores were available for 84 participants who attended a workshop and 96 participants who received the DVD. Important and statistically significant (p < 0.05) increases in assessment scores from the beginning to the end of the training program were seen in both groups. On average, workshop participant scores improved by 37% and DVD participant scores improved by 27%. No statistically significant difference in the post-assessment scores of the workshop-trained and DVD-trained participants was evident, so both methods of training can be considered beneficial to the professional development of physiotherapists in the use of diagnostic ultrasound within their profession.

  13. Study of comparison between neutron activation analysis and the other analytical methods

    International Nuclear Information System (INIS)

    Nagatsuka, Sumiko

    1986-01-01

    The neutron activation analysis (NAA) is compared with other analytical methods based on various data. It is concluded that NAA is most frequently used for the analysis of NBS environmental standard samples. NAA is suitable for the analysis of trace elements contained in environmental samples, since non-destructive quantitative determination can be carried out simultaneously for different elements. NAA and XRF are the only methods which can be used for analyzing oil samples, which also indicates the usefulness of non-destructive techniques. In any standard sample, NAA can achieve a high accuracy for more than 90 % of the elements contained. On the other hand, the accuracy of the other analytical methods examined varies depending on the type of sample. Regarding precision, NAA shows, for any standard sample, the smallest proportion of elements determined with a coefficient of variation (C.V.) of less than 10 % relative to the total number of elements contained. However, the total number of elements which can be determined by NAA with a C.V. of less than 10 % is greater than that by any of the other four methods examined. It should be noticed that there are some elements which can be determined only by NAA. (Nogami, K.)

  14. A Mixed Methods Sampling Methodology for a Multisite Case Study

    Science.gov (United States)

    Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie

    2012-01-01

    The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…

  15. A comparison of moving object detection methods for real-time moving object detection

    Science.gov (United States)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one which works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. There are several moving object detection methods noted in the literature, but few of them are suitable for real-time moving object detection. Most of the methods which provide for real-time detection are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: background subtraction, Gaussian mixture models, wavelet-based methods and optical flow-based methods. The work is based on an evaluation of these four methods using two different sets of cameras and two different scenes. The methods have been implemented in MATLAB, and results are compared based on completeness of detected objects, noise, sensitivity to light changes, processing time, etc. After comparison, it is observed that the optical flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
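
    The simplest of the four families compared, background subtraction, can be sketched with a running-average background model; the learning rate and threshold below are illustrative choices, not values from the paper:

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=25):
    """Running-average background subtraction sketch.

    The background model is an exponential moving average of past grayscale
    frames; pixels deviating from it by more than `thresh` are flagged as
    moving. Returns one boolean mask per input frame.
    """
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames:
        f = frame.astype(np.float64)
        masks.append(np.abs(f - background) > thresh)     # foreground mask
        background = (1 - alpha) * background + alpha * f  # update the model
    return masks
```

    The Gaussian mixture model variant replaces the single running average with several per-pixel Gaussians, which copes better with gradual light changes, one of the comparison criteria in this record.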

  16. Comparison of methods for measuring the ion exchange capacity of a soil. Development of a quick method

    International Nuclear Information System (INIS)

    Amavis, R.

    1959-01-01

    In the course of a study on the movement of radioactive ions in soil, we had to measure the cationic exchange capacity of various soil samples, this parameter being one of the most important in assessing the extent of fixation of radioactive ions in the ground. The object of this report is to describe the various methods used and to compare the results obtained. A colorimetric method, using Co(NH₃)₆³⁺ as the exchangeable ion, was developed. It gives results comparable to those obtained with conventional methods, whilst considerably reducing the time necessary for the operations. (author) [fr]

  17. Abstinence, Social Norms, and Drink Responsibly Messages: A Comparison Study

    Science.gov (United States)

    Glassman, Tavis J.; Kruger, Jessica Sloan; Deakins, Bethany A.; Paprzycki, Peter; Blavos, Alexis A.; Hutzelman, Erin N.; Diehr, Aaron

    2016-01-01

    Objective: The purpose of this study was to determine which type of prevention message (abstinence, social norms, or responsible drinking) was most effective at reducing alcohol consumption. Participants: The subjects from this study included 194 college students from a public university. Methods: Researchers employed a quasi-experimental design,…

  18. A Comparison of Fully Automated Methods of Data Analysis and Computer Assisted Heuristic Methods in an Electrode Kinetic Study of the Pathologically Variable [Fe(CN) 6 ] 3–/4– Process by AC Voltammetry

    KAUST Repository

    Morris, Graham P.; Simonov, Alexandr N.; Mashkina, Elena A.; Bordas, Rafel; Gillow, Kathryn; Baker, Ruth E.; Gavaghan, David J.; Bond, Alan M.

    2013-01-01

    Fully automated and computer assisted heuristic data analysis approaches have been applied to a series of AC voltammetric experiments undertaken on the [Fe(CN)6]3-/4- process at a glassy carbon electrode in 3 M KCl aqueous electrolyte. The recovered

  19. Transperineal Prostate Core Needle Biopsy: A Comparison of Coaxial Versus Noncoaxial Method in a Randomised Trial

    International Nuclear Information System (INIS)

    Babaei Jandaghi, Ali; Habibzadeh, Habib; Falahatkar, Siavash; Heidarzadeh, Abtin; Pourghorban, Ramin

    2016-01-01

    Purpose: To compare the procedural time and complication rate of the coaxial technique with those of the noncoaxial technique in transperineal prostate biopsy. Materials and Methods: Transperineal prostate biopsy with coaxial (first group, n = 120) and noncoaxial (second group, n = 120) methods was performed randomly in 240 patients. The procedural time was recorded. The level of pain experienced during the procedure was assessed on a visual analogue scale (VAS), and the rate of complications was evaluated in a comparison of the two methods. Results: The procedural time was significantly shorter in the first group (p < 0.001). In the first group, pain occurred less frequently (p = 0.002), with a significantly lower VAS score being experienced (p < 0.002). No patient had post-procedural fever. Haematuria (p = 0.029) and haemorrhage from the site of biopsy (p < 0.001) were seen less frequently in the first group. There was no significant difference in the rate of urethral haemorrhage between the two groups (p = 0.059). Urinary retention occurred less commonly in the first group (p = 0.029). No significant difference was seen in the rate of dysuria between the two groups (p = 0.078). Conclusions: Transperineal prostate biopsy using a coaxial needle is a faster and less painful method with a lower rate of complications compared with the conventional noncoaxial technique.

  20. Transperineal Prostate Core Needle Biopsy: A Comparison of Coaxial Versus Noncoaxial Method in a Randomised Trial

    Energy Technology Data Exchange (ETDEWEB)

    Babaei Jandaghi, Ali [Guilan University of Medical Sciences, Department of Radiology, Poursina Hospital (Iran, Islamic Republic of); Habibzadeh, Habib; Falahatkar, Siavash [Guilan University of Medical Sciences, Urology Research Center, Razi Hospital (Iran, Islamic Republic of); Heidarzadeh, Abtin [Guilan University of Medical Sciences, Department of Community Medicine (Iran, Islamic Republic of); Pourghorban, Ramin, E-mail: ramin-p2005@yahoo.com [Shahid Beheshti University of Medical Sciences, Department of Radiology, Modarres Hospital (Iran, Islamic Republic of)

    2016-12-15

    Purpose: To compare the procedural time and complication rate of the coaxial technique with those of the noncoaxial technique in transperineal prostate biopsy. Materials and Methods: Transperineal prostate biopsy with coaxial (first group, n = 120) and noncoaxial (second group, n = 120) methods was performed randomly in 240 patients. The procedural time was recorded, the level of pain experienced during the procedure was assessed on a visual analogue scale (VAS), and the rate of complications was compared between the two methods. Results: The procedural time was significantly shorter in the first group (p < 0.001). In the first group, pain occurred less frequently (p = 0.002), with significantly lower VAS scores (p < 0.002). No patient had post-procedural fever. Haematuria (p = 0.029) and haemorrhage from the site of biopsy (p < 0.001) were seen less frequently in the first group. There was no significant difference in the rate of urethral haemorrhage between the two groups (p = 0.059). Urinary retention occurred less commonly in the first group (p = 0.029). No significant difference was seen in the rate of dysuria between the two groups (p = 0.078). Conclusions: Transperineal prostate biopsy using a coaxial needle is a faster and less painful method with a lower rate of complications compared with the conventional noncoaxial technique.

  1. Comparison of floods non-stationarity detection methods: an Austrian case study

    Science.gov (United States)

    Salinas, Jose Luis; Viglione, Alberto; Blöschl, Günter

    2016-04-01

    Non-stationarities in flood regimes have a huge impact on any mid- and long-term flood management strategy. In particular, the estimation of design floods is very sensitive to any kind of flood non-stationarity, as design floods are linked to a return period, a concept that can be ill-defined in a non-stationary context. It is therefore crucial, when analyzing existing flood time series, to detect and, where possible, attribute flood non-stationarities to changing hydroclimatic and land-use processes. This work presents the preliminary results of applying different non-stationarity detection methods to annual peak discharge time series from more than 400 gauging stations in Austria. The kinds of non-stationarity analyzed include trends (linear and non-linear), breakpoints, clustering beyond stochastic randomness, and detection of flood-rich/flood-poor periods. Austria presents a large variety of landscapes, elevations and climates, which allows us to interpret the spatial patterns obtained with the non-stationarity detection methods in terms of the dominant flood generation mechanisms.
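    As a hedged illustration of the trend-detection side of such an analysis (not the authors' code), a minimal Mann-Kendall test, a standard non-parametric choice for monotonic trends in annual peak series, might look like this; ties are ignored for brevity:

    ```python
    import math

    def mann_kendall(x):
        """Mann-Kendall test for a monotonic trend in a series.

        Returns the S statistic and its normal-approximation z score
        (the tie correction is omitted for brevity).
        """
        n = len(x)
        # S counts concordant minus discordant pairs over all i < j
        s = sum(
            (x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1)
            for j in range(i + 1, n)
        )
        # Variance of S under the null hypothesis of no trend (no ties)
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        if s > 0:
            z = (s - 1) / math.sqrt(var_s)
        elif s < 0:
            z = (s + 1) / math.sqrt(var_s)
        else:
            z = 0.0
        return s, z

    # A strictly increasing 10-year series: every pair is concordant, so S = 45
    s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
    ```

    At the 5% level, |z| > 1.96 would flag a trend; breakpoint and clustering detection, also mentioned above, require separate statistics.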

  2. Comparison of 2 electrophoretic methods and a wet-chemistry method in the analysis of canine lipoproteins.

    Science.gov (United States)

    Behling-Kelly, Erica

    2016-03-01

    The evaluation of lipoprotein metabolism in small animal medicine is hindered by the lack of a gold standard method and a paucity of validation data to support the use of the automated chemistry methods available in the typical veterinary clinical pathology laboratory. The physical and chemical differences between canine and human lipoproteins call into question whether the transference of some of these human methodologies to the study of canine lipoproteins is valid. Validation of methodology must go hand in hand with exploratory studies into the diagnostic or prognostic utility of measuring specific lipoproteins in veterinary medicine. The goal of this study was to compare one commercially available wet-chemistry method to manual and automated lipoprotein electrophoresis in the analysis of canine lipoproteins. Canine lipoproteins from 50 dogs were prospectively analyzed by 2 electrophoretic methods, one automated and one manual, and one wet-chemistry method. Electrophoretic methods identified a higher proportion of low-density lipoproteins than the wet-chemistry method. Automated electrophoresis occasionally failed to identify very low-density lipoproteins. Wet-chemistry methods designed for evaluation of human lipoproteins are insensitive to canine low-density lipoproteins and may not be applicable to the study of canine lipoproteins. Automated electrophoretic methods will likely require significant modifications if they are to be used in the analysis of canine lipoproteins. Studies aimed at determining the impact of a disease state on lipoproteins should thoroughly investigate the selected methodology prior to the onset of the study. © 2016 American Society for Veterinary Clinical Pathology.

  3. Emulsions: the cutting edge of development in blasting agent technology - a method for economic comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ayat, M.G.; Allen, S.G.

    1988-03-01

    This work examines the history and development of blasting agents, beginning with ANFO in the 1950s and concluding with a specific look at the blasting technology of the 1980s: the emulsion. Properties of emulsions and Emulsion Blend Explosive Systems are compared with ANFO, and a method of comparing their costs, useful for comparing any two explosives, is developed. Based on this comparison, the Emulsion Blend Explosive System is determined to be superior to ANFO on the basis of cost per unit of overburden broken. 4 refs.

  4. VerSi. A method for the quantitative comparison of repository systems

    Energy Technology Data Exchange (ETDEWEB)

    Kaempfer, T.U.; Ruebel, A.; Resele, G. [AF-Consult Switzerland Ltd, Baden (Switzerland); Moenig, J. [GRS Braunschweig (Germany)

    2015-07-01

    Decision making and design processes for radioactive waste repositories are guided by safety goals that need to be achieved. In this context, the comparison of different disposal concepts can provide relevant support to better understand the performance of the repository systems. Such a task requires a method for a traceable comparison that is as objective as possible. We present a versatile method that allows for the comparison of different disposal concepts in potentially different host rocks. The condition for the method to work is that the repository systems are defined to a comparable level including designed repository structures, disposal concepts, and engineered and geological barriers which are all based on site-specific safety requirements. The method is primarily based on quantitative analyses and probabilistic model calculations regarding the long-term safety of the repository systems under consideration. The crucial evaluation criteria for the comparison are statistical key figures of indicators that characterize the radiotoxicity flux out of the so called containment-providing rock zone (einschlusswirksamer Gebirgsbereich). The key figures account for existing uncertainties with respect to the actual site properties, the safety relevant processes, and the potential future impact of external processes on the repository system, i.e., they include scenario-, process-, and parameter-uncertainties. The method (1) leads to an evaluation of the retention and containment capacity of the repository systems and its robustness with respect to existing uncertainties as well as to potential external influences; (2) specifies the procedures for the system analyses and the calculation of the statistical key figures as well as for the comparative interpretation of the key figures; and (3) also gives recommendations and sets benchmarks for the comparative assessment of the repository systems under consideration based on the key figures and additional qualitative

  5. A Comparison of Two Methods for Recruiting Children with an Intellectual Disability

    Science.gov (United States)

    Adams, Dawn; Handley, Louise; Heald, Mary; Simkiss, Doug; Jones, Alison; Walls, Emily; Oliver, Chris

    2017-01-01

    Background: Recruitment is a widely cited barrier of representative intellectual disability research, yet it is rarely studied. This study aims to document the rates of recruiting children with intellectual disabilities using two methods and discuss the impact of such methods on sample characteristics. Methods: Questionnaire completion rates are…

  6. Burrowing as a novel voluntary strength training method for mice: A comparison of various voluntary strength or resistance exercise methods.

    Science.gov (United States)

    Roemers, P; Mazzola, P N; De Deyn, P P; Bossers, W J; van Heuvelen, M J G; van der Zee, E A

    2018-04-15

    Voluntary strength training methods for rodents are necessary to investigate the effects of strength training on cognition and the brain. However, few voluntary methods are available. The current study tested functional and muscular effects of two novel voluntary strength training methods, burrowing (digging a substrate out of a tube) and unloaded tower climbing, in male C57Bl6 mice. To compare these two novel methods with existing exercise methods, resistance running and (non-resistance) running were included. Motor coordination, grip strength and muscle fatigue were measured at baseline, halfway through and near the end of a fourteen week exercise intervention. Endurance was measured by an incremental treadmill test after twelve weeks. Both burrowing and resistance running improved forelimb grip strength as compared to controls. Running and resistance running increased endurance in the treadmill test and improved motor skills as measured by the balance beam test. Post-mortem tissue analyses revealed that running and resistance running induced Soleus muscle hypertrophy and reduced epididymal fat mass. Tower climbing elicited no functional or muscular changes. As a voluntary strength exercise method, burrowing avoids the confounding effects of stress and positive reinforcers elicited in forced strength exercise methods. Compared to voluntary resistance running, burrowing likely reduces the contribution of aerobic exercise components. Burrowing qualifies as a suitable voluntary strength training method in mice. Furthermore, resistance running shares features of strength training and endurance (aerobic) exercise and should be considered a multi-modal aerobic-strength exercise method in mice. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Methodology - PSA Regulatory handbook. Comparisons to a modern PSA study

    International Nuclear Information System (INIS)

    Bostroem, Urban; Jung, Gunnar; Flodin, Yngve

    2003-03-01

    The regulatory handbook is applicable to all types of initiating events and all operating conditions. It should be noted that it does not make the traditional subdivision of PSA into internal and external events, level 1 and level 2 PSA, or power operation and shut-down. The reason for this is that this has given the regulatory handbook a more logical structure, and that this approach underlines the integrated character of PSA when it comes to creating the plant risk profile. The regulatory handbook has been structured following the requirements on a PSA for a nuclear power plant, as this is the most demanding application. However, it is applicable also to the analysis of other nuclear installations. The purpose of the comparative review presented in this report has been, as part of a quality review to establish the PSA Handbook, to compare (parts of) the handbook and its criteria with a recent PSA analysis and to identify major discrepancies. Considerable weight has also been allocated to a review of the plant model (Risk Spectrum event trees and fault trees). The results presented in the report are not based on a complete review of the PSA in question (or of the complete PSA Handbook). Following discussions between the SKI and SwedPower, and based on the experience of the SwedPower reviewers, the following issues were chosen to be the main parts of the project: 1) General comparison according to content and transparency - Levels of ambition in PSA Handbook, PSA method description and actual PSA report. 2) Detailed comparison of: Selected component failure data - Assumptions regarding room events - CCI frequencies, realism, identification, categorisation - Taking credit for non-safety classified systems - Event tree modelling - Presentation of results 3) Fault tree model, specifically - Time frame for crediting of battery capacity - Modelling of regulators - Modelling of dependencies for room events - general quality, like how the paper documentation and the logic

  8. Reexamining the Dissolution of Spent Fuel: A Comparison of Different Methods for Calculating Rates

    International Nuclear Information System (INIS)

    Hanson, Brady D.; Stout, Ray B.

    2004-01-01

    Dissolution rates for spent fuel have typically been reported in terms of a rate normalized to the surface area of the specimen. Recent evidence has shown that neither the geometric surface area nor that measured with BET accurately predicts the effective surface area of spent fuel. Dissolution rates calculated from results obtained by flow-through tests were reexamined, comparing the cumulative releases and surface-area-normalized rates. While the initial surface area is important for comparison of different rates, it appears that normalizing to the surface area introduces unnecessary uncertainty compared to using cumulative or fractional release rates. Discrepancies in past data analyses are mitigated using this alternative method.
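    To make the contrast between the two normalizations concrete, here is a hedged sketch (the function names and units are illustrative assumptions, not taken from the paper):

    ```python
    def surface_normalized_rate(mass_released_mg, days, surface_area_m2):
        """Dissolution rate normalized to specimen surface area, mg/(m^2*day).

        Carries the uncertainty of the effective surface-area estimate.
        """
        return mass_released_mg / (surface_area_m2 * days)

    def fractional_release_rate(mass_released_mg, days, initial_mass_mg):
        """Fractional release rate (fraction of inventory per day).

        Avoids the surface-area estimate entirely, as the abstract suggests.
        """
        return mass_released_mg / (initial_mass_mg * days)
    ```

    The second form trades a per-area rate for a per-inventory rate, removing the disputed surface-area term from the calculation.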

  9. Onset Detection in Surface Electromyographic Signals: A Systematic Comparison of Methods

    Directory of Open Access Journals (Sweden)

    Claus Flachenecker

    2001-06-01

    Various methods to determine the onset of the electromyographic activity which occurs in response to a stimulus have been discussed in the literature over the last decade. Due to the stochastic character of the surface electromyogram (SEMG), onset detection is a challenging task, especially in weak SEMG responses. The performance of the onset detection methods has been tested mostly by comparing their automated onset estimations to the manually determined onsets found by well-trained SEMG examiners. But a systematic comparison between methods, which reveals the benefits and the drawbacks of each method compared to the other ones and shows the specific dependence of the detection accuracy on signal parameters, is still lacking. In this paper, several classical threshold-based approaches as well as some statistically optimized algorithms were tested on large samples of simulated SEMG data with well-known signal parameters. Rating between methods is performed by comparing their performance to that of a statistically optimal maximum likelihood estimator, which serves as the reference method. In addition, performance was evaluated on real SEMG data obtained in a reaction time experiment. Results indicate that detection behavior strongly depends on SEMG parameters, such as onset rise time, signal-to-noise ratio or background activity level. It is shown that some of the threshold-based signal-power-estimation procedures are very sensitive to signal parameters, whereas statistically optimized algorithms are generally more robust.
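    As a hedged sketch of the classical threshold-based family discussed above (the parameter names and defaults are illustrative, not taken from the paper):

    ```python
    def detect_onset(envelope, baseline_n=100, h=3.0, min_above=25):
        """Threshold-based onset detection on a rectified SEMG envelope.

        Baseline mean and SD are estimated from the first `baseline_n`
        samples; onset is the first index at which the envelope stays above
        mean + h*SD for `min_above` consecutive samples.
        Returns the onset index, or None if no onset is detected.
        """
        base = envelope[:baseline_n]
        mu = sum(base) / len(base)
        sd = (sum((v - mu) ** 2 for v in base) / len(base)) ** 0.5
        threshold = mu + h * sd
        run = 0
        for i, v in enumerate(envelope):
            run = run + 1 if v > threshold else 0
            if run >= min_above:
                return i - min_above + 1  # first sample of the supra-threshold run
        return None
    ```

    The sensitivity of such detectors to `h` and `min_above` mirrors the paper's point that detection accuracy depends strongly on signal-to-noise ratio and background activity level.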

  10. A comparison of U.S. and European methods for accident scenario identification, selection and quantification

    International Nuclear Information System (INIS)

    Cadwallader, L.C.; Djerassi, H.; Lampin, I.

    1989-10-01

    This paper presents a comparison of the varying methods used to identify and select accident-initiating events for safety analysis and probabilistic risk assessment (PRA). Initiating events are important in that they define the extent of a given safety analysis or PRA. Comprehensiveness in the identification and selection of initiating events is necessary to ensure that a thorough analysis is being performed. While total completeness can never be realized, inclusion of all safety-significant events can be attained. The European approach to initiating event identification and selection arises from within a newly developed safety analysis methodology framework. This is a functional approach, with accident initiators based on events that will cause a system or facility loss of function. The US method divides accident initiators into two groups, internal and external events. Since traditional US PRA techniques are applied to fusion facilities, the recommended PRA-based approach is a review of historical safety documents coupled with a facility-level Master Logic Diagram. The US and European methods are described, and both are applied to a proposed International Thermonuclear Experimental Reactor (ITER) Magnet System in a sample problem. Contrasts between the US and European methods are discussed. Within their respective frameworks, each method can provide the comprehensiveness of safety-significant events needed for a thorough analysis. 4 refs., 8 figs., 11 tabs

  11. Robustness of phase retrieval methods in x-ray phase contrast imaging: A comparison

    International Nuclear Information System (INIS)

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2011-01-01

    Purpose: The robustness of the phase retrieval methods is of critical importance for limiting and reducing radiation doses involved in x-ray phase contrast imaging. This work compares the robustness of two phase retrieval methods by analyzing the phase maps retrieved from the experimental images of a phantom. Methods: Two phase retrieval methods were compared. One method is based on the transport of intensity equation (TIE) for phase contrast projections; the TIE-based method is the most commonly used method for phase retrieval in the literature. The other is the recently developed attenuation-partition based (AP-based) phase retrieval method. The authors applied these two methods to experimental projection images of an air-bubble wrap phantom for retrieving the phase map of the bubble wrap. The retrieved phase maps obtained by using the two methods are compared. Results: In the wrap's phase map retrieved by using the TIE-based method, no bubble is recognizable; hence, this method failed completely for phase retrieval from these bubble wrap images. Even with the help of Tikhonov regularization, the bubbles are still hardly visible and buried in the cluttered background of the retrieved phase map. The retrieved phase values with this method are grossly erroneous. In contrast, in the wrap's phase map retrieved by using the AP-based method, the bubbles are clearly recovered. The retrieved phase values with the AP-based method are reasonably close to the estimate based on the thickness-based measurement. The authors traced these stark performance differences of the two methods to the different techniques they employ to deal with the singularity problem involved in the phase retrievals. Conclusions: This comparison shows that the conventional TIE-based phase retrieval method, regardless of whether Tikhonov regularization is used, is unstable against the noise in the wrap's projection images, while the AP-based phase retrieval method is shown in these

  12. Estimation of potential evapotranspiration of a coastal savannah environment; comparison of methods

    International Nuclear Information System (INIS)

    Asare, D.K.; Ayeh, E.O.; Amenorpe, G.; Banini, G.K.

    2011-01-01

    Six potential evapotranspiration models, namely Penman-Monteith, Hargreaves-Samani, Priestley-Taylor, IRMAK1, IRMAK2 and TURC, were used to estimate daily PET values at Atomic-Kwabenya in the coastal savannah environment of Ghana for the year 2005. The study compared the PET values generated by the six models and identified which ones compared favourably with the Penman-Monteith model, the recommended standard method for estimating PET. Cross-comparison analysis showed that only the daily PET estimates of the Hargreaves-Samani model correlated reasonably (r = 0.82) with estimates by the Penman-Monteith model. Additionally, PET values by the Priestley-Taylor and TURC models were highly correlated (r = 0.99), as were those generated by the IRMAK2 and TURC models (r = 0.96). Statistical analysis, based on pair comparison of means, showed that daily PET estimates of the Penman-Monteith model were not different from those of the Priestley-Taylor model for the Kwabenya-Atomic area located in the coastal savannah environment of Ghana. The Priestley-Taylor model can therefore be used, in place of the Penman-Monteith model, to estimate daily PET for the Atomic-Kwabenya area of the coastal savannah environment of Ghana. The Hargreaves-Samani model can also be used to estimate PET for the study area because its PET estimates correlated reasonably with those of the Penman-Monteith model (r = 0.82) and it requires only air temperature measurements as inputs. (au)
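    For illustration, the Hargreaves-Samani model singled out above is simple enough to sketch. This uses the common textbook formulation; the coefficient and units below are assumptions as far as this study's exact implementation goes:

    ```python
    def hargreaves_samani(t_mean, t_max, t_min, ra_mm_day):
        """Hargreaves-Samani reference evapotranspiration (mm/day).

        t_mean, t_max, t_min: daily air temperatures (deg C);
        ra_mm_day: extraterrestrial radiation expressed as equivalent
        evaporation (mm/day), typically taken from tables by latitude.
        """
        if t_max < t_min:
            raise ValueError("t_max must not be below t_min")
        return 0.0023 * ra_mm_day * (t_mean + 17.8) * (t_max - t_min) ** 0.5
    ```

    Since extraterrestrial radiation is tabulated rather than measured, only air temperature observations are needed, which is the practical advantage the abstract notes.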

  13. A comparison of two methods of logMAR visual acuity data scoring for statistical analysis

    Directory of Open Access Journals (Sweden)

    O. A. Oduntan

    2009-12-01

    The purpose of this study was to compare two methods of logMAR visual acuity (VA) scoring. The two methods are referred to as letter scoring (method 1) and line scoring (method 2). The two methods were applied to VA data obtained from one hundred and forty (N = 140) children with oculocutaneous albinism. Descriptive, correlation and regression statistics were then used to analyze the data. Also, where applicable, the Bland and Altman analysis was used to compare sets of data from the two methods. The right and left eye data were included in the study, but because the findings were similar in both eyes, only the results for the right eyes are presented in this paper. For method 1, the mean unaided VA (mean UAOD1) was 0.39 ± 0.15 logMAR and the mean aided VA (mean ADOD1) was 0.50 ± 0.16 logMAR. For method 2, the mean unaided VA (mean UAOD2) was 0.71 ± 0.15 logMAR, while the mean aided VA (mean ADOD2) was 0.60 ± 0.16 logMAR. The range and mean values of the improvement in VA for both methods were the same. The unaided VAs (UAOD1, UAOD2) and aided VAs (ADOD1, ADOD2) for methods 1 and 2 correlated negatively (unaided: r = –1, p < 0.05; aided: r = –1, p < 0.05). The improvements in VA (the differences between the unaided and aided VA values, DOD1 and DOD2) were positively correlated (r = +1, p < 0.05). The Bland and Altman analyses showed that the VA improvements (unaided minus aided VA values, DOD1 and DOD2) were similar for the two methods. Findings indicated that only the improvement in VA can be compared when different scoring methods are used. Therefore, the scoring method used in any VA research project should be stated in the publication so that appropriate comparisons can be made by other researchers.
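    A hedged sketch of the two scoring conventions, assuming the usual logMAR chart layout of 5 letters per line, 0.1 logMAR per line and 0.02 logMAR per letter; the function names and the 1.0 logMAR top line are illustrative assumptions, not the paper's chart:

    ```python
    def line_score(full_lines_read, top_logmar=1.0):
        """Line scoring: logMAR of the last line read in full (0.1 per line)."""
        return top_logmar - 0.1 * full_lines_read

    def letter_score(letters_read, top_logmar=1.0):
        """Letter scoring: 0.02 logMAR of credit per correct letter."""
        return top_logmar - 0.02 * letters_read
    ```

    The two scales agree when whole lines are read (25 letters equals 5 lines) but diverge for partially read lines, which is consistent with the finding that only the improvement in VA, not the raw scores, is directly comparable across methods.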

  14. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, N.; Bockting, S.; Hiemstra, D.

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  15. A comparison of methods for the assessment of postural load and duration of computer use

    NARCIS (Netherlands)

    Heinrich, J.; Blatter, B.M.; Bongers, P.M.

    2004-01-01

    Aim: To compare two different methods for assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring

  16. A comparative study of three different gene expression analysis methods.

    Science.gov (United States)

    Choe, Jae Young; Han, Hyung Soo; Lee, Seon Duk; Lee, Hanna; Lee, Dong Eun; Ahn, Jae Yun; Ryoo, Hyun Wook; Seo, Kang Suk; Kim, Jong Kun

    2017-12-04

    TNF-α regulates immune cells and acts as an endogenous pyrogen. Reverse transcription polymerase chain reaction (RT-PCR) is one of the most commonly used methods for gene expression analysis. Among the alternatives to PCR, loop-mediated isothermal amplification (LAMP) shows good potential in terms of specificity and sensitivity. However, few studies have compared RT-PCR and LAMP for human gene expression analysis. Therefore, in the present study, we compared one-step RT-PCR, two-step RT-LAMP and one-step RT-LAMP for human gene expression analysis. We compared the three gene expression analysis methods using the human TNF-α gene as a biomarker from peripheral blood cells. Total RNA from the three selected febrile patients was subjected to the three different methods of gene expression analysis. In the comparison of the three methods, the detection limit of both one-step RT-PCR and one-step RT-LAMP was the same, while that of two-step RT-LAMP was inferior. One-step RT-LAMP takes less time, and the experimental result is easy to determine. One-step RT-LAMP is a potentially useful and complementary tool that is fast and reasonably sensitive. In addition, one-step RT-LAMP could be useful in environments lacking specialized equipment or expertise.

  17. Comparison study between traditional and finite element methods for slopes under heavy rainfall

    Directory of Open Access Journals (Sweden)

    M. Rabie

    2014-08-01

    Moreover, slope stability under rainfall and infiltration is analyzed. In particular, two kinds of infiltration (saturated and unsaturated) are considered. Many slopes become saturated during periods of intense rainfall or snowmelt, with the water table rising to the ground surface and water flowing essentially parallel to the direction of the slope. The influence of the change in shear strength, density, pore-water pressure and seepage force in soil slices on the slope stability is explained. Finally, it is found that classical limit equilibrium methods are highly conservative compared to the finite element approach. For assessing the factor of safety of a slope using the latter technique, no assumption needs to be made in advance about the shape or location of the failure surface, slice side forces or their directions. This document outlines the capabilities of the finite element method in the analysis of slope stability problems.
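    As a hedged example of the limit-equilibrium side of the comparison, the classical infinite-slope factor of safety with seepage parallel to the slope can be sketched as follows (the symbols, defaults and function name are illustrative assumptions, not from the paper):

    ```python
    import math

    def infinite_slope_fs(c_kpa, phi_deg, gamma, z, beta_deg,
                          gamma_w=9.81, saturated=False):
        """Infinite-slope factor of safety (limit equilibrium).

        c_kpa: effective cohesion (kPa); phi_deg: effective friction angle;
        gamma: soil unit weight (kN/m^3); z: slip-surface depth (m);
        beta_deg: slope angle. saturated=True assumes the water table at
        the ground surface with seepage parallel to the slope.
        """
        beta, phi = math.radians(beta_deg), math.radians(phi_deg)
        # Buoyant unit weight governs frictional resistance when saturated
        gamma_eff = (gamma - gamma_w) if saturated else gamma
        resisting = c_kpa + gamma_eff * z * math.cos(beta) ** 2 * math.tan(phi)
        driving = gamma * z * math.sin(beta) * math.cos(beta)
        return resisting / driving

    # Cohesionless dry slope with phi = beta: FS reduces to tan(phi)/tan(beta) = 1
    fs_dry = infinite_slope_fs(0.0, 30.0, 18.0, 2.0, 30.0)
    fs_wet = infinite_slope_fs(0.0, 30.0, 18.0, 2.0, 30.0, saturated=True)
    ```

    The drop from fs_dry to fs_wet reproduces the destabilizing effect of pore-water pressure described above; a finite element analysis instead reduces shear strength until failure, with no slip surface assumed in advance.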

  18. A Comparison of underground opening support design methods in jointed rock mass

    International Nuclear Information System (INIS)

    Gharavi, M.; Shafiezadeh, N.

    2008-01-01

    It is of great importance to consider the long-term stability of the rock mass around the openings of underground structures during their design, construction and operation. In this context, three methods, namely empirical, analytical and numerical, have been applied to design and analyze the stability of underground infrastructure at the Siah Bisheh Pumping Storage Hydro-Electric Power Project in Iran. The geological and geotechnical data utilized in this article were selected based on the preliminary studies of this project. In the initial stages of design, it was recommended that two methods of rock mass classification, Q and rock mass rating, be utilized for the support system of the underground cavern. Next, based on the structural instability, the support system was adjusted by the analytical method. The performance of the recommended support system was reviewed by comparing the ground response curve and rock support interactions with the surrounding rock mass, using FEST03 software. Moreover, for further assessment of the realistic rock mass behavior and support system, numerical modeling was performed utilizing FEST03 software. Finally, both the analytical and numerical methods were compared, obtaining satisfactory results that complement each other.

  19. Burrowing as a novel voluntary strength training method for mice : A comparison of various voluntary strength or resistance exercise methods

    NARCIS (Netherlands)

    Roemers, P; Mazzola, P N; De Deyn, P P; Bossers, W J; van Heuvelen, M J G; van der Zee, E A

    2018-01-01

    BACKGROUND: Voluntary strength training methods for rodents are necessary to investigate the effects of strength training on cognition and the brain. However, few voluntary methods are available. NEW METHOD: The current study tested functional and muscular effects of two novel voluntary strength

  20. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.
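    The figure of merit quoted above, RMSEP, is simple to state; a minimal sketch (illustrative only, not the authors' code):

    ```python
    def rmsep(predicted, observed):
        """Root mean square error of prediction over a held-out test set."""
        if len(predicted) != len(observed):
            raise ValueError("prediction and observation lengths differ")
        n = len(predicted)
        return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5
    ```

    Lower RMSEP on the same test set, in the same units as the reference concentrations (here mg/kg), is what allows the three variable selection approaches to be ranked.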

  1. A simple method for the comparison of commercially available ATP hygiene-monitoring systems.

    Science.gov (United States)

    Colquhoun, K O; Timms, S; Fricker, C R

    1998-04-01

    The purpose of this study was to evaluate a methodology which could easily be used in any test laboratory in a uniform and consistent way for determining the sensitivity and reproducibility of results obtained with three ATP hygiene-monitoring systems. The test protocol discussed here allows such comparison to be made, thereby establishing a method of benchmarking both new systems and developments of existing systems. The sensitivity of the Charm LUMINOMETER K, PocketSwab (Charm Sciences) was found to be between 0.4 and 4.0 nmol of ATP, with poor reproducibility at the 40.0 nmol level (CV, 35%). The sensitivities of the IDEXX LIGHTNING system and the Biotrace UNILITE Xcel were both between 0.04 and 0.4 nmol, with coefficients of variation (CVs) of between 9% at 0.04 nmol and 10% at 0.4 nmol for the IDEXX system and 17% at 0.04 nmol and 21% at 0.4 nmol for the Biotrace system. The three systems were tested with a range of dilutions of different food residues: orange juice, raw milk, and ground beef slurry. All three test systems allowed detection of orange juice and raw milk at dilutions of 1:1,000, although the CV of results from the Charm system (54 and 74%, respectively) was poor at this dilution for both residues. The sensitivity of the test systems was poorer for ground beef slurry than it was for orange juice and raw milk. Both the Biotrace and IDEXX systems were able to detect a 1:100 dilution of beef slurry (with CVs of 17 and 10%, respectively), whilst at this dilution results from the Charm system had a CV of 55%. It was possible, using the method described in this paper, to rank in order of sensitivity and reproducibility the three single-shot ATP hygiene-monitoring systems investigated, with the IDEXX LIGHTNING being the best, followed by the Biotrace UNILITE Xcel and then the Charm LUMINOMETER K, PocketSwab.
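    The benchmarking above leans on the percent coefficient of variation; a minimal sketch, using the sample (n-1) standard deviation (an assumption, since the paper does not state which form it used):

    ```python
    def coefficient_of_variation(values):
        """Percent coefficient of variation: 100 * SD / mean (sample SD)."""
        n = len(values)
        if n < 2:
            raise ValueError("need at least two replicates")
        mean = sum(values) / n
        sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
        return 100.0 * sd / mean
    ```

    Applied to replicate luminometer readings at a fixed ATP level or dilution, this yields the CV figures used to compare the three systems' reproducibility.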

  2. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    Science.gov (United States)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events it is difficult for traditional seismic instruments to produce accurate and reliable displacements, because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. A GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, making the simultaneous, real-time analysis of GPS data from hundreds or thousands of ground stations particularly difficult. In recent years, several single-receiver approaches for real-time GPS seismology, which overcome the reference station problem of the relative positioning approach, have been successfully developed and applied. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to

  3. Accessibility of long-term family planning methods: a comparison study between Output Based Approach (OBA) clients verses non-OBA clients in the voucher supported facilities in Kenya.

    Science.gov (United States)

    Oyugi, Boniface; Kioko, Urbanus; Kaboro, Stephen Mbugua; Gikonyo, Shadrack; Okumu, Clarice; Ogola-Munene, Sarah; Kalsi, Shaminder; Thiani, Simon; Korir, Julius; Odundo, Paul; Baltazaar, Billy; Ranji, Moses; Muraguri, Nicholas; Nzioka, Charles

    2017-03-27

    The study seeks to evaluate the difference in access to long-term family planning (LTFP) methods between output-based approach (OBA) and non-OBA clients within OBA facilities. The study utilises a quasi-experimental design. A two-tailed unpaired t-test with unequal variance is used to test for significant variation in mean access. Difference-in-differences (DiD) estimates of the program effect on long-term family planning methods are computed to estimate the causal effect by exploiting group-level differences on two or more dimensions. The study also uses a linear regression model to evaluate the predictors of choice of long-term family planning methods. Data were analysed using SPSS version 17. All the methods (bilateral tubal ligation (BTL), vasectomy, intrauterine contraceptive device (IUCD), implants, and total or combined long-term family planning methods) showed a statistically significant difference in mean utilization between OBA and non-OBA clients. The difference-in-differences estimates reveal that the difference in access between OBA and non-OBA clients can significantly be attributed to the implementation of the OBA program for the intrauterine contraceptive device (p = 0.002), implants (p = 0.004), and total or combined long-term family planning methods (p = 0.001). The county of residence is a significant determinant of access to all long-term family planning methods except vasectomy, and the year of registration is a significant determinant of access, especially for implants and total or combined long-term family planning methods. Management level and facility type do not play a role in determining the type of long-term family planning method preferred; however, non-governmental organisation (NGO) management influences the choice of all methods (bilateral tubal ligation, intrauterine contraceptive device, implants, and combined methods) except vasectomy. The adjusted R2 value, representing the percentage of
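    The two-by-two difference-in-differences contrast used in studies like this one can be sketched on synthetic data. The group sizes, counts, and effect size below are illustrative assumptions, not figures from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated counts of long-term method uptake per facility-month.
    # The OBA group receives a program effect of +3 in the post period.
    pre_oba  = rng.poisson(10, 50)
    post_oba = rng.poisson(13, 50)
    pre_non  = rng.poisson(8, 50)
    post_non = rng.poisson(8, 50)

    # DiD: the OBA pre-to-post change minus the non-OBA pre-to-post change,
    # which nets out any time trend common to both groups.
    did = (post_oba.mean() - pre_oba.mean()) - (post_non.mean() - pre_non.mean())
    print(round(did, 2))  # recovers roughly the simulated effect of 3
    ```

    In the study itself the DiD contrasts sit inside regression models with covariates; this stripped-down version shows only the core contrast.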

  4. Modality comparison for small animal radiotherapy: A simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Bazalova, Magdalena, E-mail: bazalova@stanford.edu; Nelson, Geoff; Noll, John M.; Graves, Edward E. [Department of Radiation Oncology, Molecular Imaging Program at Stanford, Stanford University, Stanford, California 94305 (United States)

    2014-01-15

    Purpose: Small animal radiation therapy has advanced significantly in recent years. Whereas in the past dose was delivered using a single beam and a lead shield for sparing of healthy tissue, conformal doses can be now delivered using more complex dedicated small animal radiotherapy systems with image guidance. The goal of this paper is to investigate dose distributions for three small animal radiation treatment modalities. Methods: This paper presents a comparison of dose distributions generated by the three approaches—a single-field irradiator with a 200 kV beam and no image guidance, a small animal image-guided conformal system based on a modified microCT scanner with a 120 kV beam developed at Stanford University, and a dedicated conformal system, SARRP, using a 220 kV beam developed at Johns Hopkins University. The authors present a comparison of treatment plans for the three modalities using two cases: a mouse with a subcutaneous tumor and a mouse with a spontaneous lung tumor. A 5 Gy target dose was calculated using the EGSnrc Monte Carlo codes. Results: All treatment modalities generated similar dose distributions for the subcutaneous tumor case, with the highest mean dose to the ipsilateral lung and bones in the single-field plan (0.4 and 0.4 Gy) compared to the microCT (0.1 and 0.2 Gy) and SARRP (0.1 and 0.3 Gy) plans. The lung case demonstrated that due to the nine-beam arrangements in the conformal plans, the mean doses to the ipsilateral lung, spinal cord, and bones were significantly lower in the microCT plan (2.0, 0.4, and 1.9 Gy) and the SARRP plan (1.5, 0.5, and 1.8 Gy) than in single-field irradiator plan (4.5, 3.8, and 3.3 Gy). Similarly, the mean doses to the contralateral lung and the heart were lowest in the microCT plan (1.5 and 2.0 Gy), followed by the SARRP plan (1.7 and 2.2 Gy), and they were highest in the single-field plan (2.5 and 2.4 Gy). For both cases, dose uniformity was greatest in the single-field irradiator plan followed by

  5. A Comparison of Methods Used to Define the Phenolic Content and Antioxidant Activity of Croatian Wines

    Directory of Open Access Journals (Sweden)

    Sanja Martinez

    2005-01-01

    Full Text Available Concentrations of phenolic antioxidants and antioxidant activities were determined for three different vintages of red varietal Plavac mali wines (Grgich), white varietal Pošip wines (Grgich) and white varietal Žlahtina wines (Gršković). All three cultivars (Vitis vinifera L.) are widely grown in vineyards along the Croatian coast. Two different tests, the spectrophotometric Folin-Ciocalteu test and redox derivative potentiometric titration with electrogenerated chlorine, were used to quantify phenolic antioxidants and express them in gallic acid equivalents. The rankings of the wines obtained by the two methods, ordered by increasing phenolic content, were comparable. Among all the tested wines, Plavac mali of the 2003 vintage showed the highest phenol content, ~5 g/L. As expected, due to the lack of anthocyanins and other pigments present in red wines, all six white wines showed approximately ten times lower phenolic levels than the red wines, averaging between 190–380 mg/L. This study demonstrates the utility of quick and reliable analytical techniques, spectrophotometry and derivative potentiometric titration, for the quantification of wine phenolics. The free radical scavenging ability of the same set of wines was evaluated with the Brand-Williams assay. The results show, on average, an eight times higher free radical scavenging ability for red wines. A slight decrease in the free radical scavenging ability of the older vintage white wines was also observed, while the antioxidant activities of the older vintage red wines (Plavac mali) were slightly higher, due to the formation of condensed tannins with time.

  6. On solutions of stochastic oscillatory quadratic nonlinear equations using different techniques, a comparison study

    International Nuclear Information System (INIS)

    El-Tawil, M A; Al-Jihany, A S

    2008-01-01

    In this paper, nonlinear oscillators with quadratic nonlinearity and stochastic inputs are considered. Different methods are used to obtain first-order approximations, namely the WHEP technique, the perturbation method, Picard approximations, the Adomian decomposition method and the homotopy perturbation method (HPM). Some statistical moments are computed for the different methods using Mathematica 5. Comparisons are illustrated through figures for different case studies.

  7. Pulsational stabilities of a star in thermal imbalance: comparison between the methods

    International Nuclear Information System (INIS)

    Vemury, S.K.

    1978-01-01

    The stability coefficients for quasi-adiabatic pulsations of a model in thermal imbalance are evaluated using the dynamical energy (DE) approach, the total (kinetic plus potential) energy (TE) approach, and the small amplitude (SA) approaches. From a comparison among the methods, it is found that two distinct stability coefficients can exist under conditions of thermal imbalance, as pointed out by Demaret. It is shown that both TE approaches lead to one stability coefficient, while both SA approaches lead to another. The coefficient obtained through the energy approaches is identified as the one which determines the stability of the velocity amplitudes. For a prenova model with a thin hydrogen-burning shell in thermal imbalance, several radial modes are found to be unstable both for radial displacements and for velocity amplitudes. However, a new kind of pulsational instability also appears: while the radial displacements are unstable, the velocity amplitudes may be stabilized through the thermal imbalance terms.

  8. A comparison of land use change accounting methods: seeking common grounds for key modeling choices in biofuel assessments

    DEFF Research Database (Denmark)

    de Bikuna Salinas, Koldo Saez; Hamelin, Lorie; Hauschild, Michael Zwicky

    2018-01-01

    Five currently used methods to account for the global warming (GW) impact of the induced land-use change (LUC) greenhouse gas (GHG) emissions have been applied to four biofuel case studies. Two of the investigated methods attempt to avoid the need of considering a definite occupation -thus...... amortization period by considering ongoing LUC trends as a dynamic baseline. This leads to the accounting of a small fraction (0.8%) of the related emissions from the assessed LUC, thus their validity is disputed. The comparison of methods and contrasting case studies illustrated the need of clearly...... distinguishing between the different time horizons involved in life cycle assessments (LCA) of land-demanding products like biofuels. Absent in ISO standards, and giving rise to several confusions, definitions for the following time horizons have been proposed: technological scope, inventory model, impact...

  9. Comparison of fine particle measurements from a direct-reading instrument and a gravimetric sampling method.

    Science.gov (United States)

    Kim, Jee Young; Magari, Shannon R; Herrick, Robert F; Smith, Thomas J; Christiani, David C

    2004-11-01

    Particulate air pollution, specifically the fine particle fraction (PM2.5), has been associated with increased cardiopulmonary morbidity and mortality in general population studies. Occupational exposure to fine particulate matter can exceed ambient levels by a large factor. Due to increased interest in the health effects of particulate matter, many particle sampling methods have been developed. In this study, two such measurement methods were used simultaneously and compared. PM2.5 was sampled using a filter-based gravimetric sampling method and a direct-reading instrument, the TSI Inc. model 8520 DUSTTRAK aerosol monitor. Both sampling methods were used to determine the PM2.5 exposure of a group of boilermakers exposed to welding fumes and residual fuel oil ash. The geometric mean PM2.5 concentration was 0.30 mg/m3 (GSD 3.25) and 0.31 mg/m3 (GSD 2.90) from the DUSTTRAK and the gravimetric method, respectively. The Spearman rank correlation coefficient for the gravimetric and DUSTTRAK PM2.5 concentrations was 0.68. Linear regression models indicated that loge DUSTTRAK PM2.5 concentrations significantly predicted loge gravimetric PM2.5 concentrations; this relationship was found to be modified by surrogate measures for seasonal variation and type of aerosol. PM2.5 measurements from the DUSTTRAK are well correlated with and highly predictive of measurements from the gravimetric sampling method for the aerosols in these work environments. However, results from this study suggest that aerosol particle characteristics may affect the relationship between the gravimetric and DUSTTRAK PM2.5 measurements. Recalibration of the DUSTTRAK for the specific aerosol, as recommended by the manufacturer, may be necessary to produce valid measures of airborne particulate matter.
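    The core of such an instrument comparison (rank correlation plus a log-log regression between the two methods) can be sketched with numpy alone; the simulated concentrations and noise level are assumptions, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic paired measurements: a reference (gravimetric) value and a
    # direct-reading value that tracks it with multiplicative noise.
    gravimetric = rng.lognormal(mean=-1.2, sigma=1.1, size=100)           # mg/m^3
    direct_read = gravimetric * rng.lognormal(mean=0.0, sigma=0.4, size=100)

    def spearman(x, y):
        """Spearman rank correlation (no ties expected for continuous data)."""
        rx = x.argsort().argsort().astype(float)
        ry = y.argsort().argsort().astype(float)
        return np.corrcoef(rx, ry)[0, 1]

    rho = spearman(direct_read, gravimetric)

    # Log-log regression: does loge(direct-reading) predict loge(gravimetric)?
    slope, intercept = np.polyfit(np.log(direct_read), np.log(gravimetric), 1)
    print(f"rho = {rho:.2f}, slope = {slope:.2f}")
    ```

    Effect modification of the kind the study reports would be tested by adding interaction terms (e.g. season x loge concentration) to the regression.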

  10. Results of a comparison study of advanced reactors

    International Nuclear Information System (INIS)

    Bueno de Mesquita, K.G.; Gout, W.; Heil, J.A.; Tanke, R.H.J.; Geevers, F.

    1991-06-01

    The PINK programme is a 4-year programme of five parties involved in nuclear energy in the Netherlands: GKN (operator of the Dodewaard plant), KEMA (research institute of the Netherlands utilities), ECN (Netherlands Energy Research Foundation), NUCON (engineering and contracting company) and IRI (Interfaculty Reactor Institute of the Delft University of Technology), to coordinate their efforts to intensify the nuclear competence of the industry, the utilities and the research and engineering companies. This programme is sponsored by the Ministry of Economic Affairs. The PINK programme consists of five parts. This report pertains to part 1 of the programme: a comparison study of advanced reactors covering the four so-called second-stage designs SBWR, AP600, SIR and CANDU, which, compared to the first-stage reactor designs, feature increased use of passive safety systems and simplification. The objective of the current study is to compare these advanced reactor designs in order to provide comprehensive information for the PINK steering committee that is useful in the selection process of a design for further study and development work. In ch. 2 the main features of the four reactors are highlighted. In ch. 3 the most important safety features and the behaviour of the four reactors under accident situations are compared. Passive safety systems are identified and forgivingness is described and compared. Results of the preliminary probabilistic safety analysis are presented. Ch. 4 deals with the proven technology of the four concepts, ch. 5 with the Netherlands requirements, ch. 6 with commercial aspects, and ch. 7 with the fuel cycle and radioactive waste produced. In ch. 8 the costs are compared and finally in ch. 9 conclusions are drawn and recommendations are made. (author). 13 figs

  11. A comparison of the microstructures and electrochemical capacitive properties of 2 graphenes prepared by arc discharge method and chemical method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, H.; Yang, Y. [Research Inst. of Chemical Defense, Beijing (China); Univ. of Science and Technology, Beijing (China); Cao, G.; Xu, B. [Research Inst. of Chemical Defense, Beijing (China)

    2010-07-01

    In this study, 2 kinds of graphene materials were prepared using both arc discharge and chemical methods. The pore structures and electrochemical capacitive properties of the materials were investigated. A mesopore structure was obtained for the graphene prepared using the arc discharge method, with a capacitance of 12.9 F/g and a high rate capability when used in electrochemical applications. The graphene prepared with the chemical method demonstrated a more highly developed micropore structure and capacitances greater than 70 F/g. However, rate performance for the graphene was normal. 2 figs.

  12. A COMPARISON OF METHODS FOR DETERMINING THE MOLECULAR CONTENT OF MODEL GALAXIES

    International Nuclear Information System (INIS)

    Krumholz, Mark R.; Gnedin, Nickolay Y.

    2011-01-01

    Recent observations indicate that star formation occurs only in the molecular phase of a galaxy's interstellar medium. A realistic treatment of star formation in simulations and analytic models of galaxies therefore requires that one determine where the transition from atomic to molecular gas occurs. In this paper, we compare two methods for making this determination in cosmological simulations where the internal structures of molecular clouds are unresolved: a complex time-dependent chemistry network coupled to a radiative transfer calculation of the dissociating ultraviolet (UV) radiation field and a simple time-independent analytic approximation. We show that these two methods produce excellent agreement at all metallicities ≳ 10^-2 of the Milky Way value across a very wide range of UV fields. At lower metallicities the agreement is worse, likely because time-dependent effects become important; however, there are no observational calibrations of molecular gas content at such low metallicities, so it is unclear if either method is accurate. The comparison suggests that, in many but not all applications, the analytic approximation provides a viable and nearly cost-free alternative to full time-dependent chemistry and radiative transfer.

  13. Dealing with missing data in a multi-question depression scale: a comparison of imputation methods

    Directory of Open Access Journals (Sweden)

    Stuart Heather

    2006-12-01

    Full Text Available Abstract Background Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing different strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods 1580 participants from a surgical outcomes study completed the SDS. The SDS is a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated, and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing-completely-at-random simulation). Additionally, missing-at-random and missing-not-at-random simulations were completed. Six imputation methods were then considered: (1) multiple imputation, (2) single regression, (3) individual mean, (4) overall mean, (5) participant's preceding response, and (6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified and the Kappa statistic were also calculated. Results When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values, with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. The individual mean and single regression methods produced Kappas in the 'substantial agreement' range.
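    Two of the simpler rules compared above (individual mean vs. random selection) can be sketched in a small MCAR simulation; the sample size, response distribution, and cutoff handling here are illustrative assumptions, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, items = 1000, 20
    complete = rng.integers(1, 5, size=(n, items)).astype(float)  # items scored 1-4
    true_case = complete.sum(axis=1) > 40                          # cutoff > 40

    # Missing completely at random: delete 10% of responses.
    observed = complete.copy()
    observed[rng.random((n, items)) < 0.10] = np.nan

    def impute_individual_mean(x):
        """Fill each respondent's missing items with their own item mean."""
        filled = x.copy()
        rows, cols = np.where(np.isnan(filled))
        filled[rows, cols] = np.nanmean(x, axis=1)[rows]
        return filled

    def impute_random(x, rng):
        """Fill missing items with a uniform random response from 1 to 4."""
        filled = x.copy()
        miss = np.isnan(filled)
        filled[miss] = rng.integers(1, 5, size=miss.sum())
        return filled

    agreement = {}
    for name, filled in [("individual mean", impute_individual_mean(observed)),
                         ("random", impute_random(observed, rng))]:
        agreement[name] = ((filled.sum(axis=1) > 40) == true_case).mean()
        print(f"{name}: {agreement[name]:.1%} agree with the complete data")
    ```

    The study's Kappa statistic additionally corrects this raw agreement for chance; multiple imputation and regression imputation require a model of the inter-item structure and are not shown here.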

  14. A comparison of two microscale laboratory reporting methods in a secondary chemistry classroom

    Science.gov (United States)

    Martinez, Lance Michael

    This study attempted to determine if there was a difference between the laboratory achievement of students who used a modified reporting method and those who used traditional laboratory reporting. The study also determined the relationships between laboratory performance scores and the independent variables score on the Group Assessment of Logical Thinking (GALT) test, chronological age in months, gender, and ethnicity for each of the treatment groups. The study was conducted using 113 high school students who were enrolled in first-year general chemistry classes at Pueblo South High School in Colorado. The research design used was the quasi-experimental Nonequivalent Control Group Design. The statistical treatment consisted of the Multiple Regression Analysis and the Analysis of Covariance. Based on the GALT, students in the two groups were generally in the concrete and transitional stages of the Piagetian cognitive levels. The findings of the study revealed that the traditional and the modified methods of laboratory reporting did not have any effect on the laboratory performance outcome of the subjects. However, the students who used the traditional method of reporting showed a higher laboratory performance score when evaluation was conducted using the New Standards rubric recommended by the state. Multiple Regression Analysis revealed that there was a significant relationship between the criterion variable student laboratory performance outcome of individuals who employed traditional laboratory reporting methods and the composite set of predictor variables. On the contrary, there was no significant relationship between the criterion variable student laboratory performance outcome of individuals who employed modified laboratory reporting methods and the composite set of predictor variables.

  15. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    Science.gov (United States)

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…

  16. The Comparison of Matching Methods Using Different Measures of Balance: Benefits and Risks Exemplified within a Study to Evaluate the Effects of German Disease Management Programs on Long-Term Outcomes of Patients with Type 2 Diabetes.

    Science.gov (United States)

    Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje

    2016-10-01

    To present a case study on how to compare various matching methods applying different measures of balance, and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different conclusions about the performance of the applied matching methods. Exact matching methods performed well across all measures of balance but resulted in the exclusion of many observations, changing the baseline characteristics of the study sample and also the effect estimate of the DMPDM2. All PS-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted model instead of a logistic regression model showed slightly better performance for balance diagnostics that take into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of different effect estimates found with different matching methods. © Health Research and Educational Trust.
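    A balance diagnostic of the kind discussed above (absolute standardized mean differences before and after a 1:1 propensity-score match) can be sketched as follows; the covariates, treatment model, and greedy matching-with-replacement rule are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    x1 = rng.normal(size=n)                      # covariate 1 (e.g. age, scaled)
    x2 = rng.normal(size=n)                      # covariate 2
    # Enrolment probability depends on both covariates (logistic model).
    p = 1.0 / (1.0 + np.exp(-(0.8 * x1 + 0.5 * x2)))
    t = rng.random(n) < p                        # treatment indicator

    def smd(x, treated):
        """Absolute standardized mean difference between the two groups."""
        pooled_sd = np.sqrt((x[treated].var(ddof=1) + x[~treated].var(ddof=1)) / 2)
        return abs(x[treated].mean() - x[~treated].mean()) / pooled_sd

    before = [smd(x, t) for x in (x1, x2)]

    # Greedy 1:1 nearest-neighbour matching on the propensity score, with
    # replacement. The true score stands in for an estimated one here.
    treated_idx = np.where(t)[0]
    control_idx = np.where(~t)[0]
    matched_ctl = control_idx[np.abs(p[control_idx][None, :]
                                     - p[treated_idx][:, None]).argmin(axis=1)]

    after = []
    for x in (x1, x2):
        xm = np.concatenate([x[treated_idx], x[matched_ctl]])
        tm = np.arange(len(xm)) < len(treated_idx)
        after.append(smd(xm, tm))

    for b, a in zip(before, after):
        print(f"SMD before: {b:.3f}  after: {a:.3f}")
    ```

    A common rule of thumb treats SMD < 0.1 as acceptable balance; diagnostics sensitive to higher moments would also compare variances or full distributions.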

  17. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    Science.gov (United States)

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. To compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from cases of sepsis were analyzed using two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth and the lysis centrifugation method using saponin, with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the former method with respect to Gram-negative bacilli. In this study, the sensitivity of the lysis centrifugation method in comparison to the conventional blood culture method was 49.75%, the specificity was 98.21% and the diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, and this difference was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was highly significantly shorter (P < 0.001) than time to growth by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.

  18. Sampling for ants in different-aged spruce forests: A comparison of methods

    Czech Academy of Sciences Publication Activity Database

    Véle, A.; Holuša, J.; Frouz, Jan

    2009-01-01

    Roč. 45, č. 4 (2009), s. 301-305 ISSN 1164-5563 Institutional research plan: CEZ:AV0Z60660521 Keywords : ants * baits * methods comparison Subject RIV: EH - Ecology, Behaviour Impact factor: 1.247, year: 2009

  19. Comparison of a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography.

    Science.gov (United States)

    Stuberg, W A; Colerick, V L; Blanke, D J; Bruce, W

    1988-08-01

    The purpose of this study was to compare a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography in a gait analysis laboratory. Ten children with a diagnosis of cerebral palsy (mean age = 8.8 +/- 2.7 years) and 9 healthy children (mean age = 8.9 +/- 2.4 years) participated in the study. Stride length, walking velocity, and goniometric measurements of the hip, knee, and ankle were recorded using the two gait analysis methods. A multivariate analysis of variance was used to determine significant differences between the data collected using the two methods. Pearson product-moment correlation coefficients were determined to examine the relationship between the measurements recorded by the two methods. The consistency of performance of the subjects during walking was examined with intraclass correlation coefficients. No significant differences were found between the methods for the variables studied. Pearson product-moment correlation coefficients ranged from .79 to .95, and intraclass coefficients ranged from .89 to .97. The clinical gait analysis method was found to be a valid tool in comparison with 16-mm cinematography for the variables that were studied.

  20. A Comparison of Three Methods for Measuring Distortion in Optical Windows

    Science.gov (United States)

    Youngquist, Robert C.; Nurge, Mark A.; Skow, Miles

    2015-01-01

    It's important that imagery seen through large-area windows, such as those used on space vehicles, not be substantially distorted. Many approaches are described in the literature for measuring the distortion of an optical window, but most suffer from either poor resolution or processing difficulties. In this paper a new definition of distortion is presented, allowing accurate measurement using an optical interferometer. This new definition is shown to be equivalent to the definitions provided by the military and the standards organizations. In order to determine the advantages and disadvantages of this new approach, the distortion of an acrylic window is measured using three different methods: image comparison, moiré interferometry, and phase-shifting interferometry.

  1. A COMPARISON OF A SPECTROPHOTOMETRIC (QUERCETIN) METHOD AND AN ATOMIC-ABSORPTION METHOD FOR DETERMINATION OF TIN IN FOOD

    DEFF Research Database (Denmark)

    Engberg, Å

    1973-01-01

    Procedures for the determination of tin in food, which involve a spectrophotometric method (with the quercetin-tin complex) and an atomic-absorption method, are described. The precision of the complete methods and of the individual analytical steps required is evaluated, and the parameters...

  2. A comparison study of electrodes for neonate electrical impedance tomography

    International Nuclear Information System (INIS)

    Rahal, Mohamad; Demosthenous, Andreas; Khor, Joo Moy; Tizzard, Andrew; Bayford, Richard

    2009-01-01

    Electrical impedance tomography (EIT) is an imaging technique that has the potential to be used for studying neonate lung function. The properties of the electrodes are very important in multi-frequency EIT (MFEIT) systems, particularly for neonates, as the skin cannot be abraded to reduce contact impedance. In this work, the impedance of various clinical electrodes as a function of frequency is investigated to identify the optimum electrode type for this application. Six different types of self-adhesive electrodes commonly used in general and neonatal cardiology have been investigated. These electrodes are Ag/AgCl electrodes from the Ambu® Cardiology Blue sensors range (BR, NF and BRS), Kendall (KittyCat™ and ARBO®) and Philips 13953D electrodes. In addition, a textile electrode without gel from Textronics was tested on two subjects to allow comparison with the hydrogel-based electrodes. Two- and four-electrode measurements were made to determine the electrode-interface and tissue impedances, respectively. The measurements were made on the back of the forearm of six healthy adult volunteers without skin preparation with 2.5 cm electrode spacing. Impedance measurements were carried out using a Solartron SI 1260 impedance/gain-phase analyser with a frequency range from 10 Hz to 1 MHz. For the electrode-interface impedance, the average magnitude decreased with frequency, with an average value of 5 kΩ at 10 kHz and 337 Ω at 1 MHz; for the tissue impedance, the respective values were 987 Ω and 29 Ω. Overall, the Ambu BRS, Kendall ARBO® and Textronics textile electrodes gave the lowest electrode contact impedance at 1 MHz. Based on the results of the two-electrode measurements, simple RC models for the Ambu BRS, Kendall ARBO® and Textronics textile electrodes have been derived for MFEIT applications.

  3. A comparison of earthquake backprojection imaging methods for dense local arrays

    Science.gov (United States)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we
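
For intuition, the short-term-average/long-term-average characteristic function mentioned above can be sketched in its generic textbook form (not the exact implementation compared in the paper):

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Short-term-average / long-term-average characteristic function.
    Ratios well above 1 flag impulsive energy such as earthquake arrivals."""
    amp = np.abs(trace)
    # moving averages over short and long windows (centered)
    sta = np.convolve(amp, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(amp, np.ones(n_lta) / n_lta, mode="same")
    return sta / np.maximum(lta, 1e-12)  # guard against division by zero
```

Backprojection would then stack such characteristic functions (rather than raw waveforms) along travel-time moveouts, which is why polarity errors disappear but magnitude information is lost.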

  4. Geocoding rural addresses in a community contaminated by PFOA: a comparison of methods

    Directory of Open Access Journals (Sweden)

    Gallagher Lisa G

    2010-04-01

    Full Text Available Abstract Background Location is often an important component of exposure assessment, and positional errors in geocoding may result in exposure misclassification. In rural areas, successful geocoding to a street address is limited by rural route boxes. Communities have assigned physical street addresses to rural route boxes as part of E911 readdressing projects for improved emergency response. Our study compared automated and E911 methods for recovering and geocoding valid street addresses and assessed the impact of positional errors on exposure classification. Methods The current study is a secondary analysis of existing data that included 135 addresses self-reported by participants of a rural community study who were exposed via public drinking water to perfluorooctanoate (PFOA) released from a DuPont facility in Parkersburg, West Virginia. We converted pre-E911 to post-E911 addresses using two methods: automated ZP4 address-correction software with the U.S. Postal Service LACS database and E911 data provided by Wood County, West Virginia. Addresses were geocoded using TeleAtlas, an online commercial service, and ArcView with StreetMap Premium North America NAVTEQ 2008 enhanced street dataset. We calculated positional errors using GPS measurements collected at each address and assessed exposure based on geocoded location in relation to public water pipes. Results The county E911 data converted 89% of the eligible addresses compared to 35% by ZP4 LACS. ArcView/NAVTEQ geocoded more addresses (n = 130) and with smaller median distance between geocodes and GPS coordinates (39 meters) than TeleAtlas (n = 85, 188 meters). Without E911 address conversion, 25% of the geocodes would have been more than 1000 meters from the true location. Positional errors in TeleAtlas geocoding resulted in exposure misclassification of seven addresses whereas ArcView/NAVTEQ methods did not misclassify any addresses. Conclusions Although the study was limited by small

  5. Modified X-ray method of a study of duodenum

    Energy Technology Data Exchange (ETDEWEB)

    Korolyuk, I.P.; Bugakov, V.M.; Shinkin, V.M.

    A modified X-ray examination of the duodenum under hypotension is described. Compared with the existing method, this modification allows the duodenum to be investigated with double contrast: a high-concentration barium suspension and the gas formed after a dose of gas-forming powder. 327 patients were examined by this method: 126 were diagnosed with inflammatory diseases of the stomach and duodenum, 22 with duodenal peptic ulcer, 107 with pancreatitis, 48 with cholelithiasis and 24 with tumours of the pancreatoduodenal zone. 65 patients were operated on. Roentgeno-morphological comparisons were carried out for 66 patients with inflammatory diseases of the duodenum. Visualization of the duodenum was good or satisfactory in 283 patients. Owing to its sparing nature, the method may be used in any setting, including polyclinics.

  6. Technical Brief: A comparison of two methods of euthanasia on retinal dopamine levels

    OpenAIRE

    Hwang, Christopher K.; Iuvone, P. Michael

    2013-01-01

    Purpose Mice are commonly used in biomedical research, and euthanasia is an important part of mouse husbandry. Approved, humane methods of euthanasia are designed to minimize the potential for pain or discomfort, but may also influence the measurement of experimental variables. Methods We compared the effects of two approved methods of mouse euthanasia on the levels of retinal dopamine. We examined the level of retinal dopamine, a commonly studied neuromodulator, following euthanasia by carbo...

  7. Comparison of conventional and digital cephalometric analysis: A pilot study

    Directory of Open Access Journals (Sweden)

    Hemlata Bhagwan Tanwani

    2014-01-01

    Full Text Available Aim: The aim of the study was to analyze and compare manual cephalometric tracings with computerized cephalometric tracings using Burstone hard tissue analysis and McNamara analysis. Materials and Methods: Conventional lateral cephalograms of 20 subjects were obtained and manually traced. The radiographs were subsequently scanned and digitized using Dolphin Imaging software version 11.7. McNamara analysis and Burstone hard tissue analysis were performed by both the conventional and the digital method. No distinction was made for age or gender. Data were subjected to statistical analysis using the SPSS 17.0 statistical software program (Chicago, Illinois, USA). A paired t-test was used to detect differences between the manual and digital methods. Statistical significance was set at the P < 0.05 level of confidence. Results: (A) In the Burstone analysis, the variable N-Pg II Hp showed a statistically very significant difference, and ANS-N, U1-NF, N-B II Hp, L1-Mp, and Go-Pg showed statistically significant differences. (B) In the McNamara analysis, the nasolabial angle and L1-APog showed statistically significant differences, and mandibular length showed a statistically very significant difference. Conclusion: Within the limits of this study, it is reasonable to conclude that manual and digital tracings show statistically significant differences.

  8. A practical comparison of methods to assess sum-of-products

    International Nuclear Information System (INIS)

    Rauzy, A.; Chatelet, E.; Dutuit, Y.; Berenguer, C.

    2003-01-01

    Many methods have been proposed in the literature to assess the probability of a sum-of-products. This problem has been shown to be computationally hard (namely #P-hard), so algorithms can be compared only from a practical point of view. In this article, we first propose an efficient implementation of the pivotal decomposition method. This kind of algorithm is widely used in the artificial intelligence framework, but in the reliability engineering framework it is almost never considered except as a pedagogical tool. We report experimental results showing that this method is in general much more efficient than classical methods that rewrite the sum-of-products under study into an equivalent sum of disjoint products. We then derive from our method a factorization algorithm to be used as a preprocessing step for binary decision diagrams. Experimental results show that this latter approach outperforms the former.
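
A minimal sketch of pivotal (Shannon) decomposition for a coherent sum-of-products over independent basic events, assuming known event probabilities (an illustration, not the authors' optimized implementation):

```python
def sop_probability(products, prob):
    """Probability that at least one product (a set of independent basic
    events) occurs, by pivoting on one variable at a time:
    P = p(x) * P[x=1 branch] + (1 - p(x)) * P[x=0 branch]."""
    def rec(prods):
        if any(not p for p in prods):   # an empty product is always true
            return 1.0
        if not prods:                   # an empty sum is always false
            return 0.0
        x = next(iter(prods[0]))        # pivot variable
        # x = 1: x drops out of every product; deduplicate the result
        true_branch = list({p - {x} for p in prods})
        # x = 0: every product containing x fails
        false_branch = [p for p in prods if x not in p]
        return prob[x] * rec(true_branch) + (1 - prob[x]) * rec(false_branch)
    return rec([frozenset(p) for p in products])
```

For example, two single-event products {a} and {b} with p = 0.5 each give P = 1 - 0.25 = 0.75, without ever forming disjoint products explicitly.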

  9. Comparison of tuning methods for design of PID controller as an AVR

    International Nuclear Information System (INIS)

    Sheikh, S.A.; Ahmed, I.; Unar, M.A.

    2009-01-01

    The primary means of generator reactive power control is generator-excitation control using an Automatic Voltage Regulator (AVR). The role of the AVR is to hold the terminal voltage magnitude of a synchronous generator at a specified level. This paper presents the design of a proportional-integral-derivative (PID) controller as an AVR. The PID controller has been tuned by various methods: the PID parameters are computed from the process-reaction curve, closed-loop and open-loop system responses, and gain-margin and phase-margin specifications. It has been found that the Zhuang-Atherton method and the Ho, Hang and Cao method are much superior to the conventional Ziegler-Nichols rules. The performance of the controller has been evaluated through simulation studies in the MATLAB environment. It has been demonstrated that the PID controller, tuned with these methods, yields highly satisfactory closed-loop performance. (author)
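
For reference, the classical Ziegler-Nichols closed-loop rules that the better-performing methods are compared against follow directly from the ultimate gain K_u and ultimate period T_u (standard textbook constants; the other tuning rules in the paper are not reproduced here):

```python
def ziegler_nichols_pid(k_u, t_u):
    """Classic Ziegler-Nichols closed-loop PID tuning:
    Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8, returned in parallel form."""
    k_p = 0.6 * k_u
    t_i = t_u / 2.0
    t_d = t_u / 8.0
    return k_p, k_p / t_i, k_p * t_d   # Kp, Ki, Kd
```

The controller is then C(s) = Kp + Ki/s + Kd*s; the superior methods cited in the abstract replace these fixed constants with rules derived from gain- and phase-margin specifications.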

  10. A comparison of photographic, replication and direct clinical examination methods for detecting developmental defects of enamel

    Directory of Open Access Journals (Sweden)

    Pakshir Hamid-Reza

    2011-04-01

    Full Text Available Abstract Background Different methods have been used for detecting developmental defects of enamel (DDE). This study aimed to compare photographic and replication methods with the direct clinical examination method for detecting DDE in children's permanent incisors. Methods 110 8-10-year-old schoolchildren were randomly selected from an examined sample of 335 primary Shiraz school children. The modified DDE index was used in all three methods. Direct examinations were conducted by two calibrated examiners using flat oral mirrors and tongue blades. Photographs were taken using a digital SLR camera (Nikon D-80, macro lens, macro flashes, and matt flash filters). Impressions were taken using addition-curing silicone material and casts made in orthodontic stone. Impressions and models were both assessed using dental loupes (magnification = ×3.5). Each photograph/impression/cast was assessed by two calibrated examiners. Reliability of the methods was assessed using kappa agreement tests. Kappa agreement, McNemar's and two-sample proportion tests were used to compare results obtained by the photographic and replication methods with those obtained by the direct examination method. Results Of the 110 invited children, 90 were photographed and 73 had impressions taken. The photographic method had higher reliability levels than the other two methods, and compared to the direct clinical examination detected significantly more subjects with DDE (P = 0.002), 3.1 times more DDE (P Conclusion The photographic method was much more sensitive than direct clinical examination in detecting DDE and was the best of the three methods for epidemiological studies. The replication method provided less information about DDE compared to photography. Results of this study have implications for both epidemiological and detailed clinical studies on DDE.

  11. A comparative study of Averrhoa bilimbi extraction methods

    Science.gov (United States)

    Zulhaimi, H. I.; Rosli, I. R.; Kasim, K. F.; Akmal, H. Muhammad; Nuradibah, M. A.; Sam, S. T.

    2017-09-01

    In recent years, bioactive compounds in plants have come into the limelight in the food and pharmaceutical markets, prompting research interest in effective technologies for extracting bioactive substances. This study therefore focuses on extraction from Averrhoa bilimbi by two extraction techniques, namely maceration and ultrasound-assisted extraction. Several parts of the Averrhoa bilimbi plant were taken as extraction samples: fruits, leaves and twigs. Different solvents, namely methanol, ethanol and distilled water, were used in the process. Fruit extracts gave the highest extraction yield compared with the other plant parts. Ethanol and distilled water played a more significant role than methanol for all plant parts and both extraction techniques. The results also show that ultrasound-assisted extraction gave results comparable to maceration, and its shorter extraction time is an advantage for industrial implementation.

  12. Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods

    Science.gov (United States)

    Adams, G. F.

    1980-01-01

    The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.
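
The low- and high-pressure limits mentioned above are connected by fall-off expressions; in the simplest Lindemann form (Troe's model multiplies this by a broadening factor F, omitted here), the effective rate constant is:

```python
def lindemann_k(k0, k_inf, m):
    """Lindemann fall-off interpolation between the low-pressure limit
    (k0*[M], proportional to bath-gas concentration m) and the
    high-pressure limit (k_inf)."""
    return k0 * m * k_inf / (k0 * m + k_inf)
```

At low [M] the expression reduces to k0*[M] (collision-limited), at high [M] to k_inf (reaction-limited), which is exactly the pressure dependence that complicates a single rate-coefficient expression for flame modeling.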

  13. Geocoding rural addresses in a community contaminated by PFOA: a comparison of methods.

    Science.gov (United States)

    Vieira, Verónica M; Howard, Gregory J; Gallagher, Lisa G; Fletcher, Tony

    2010-04-21

    Location is often an important component of exposure assessment, and positional errors in geocoding may result in exposure misclassification. In rural areas, successful geocoding to a street address is limited by rural route boxes. Communities have assigned physical street addresses to rural route boxes as part of E911 readdressing projects for improved emergency response. Our study compared automated and E911 methods for recovering and geocoding valid street addresses and assessed the impact of positional errors on exposure classification. The current study is a secondary analysis of existing data that included 135 addresses self-reported by participants of a rural community study who were exposed via public drinking water to perfluorooctanoate (PFOA) released from a DuPont facility in Parkersburg, West Virginia. We converted pre-E911 to post-E911 addresses using two methods: automated ZP4 address-correction software with the U.S. Postal Service LACS database and E911 data provided by Wood County, West Virginia. Addresses were geocoded using TeleAtlas, an online commercial service, and ArcView with StreetMap Premium North America NAVTEQ 2008 enhanced street dataset. We calculated positional errors using GPS measurements collected at each address and assessed exposure based on geocoded location in relation to public water pipes. The county E911 data converted 89% of the eligible addresses compared to 35% by ZP4 LACS. ArcView/NAVTEQ geocoded more addresses (n = 130) and with smaller median distance between geocodes and GPS coordinates (39 meters) than TeleAtlas (n = 85, 188 meters). Without E911 address conversion, 25% of the geocodes would have been more than 1000 meters from the true location. Positional errors in TeleAtlas geocoding resulted in exposure misclassification of seven addresses whereas ArcView/NAVTEQ methods did not misclassify any addresses. 
Although the study was limited by small numbers, our results suggest that the use of county E911 data in rural
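
The positional errors reported above are distances between geocoded and GPS coordinates; such great-circle distances can be computed with the standard haversine formula (a generic sketch, not the study's GIS workflow):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points,
    using a spherical Earth of mean radius 6371 km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))
```

Exposure classification errors of the kind described would then follow from comparing such distances against the buffer around the public water pipes.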

  14. Variation in Results of Volume Measurements of Stumps of Lower-Limb Amputees : A Comparison of 4 Methods

    NARCIS (Netherlands)

    de Boer-Wilzing, Vera G.; Bolt, Arjen; Geertzen, Jan H.; Emmelot, Cornelis H.; Baars, Erwin C.; Dijkstra, Pieter U.

    de Boer-Wilzing VG, Bolt A, Geertzen JH, Emmelot CH, Baars EC, Dijkstra PU. Variation in results of volume measurements of stumps of lower-limb amputees: a comparison of 4 methods. Arch Phys Med Rehabil 2011;92:941-6. Objective: To analyze the reliability of 4 methods (water immersion,

  15. A morphometric study of antral G-cell density in a sample of adult general population: comparison of three different methods and correlation with patient demography, helicobacter pylori infection, histomorphology and circulating gastrin levels

    DEFF Research Database (Denmark)

    Petersson, Fredrik; Borch, Kurt; Rehfeld, Jens F

    2008-01-01

    whether these methods are intercorrelated and the relation of these methods to plasma gastrin concentrations, demography, the occurrence of H. pylori infection and chronic gastritis. Gastric antral mucosal biopsy sections from 273 adults (188 with and 85 without H pylori infection) from a general...... population sample were examined immunohistochemically for G-cells using cell counting, stereology (point counting) and computerized image analysis. Gastritis was scored according to the updated Sydney system. Basal plasma gastrin concentrations were measured by radioimmunoassay. The three methods for G...

  16. Comparison of the learning of two notations: A pilot study

    Directory of Open Access Journals (Sweden)

    ASHFAQ AKRAM

    2017-05-01

    Full Text Available Introduction: MICAP is a new notation in which the teeth are indicated by letters (I-incisor, C-canine, P-premolar, M-molar) and numbers [1, 2, 3] written in superscript and subscript on the relevant letters. The FDI tooth notation is a two-digit system in which one digit denotes the quadrant and the other the tooth within that quadrant. This study aimed to compare the short-term retention of knowledge of the two notation systems (the FDI two-digit system and the MICAP notation) taught by the lecture method. Methods: Undergraduate students (N=80) of three schools participated in a cross-over study. Two theory-driven, classroom-based lectures on the MICAP and FDI notations were delivered separately. Data were collected by having eight randomly selected permanent teeth written in MICAP format and FDI format at pre-test (before the lecture), post-test I (immediately after the lecture) and post-test II (one week after the lecture). Analysis was done with SPSS version 20.0 using repeated-measures ANCOVA and the independent t-test. Results: The results of the pre-test and post-test I were similar for the FDI notation. Similar results were found between post-test I and post-test II for both the MICAP and FDI notations. Conclusion: The study findings indicated that the two notations (FDI and MICAP) were comparable in terms of learning and short-term retention. However, the sample size used in this study may not reflect the global scenario. Therefore, we suggest that more studies be performed before prospective adoption of MICAP in the dental curriculum.

  17. Studies on the quantitative autoradiography. III. Quantitative comparison of a novel tissue-mold measurement technique "paste-mold method," to the semiquantitative whole body autoradiography (WBA), using the same animals.

    Science.gov (United States)

    Motoji, N; Hamai, Y; Niikura, Y; Shigematsu, A

    1995-01-01

    A novel preparation technique, the so-called "paste-mold" method, was devised for organ and tissue distribution studies. It is most powerful when combined with autoradioluminography (ARLG), which was established and validated recently by the working group of Forum '93 of the Japanese Society for the Study of Xenobiotics. A small piece (10-50 mg) of each organ or tissue suffices for measuring its radioactive concentration and was sampled from the remains of the frozen carcass used for macroautoradiography (MARG). The frozen pieces were solubilized by mixing with a suitable volume of gelatine and strong alkaline solution, followed by mild heating at 40 degrees C for a few hours. The tissue paste was then molded in a template pattern to form small plates. The molded plates were contacted with an imaging plate (IP) to record their radioactive concentration, and the exposed IP was processed with a BAS2000. The molded plates were formed at a thickness of 200 microns, i.e. infinite thickness with respect to soft beta rays, so the resulting relative intensities, expressed as (PSL-BG)/S values, reliably represent the ratios of radioactive concentrations in organs and tissues without any calibration for the beta self-absorption coefficient. In addition, the left half of the frozen carcass was used to make a whole-body autoradiograph (WBA) before the paste-mold preparation. The (PSL-BG)/S values of organs and tissues were compared between frozen and dried sections. The relative intensities, (PSL-BG)/S, obtained by the paste-mold preparation agreed well with those from the frozen sections rather than the dried sections. (ABSTRACT TRUNCATED AT 250 WORDS)

  18. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    Science.gov (United States)

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72 % up to 100 % of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100 % of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences of single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means seems to be a good alternative, being easier to use while giving similar results when applied to real data.
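
A toy one-dimensional sketch of why a Gaussian mixture is more flexible than k-means when cluster spreads differ (plain-NumPy stand-ins, not the IDEFICS analysis; k-means is initialised at the data extremes and EM is warm-started from the k-means labels):

```python
import numpy as np

def kmeans_1d(x, iters=50):
    """2-means in 1-D with deterministic initialisation at the extremes."""
    c = np.array([x.min(), x.max()])
    for _ in range(iters):
        lab = np.abs(x[:, None] - c[None, :]).argmin(1)
        c = np.array([x[lab == j].mean() for j in range(2)])
    return lab

def gmm_em_1d(x, lab, iters=300):
    """EM for a 2-component 1-D Gaussian mixture, warm-started from a hard
    labelling; each component keeps its own weight and variance."""
    mu = np.array([x[lab == j].mean() for j in range(2)])
    var = np.array([x[lab == j].var() + 1e-6 for j in range(2)])
    w = np.array([(lab == j).mean() for j in range(2)])
    for _ in range(iters):
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(1, keepdims=True)        # responsibilities
        nk = r.sum(0)
        w, mu = nk / len(x), (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6
    return r.argmax(1), mu, var

# one tight and one broad cluster: equal-distance (k-means) splitting pulls
# some broad-cluster points into the tight cluster; EM, which models the
# per-component variance, does not
x = np.concatenate([np.linspace(-0.5, 0.5, 60), np.linspace(4.0, 16.0, 60)])
lab_km = kmeans_1d(x)
lab_gmm, mu, var = gmm_em_1d(x, lab_km)
```

The midpoint boundary of k-means misassigns the low tail of the broad cluster, while the fitted mixture recovers the two very different component variances, which is the flexibility regarding cluster volume and shape that the abstract refers to.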

  19. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    Science.gov (United States)

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues (Science, 2004, 306, 1776–1780) to assess affect, activities and time use in everyday life. We sought to validate DRM affect ratings by comparison with contemporaneous EMA ratings in a sample of 94 working women monitored over work and leisure days. Six EMA ratings of happiness, tiredness, stress, and anger/frustration were obtained over each 24 h period, and were compared with DRM ratings for the same hour, recorded retrospectively at the end of the day. Similar profiles of affect intensity were recorded with the two techniques. The between-person correlations adjusted for attenuation ranged from 0.58 (stress, working day) to 0.90 (happiness, leisure day). The strength of associations was not related to age, educational attainment, or depressed mood. We conclude that the DRM provides reasonably reliable estimates both of the intensity of affect and variations in affect over the day, so is a valuable instrument for the measurement of everyday experience in health and social research. PMID:21113328
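
The "correlations adjusted for attenuation" use Spearman's classical correction, which scales the observed correlation by the reliabilities of the two measures (a sketch with made-up numbers, not the study's data):

```python
def disattenuated_r(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation:
    r_true = r_obs / sqrt(rel_x * rel_y),
    where rel_x and rel_y are the reliabilities of the two measures."""
    return r_obs / (rel_x * rel_y) ** 0.5
```

For instance, an observed correlation of 0.45 between two measures with reliabilities 0.81 and 0.64 disattenuates to 0.45/0.72 = 0.625, which is how measurement error in the EMA and DRM ratings is prevented from deflating the reported between-person correlations.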

  20. A liquid chromatographic method for determination of theophylline in serum and capillary blood--a comparison.

    Science.gov (United States)

    Gartzke, J; Jäger, H; Vins, I

    1991-01-01

    A simple, fast and reliable liquid chromatographic method for the determination of theophylline in serum and capillary blood after solid-phase extraction is described for therapeutic drug monitoring. The use of capillary blood permits the determination of an individual drug profile and other pharmacokinetic studies in neonates and infants. There were no differences between venous and capillary blood levels, but these values compared poorly with those in serum. An adjustment of the results by correcting for the different volumes of serum and blood using the haematocrit was unsuccessful. Differences in the binding of theophylline to erythrocytes could explain the differences between serum and blood levels of theophylline.

  1. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    Science.gov (United States)

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045
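
The paired t-test used in the agreement analysis reduces to a few lines; a generic stdlib-only sketch (not tied to the study's data):

```python
import math

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two matched samples:
    t = mean(d) / (sd(d) / sqrt(n)) with d = x - y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1
```

A large |t| on the paired differences indicates a systematic offset between the two assays even when a regression line through the paired values looks acceptable, which is consistent with the mixed agreement results reported above.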

  2. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  3. Modeling factors influencing the demand for emergency department services in ontario: a comparison of methods

    Directory of Open Access Journals (Sweden)

    Meaney Christopher

    2011-08-01

    Full Text Available Abstract Background Emergency departments are medical treatment facilities, designed to provide episodic care to patients suffering from acute injuries and illnesses as well as patients who are experiencing sporadic flare-ups of underlying chronic medical conditions which require immediate attention. Supply and demand for emergency department services varies across geographic regions and time. Some persons do not rely on the service at all, whereas others use the service on repeated occasions. Issues regarding increased wait times for services and crowding illustrate the need to investigate which factors are associated with increased frequency of emergency department utilization. The evidence from this study can help inform policy makers on the appropriate mix of supply and demand targeted health care policies necessary to ensure that patients receive appropriate health care delivery in an efficient and cost-effective manner. The purpose of this report is to assess those factors resulting in increased demand for emergency department services in Ontario. We assess how utilization rates vary according to the severity of patient presentation in the emergency department. We are specifically interested in the impact that access to primary care physicians has on the demand for emergency department services. Additionally, we wish to investigate these trends using a series of novel regression models for count outcomes which have yet to be employed in the domain of emergency medical research. Methods Data regarding the frequency of emergency department visits for the respondents of the Canadian Community Health Survey (CCHS) during our study interval (2003-2005) are obtained from the National Ambulatory Care Reporting System (NACRS). Patients' emergency department utilizations were linked with information from the Canadian Community Health Survey (CCHS), which provides individual-level medical, socio-demographic, psychological and behavioral information for

  4. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods

    Science.gov (United States)

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard. Birdsey

    2010-01-01

    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  5. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Directory of Open Access Journals (Sweden)

    Eduardo Garza-Gisholt

    Full Text Available Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography.
The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect

  6. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Science.gov (United States)

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.
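
The contrast the record draws between interpolation (exact at the sampled points) and kernel smoothing (noise-suppressing) can be sketched with off-the-shelf tools. The sketch below uses Python with SciPy rather than the authors' R-script, and the sampling sites, counts and smoothing parameter are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import gaussian_filter

# Hypothetical retinal sampling sites and cell counts (a shallow density
# gradient plus counting noise), standing in for stereological count data.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))                 # site coordinates (mm)
counts = 100 + 20 * xy[:, 0] + rng.normal(0, 5, 50)   # cells per sampling site

# Thin-plate-spline interpolation: with smoothing=0 it passes exactly
# through the observations, so outliers and sampling error are preserved.
tps = RBFInterpolator(xy, counts, kernel='thin_plate_spline', smoothing=0)

# Evaluate on a dense grid, as needed for an iso-density contour map.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
interp_map = tps(grid).reshape(50, 50)

# Gaussian-kernel smoothing of the gridded values suppresses the 'noise'
# and leaves a clearer picture of the dominant gradient.
smooth_map = gaussian_filter(interp_map, sigma=2)
```

Larger `sigma` values smooth more aggressively; as the abstract notes, the choice of smoothing parameter can change the apparent topography, so it should be reported alongside the map.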

  7. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for the construction of CIs for the CCC, but a comprehensive comparison of them has not been attempted. The methods consist of the delta method and jackknifing with and without Fisher's Z-transformation, respectively, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with a conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
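
For readers unfamiliar with the quantities being compared, the following is a minimal sketch of Lin's CCC and a jackknife CI on Fisher's Z scale (the "JZ" approach described above) for the two-rater case. It is written in Python/NumPy; the function names and the 95% critical value are illustrative choices, and the study's own simulations cover more general settings.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for two raters."""
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()                 # 1/n (biased) variances
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

def ccc_ci_jackknife_z(x, y, z_crit=1.96):
    """Jackknife CI for the CCC computed on Fisher's Z scale (JZ method)."""
    n = len(x)
    theta = np.arctanh(ccc(x, y))               # full-sample estimate, Z scale
    idx = np.arange(n)
    loo = np.array([np.arctanh(ccc(x[idx != i], y[idx != i]))
                    for i in range(n)])         # leave-one-out estimates
    pseudo = n * theta - (n - 1) * loo          # jackknife pseudovalues
    est, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
    # Back-transform the Z-scale interval to the CCC scale.
    return np.tanh(est - z_crit * se), np.tanh(est + z_crit * se)
```

The back-transformation through tanh keeps the interval inside (-1, 1), which is one reason the Z-scale construction tends to behave well near the boundary.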

  8. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still shows large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia. Thereby, a special focus is on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for the different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines and for 850 hPa it is followed by the different types of support vector machine and
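
Of the deterministic methods mentioned, inverse distance weighting is the simplest to illustrate. Below is a minimal NumPy sketch, assuming scattered station observations of a single wind component; the power parameter and the small epsilon guard are conventional choices, not values from the study.

```python
import numpy as np

def idw(xy_obs, values, xy_new, power=2, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered observations.

    xy_obs:  (n, 2) station coordinates; values: (n,) observed values
    xy_new:  (m, 2) prediction locations
    """
    # Pairwise distances between prediction points and stations: (m, n).
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # closer stations carry more weight
    return (w * values).sum(axis=1) / w.sum(axis=1)
```

Applied to, say, the u component, `idw(stations, u_obs, grid)` yields a gridded field; unlike the hybrid methods evaluated in the study, it uses no covariates such as elevation.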

  9. Analysis of cost data in a cluster-randomized, controlled trial: comparison of methods

    DEFF Research Database (Denmark)

    Sokolowski, Ineta; Ørnbøl, Eva; Rosendal, Marianne

    We consider health care data from a cluster-randomized intervention study in primary care to test whether the average health care costs among study patients differ between the two groups. The problems of analysing cost data are that most data are severely skewed. Median instead of mean ... studies have used non-valid analysis of skewed data. We propose two different methods to compare mean cost in two groups. Firstly, we use a non-parametric bootstrap method where the re-sampling takes place on two levels in order to take into account the cluster effect. Secondly, we proceed with a log-transformation of the cost data and apply the normal theory on these data. Again we try to account for the cluster effect. The performance of these two methods is investigated in a simulation study. The advantages and disadvantages of the different approaches are discussed.
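
A two-level non-parametric bootstrap of the kind described, resampling clusters first and then patients within the sampled clusters, might be sketched as follows. This is an illustrative Python implementation under simplifying assumptions (exchangeable clusters, a percentile interval), not the authors' exact procedure.

```python
import numpy as np

def cluster_bootstrap_mean_diff(costs_a, costs_b, n_boot=2000, seed=1):
    """Two-level bootstrap CI for the difference in mean cost between groups.

    costs_a / costs_b: lists of per-cluster arrays of patient costs.
    Level 1 resamples clusters with replacement; level 2 resamples patients
    within each sampled cluster, so the cluster effect enters the CI.
    """
    rng = np.random.default_rng(seed)

    def boot_mean(clusters):
        k = len(clusters)
        picked = rng.integers(0, k, size=k)                # resample clusters
        pats = [rng.choice(clusters[i], size=len(clusters[i]), replace=True)
                for i in picked]                           # resample patients
        return np.concatenate(pats).mean()

    diffs = np.array([boot_mean(costs_a) - boot_mean(costs_b)
                      for _ in range(n_boot)])
    return np.percentile(diffs, [2.5, 97.5])
```

Because the bootstrap works directly on the raw costs, no distributional assumption is needed for the skewed data, unlike the log-normal route.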

  10. Production studies and documentary participants: a method

    NARCIS (Netherlands)

    Sanders, Willemien

    2016-01-01

    It was only after I finished my PhD thesis that I learned that my research related to production studies. Departing from the question of ethics in documentary filmmaking, I investigated both the perspective of filmmakers and participants on ethical issues in the documentary filmmaking practice,

  11. DNA barcode analysis: a comparison of phylogenetic and statistical classification methods.

    Science.gov (United States)

    Austerlitz, Frederic; David, Olivier; Schaeffer, Brigitte; Bleakley, Kevin; Olteanu, Madalina; Leblois, Raphael; Veuille, Michel; Laredo, Catherine

    2009-11-10

    DNA barcoding aims to assign individuals to given species according to their sequence at a small locus, generally part of the CO1 mitochondrial gene. Amongst other issues, this raises the question of how to deal with within-species genetic variability and potential transpecific polymorphism. In this context, we examine several assignation methods belonging to two main categories: (i) phylogenetic methods (neighbour-joining and PhyML) that attempt to account for the genealogical framework of DNA evolution and (ii) supervised classification methods (k-nearest neighbour, CART, random forest and kernel methods). These methods range from basic to elaborate. We investigated the ability of each method to correctly classify query sequences drawn from samples of related species using both simulated and real data. Simulated data sets were generated using coalescent simulations in which we varied the genealogical history, mutation parameter, sample size and number of species. No method was found to be the best in all cases. The simplest method of all, "one nearest neighbour", was found to be the most reliable with respect to changes in the parameters of the data sets. The parameter most influencing the performance of the various methods was molecular diversity of the data. Addition of genetically independent loci--nuclear genes--improved the predictive performance of most methods. The study implies that taxonomists can influence the quality of their analyses either by choosing a method best-adapted to the configuration of their sample, or, given a certain method, increasing the sample size or altering the amount of molecular diversity. This can be achieved either by sequencing more mtDNA or by sequencing additional nuclear genes. In the latter case, they may also have to modify their data analysis method.
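
The "one nearest neighbour" method that proved most reliable is simple enough to sketch directly: assign the query to the species of the closest reference sequence. The toy implementation below uses Hamming distance on aligned sequences of equal length, which is an assumption made for illustration; the species labels and sequences are invented.

```python
def hamming(a, b):
    """Number of mismatching sites between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def one_nn(query, references):
    """Assign a query to the species of its single nearest reference.

    references: list of (species_label, sequence) pairs.
    """
    return min(references, key=lambda ref: hamming(query, ref[1]))[0]
```

In practice the distance would be computed on aligned CO1 barcodes (and, as the study suggests, on additional nuclear loci), but the assignment rule itself stays this simple.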

  12. DNA barcode analysis: a comparison of phylogenetic and statistical classification methods

    Directory of Open Access Journals (Sweden)

    Leblois Raphael

    2009-11-01

    Full Text Available Abstract Background DNA barcoding aims to assign individuals to given species according to their sequence at a small locus, generally part of the CO1 mitochondrial gene. Amongst other issues, this raises the question of how to deal with within-species genetic variability and potential transpecific polymorphism. In this context, we examine several assignation methods belonging to two main categories: (i) phylogenetic methods (neighbour-joining and PhyML) that attempt to account for the genealogical framework of DNA evolution and (ii) supervised classification methods (k-nearest neighbour, CART, random forest and kernel methods). These methods range from basic to elaborate. We investigated the ability of each method to correctly classify query sequences drawn from samples of related species using both simulated and real data. Simulated data sets were generated using coalescent simulations in which we varied the genealogical history, mutation parameter, sample size and number of species. Results No method was found to be the best in all cases. The simplest method of all, "one nearest neighbour", was found to be the most reliable with respect to changes in the parameters of the data sets. The parameter most influencing the performance of the various methods was molecular diversity of the data. Addition of genetically independent loci - nuclear genes - improved the predictive performance of most methods. Conclusion The study implies that taxonomists can influence the quality of their analyses either by choosing a method best-adapted to the configuration of their sample, or, given a certain method, increasing the sample size or altering the amount of molecular diversity. This can be achieved either by sequencing more mtDNA or by sequencing additional nuclear genes. In the latter case, they may also have to modify their data analysis method.

  13. A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines

    Energy Technology Data Exchange (ETDEWEB)

    Meza, Juan C.; Woods, Mark

    2009-12-18

    Machine or statistical learning is a growing field that encompasses many scientific problems, including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data', with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three Type Ia supernova data sets provided by the Nearby Supernova Factory and found that while both methods give accuracies of approximately 95%, the rule ensemble method gives much lower false negative rates.
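
The comparison above rests on two summary statistics: overall accuracy and the false-negative rate. A small helper for computing both from predicted labels might look like this (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def accuracy_and_fnr(y_true, y_pred, positive=1):
    """Overall accuracy, and false-negative rate = missed positives / positives."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = (y_true == y_pred).mean()
    pos = y_true == positive
    fnr = (y_pred[pos] != positive).mean()
    return acc, fnr
```

Two classifiers can share the same accuracy while differing sharply in FNR, which is exactly the distinction the paper draws between the SVM and the rule ensemble.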

  14. A comparison of methods of determining the 100 percent survival of preserved red cells

    International Nuclear Information System (INIS)

    Valeri, C.R.; Pivacek, L.E.; Ouellet, R.; Gray, A.

    1984-01-01

    Studies were done to compare three methods to determine the 100 percent survival value from which to estimate the 24-hour posttransfusion survival of preserved red cells. The following methods using small aliquots of 51Cr-labeled autologous preserved red cells were evaluated: First, the 125I-albumin method, which is an indirect measurement of the recipient's red cell volume derived from the plasma volume measured using 125I-labeled albumin and the total body hematocrit. Second, the body surface area method (BSA), in which the recipient's red cell volume is derived from a body surface area nomogram. Third, an extrapolation method, which extrapolates to zero time the radioactivity associated with the red cells in the recipient's circulation from 10 to 20 or 15 to 30 minutes after transfusion. The three methods gave similar results in all studies in which less than 20 percent of the transfused red cells were nonviable (24-hour posttransfusion survival values of between 80-100%), but not when more than 20 percent of the red cells were nonviable. When 21 to 35 percent of the transfused red cells were nonviable (24-hour posttransfusion survivals of 65 to 79%), values with the 125I-albumin method and the body surface area method were about 5 percent lower (p less than 0.001) than values with the extrapolation method. When greater than 35 percent of the red cells were nonviable (24-hour posttransfusion survival values of less than 65%), values with the 125I-albumin method and the body surface area method were about 10 percent lower (p less than 0.001) than those obtained by the extrapolation method.
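
The third approach, extrapolation to zero time, amounts to fitting the early activity measurements and reading off the intercept as the 100 percent survival reference. A minimal sketch, assuming a linear fit over the 10-30 minute samples (the record does not specify the fitted model):

```python
import numpy as np

def activity_at_zero(times_min, activity):
    """Extrapolate circulating 51Cr activity back to t = 0 by a linear fit.

    The intercept serves as the 100%-survival reference value.
    """
    slope, intercept = np.polyfit(times_min, activity, 1)
    return intercept

def survival_24h(activity_24h, reference):
    """24-hour posttransfusion survival as a percentage of the t = 0 value."""
    return 100.0 * activity_24h / reference
```

With a reference fixed this way, a 24-hour activity reading converts directly to the survival percentage used throughout the comparison.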

  15. Development of a method for the comparison of final repository sites in different host rock formations; Weiterentwicklung einer Methode zum Vergleich von Endlagerstandorten in unterschiedlichen Wirtsgesteinsformationen

    Energy Technology Data Exchange (ETDEWEB)

    Fischer-Appelt, Klaus; Frieling, Gerd; Kock, Ingo; and others

    2017-10-15

    The report on the development of a method for the comparison of final repository sites in different host rock formations covers the following issues: the influence of the retrievability requirement on the methodology; a study on the possible extension of the methodology to repository sites with crystalline host rock (boundary conditions in Germany, the final disposal concept for crystalline host rock, a generic extension of the VerSi method, the identification, classification and relevance weighting of safety functions, the relevance of the safety functions for crystalline host rock formations, and a review of the methodological changes needed for crystalline rock sites under low-permeability cover); and a study on the applicability of the methodology for the determination of site regions for surface exploration (phase 1).

  16. Undergraduate prosthetics and orthotics teaching methods: A baseline for international comparison.

    Science.gov (United States)

    Aminian, Gholamreza; O'Toole, John M; Mehraban, Afsoon Hassani

    2015-08-01

    Education in prosthetics and orthotics is a relatively recent professional program. While there has been some work on various teaching methods and strategies in international medical education, limited publication exists within prosthetics and orthotics. To identify the teaching and learning methods used in Bachelor-level prosthetics and orthotics programs that are given the highest priority by expert prosthetics and orthotics instructors from regions with a range of economic development. Mixed method. The study partly documented by this article utilized a mixed method approach (qualitative and quantitative methods) in which each phase provided data for the other phases. It began with analysis of prosthetics and orthotics curricula documents, which was followed by a broad survey of instructors in this field and then a modified Delphi process. The expert instructors who participated in this study gave high priority to student-centered, small-group methods that encourage critical thinking and may lead to lifelong learning. Instructors from more developed nations placed higher priority on students' independent acquisition of prosthetics and orthotics knowledge, particularly in clinical training. Application of student-centered approaches to prosthetics and orthotics programs may be preferred by many experts, but there appeared to be regional differences in the priority given to different teaching methods. The results of this study identify the methods of teaching that are preferred by expert prosthetics and orthotics instructors from a variety of regions. This treatment of current instructional techniques may inform instructor choice of teaching methods that impact the quality of education and improve the professional skills of students. © The International Society for Prosthetics and Orthotics 2014.

  17. A comparison of statistical methods for identifying out-of-date systematic reviews.

    Directory of Open Access Journals (Sweden)

    Porjai Pattanittum

    Full Text Available BACKGROUND: Systematic reviews (SRs) can provide accurate and reliable evidence, typically about the effectiveness of health interventions. Evidence is dynamic, and if SRs are out-of-date this information may not be useful; it may even be harmful. This study aimed to compare five statistical methods to identify out-of-date SRs. METHODS: A retrospective cohort of SRs registered in the Cochrane Pregnancy and Childbirth Group (CPCG), published between 2008 and 2010, were considered for inclusion. For each eligible CPCG review, data were extracted and "3-years previous" meta-analyses were assessed for the need to update, given the data from the most recent 3 years. Each of the five statistical methods was used, with random effects analyses throughout the study. RESULTS: Eighty reviews were included in this study; most were in the area of induction of labour. The numbers of reviews identified as being out-of-date using the Ottawa, recursive cumulative meta-analysis (CMA), and Barrowman methods were 34, 7, and 7 respectively. No reviews were identified as being out-of-date using the simulation-based power method, or the CMA for sufficiency and stability method. The overall agreement among the three discriminating statistical methods was slight (kappa = 0.14; 95% CI 0.05 to 0.23). The recursive cumulative meta-analysis, Ottawa, and Barrowman methods were practical according to the study criteria. CONCLUSION: Our study shows that three practical statistical methods could be applied to examine the need to update SRs.
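
The "slight" overall agreement among the three discriminating methods is reported as a kappa statistic. For more than two raters (here, three methods each classifying 80 reviews as out-of-date or not), Fleiss' kappa is a common choice; the abstract does not state the exact variant used, so the following NumPy sketch is illustrative:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa from an (items x categories) table of rater counts.

    ratings[i, j] = number of raters assigning item i to category j;
    every item must be rated by the same number of raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]                  # raters per item (constant)
    p_j = ratings.sum(axis=0) / ratings.sum()   # overall category proportions
    p_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()   # observed vs chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

Values near 0 indicate agreement no better than chance; the reported 0.14 falls in the conventional "slight" band.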

  18. Structural identifiability of systems biology models: a critical comparison of methods.

    Directory of Open Access Journals (Sweden)

    Oana-Teodora Chis

    Full Text Available Analysing the properties of a biological system through in silico experimentation requires a satisfactory mathematical representation of the system, including accurate values of the model parameters. Fortunately, modern experimental techniques allow obtaining time-series data of appropriate quality which may then be used to estimate unknown parameters. However, in many cases, a subset of those parameters may not be uniquely estimated, independently of the experimental data available or the numerical techniques used for estimation. This lack of identifiability is related to the structure of the model, i.e. the system dynamics plus the observation function. Despite the interest in knowing a priori whether there is any chance of uniquely estimating all model unknown parameters, the structural identifiability analysis for general non-linear dynamic models is still an open question. There is no method amenable to every model; thus, at some point, we have to face the selection of one of the possibilities. This work presents a critical comparison of the currently available techniques. To this end, we perform the structural identifiability analysis of a collection of biological models. The results reveal that the generating series approach, in combination with identifiability tableaus, offers the most advantageous compromise among range of applicability, computational complexity and information provided.

  19. Multivariate normative comparison, a novel method for more reliably detecting cognitive impairment in HIV infection

    NARCIS (Netherlands)

    Su, Tanja; Schouten, Judith; Geurtsen, Gert J.; Wit, Ferdinand W.; Stolte, Ineke G.; Prins, Maria; Portegies, Peter; Caan, Matthan W. A.; Reiss, Peter; Majoie, Charles B.; Schmand, Ben A.

    2015-01-01

    The objective of this study is to assess whether multivariate normative comparison (MNC) improves detection of HIV-1-associated neurocognitive disorder (HAND) as compared with the Frascati and Gisslén criteria. One hundred and three HIV-1-infected men with suppressed viremia on combination
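
In general terms, a multivariate normative comparison refers an individual's whole profile of test scores to a normative sample through a Mahalanobis-type distance, rather than flagging each test separately. The sketch below uses a Hotelling T²-style F test in Python/SciPy; it is an illustrative formulation, not necessarily the exact statistic used in this study.

```python
import numpy as np
from scipy import stats

def mnc_pvalue(subject, norm_scores):
    """Multivariate normative comparison of one subject with a normative sample.

    subject:     (k,) vector of test scores for the individual
    norm_scores: (n, k) matrix of scores for the normative group
    Returns the p-value of a Hotelling T^2-type test for the subject's
    deviation from the normative mean (an illustrative formulation).
    """
    n, k = norm_scores.shape
    mean = norm_scores.mean(axis=0)
    cov = np.cov(norm_scores, rowvar=False)
    # Mahalanobis distance of the profile from the normative centroid.
    d2 = (subject - mean) @ np.linalg.inv(cov) @ (subject - mean)
    t2 = d2 * n / (n + 1)                # single new case vs. a sample of n
    f = t2 * (n - k) / (k * (n - 1))     # convert T^2 to an F statistic
    return 1 - stats.f.cdf(f, k, n - k)
```

Testing the profile as a whole controls the family-wise error that accumulates when many cognitive tests are each compared with univariate norms, which is the motivation for MNC in HAND detection.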

  20. Methods for systematic reviews of health economic evaluations: a systematic review, comparison, and synthesis of method literature.

    Science.gov (United States)

    Mathes, Tim; Walgenbach, Maren; Antoine, Sunya-Lee; Pieper, Dawid; Eikermann, Michaela

    2014-10-01

    The quality of systematic reviews of health economic evaluations (SR-HE) is often limited because of methodological shortcomings. One reason for this poor quality is that there are no established standards for the preparation of SR-HE. The objective of this study is to compare existing methods and suggest best practices for the preparation of SR-HE. To identify the relevant methodological literature on SR-HE, a systematic literature search was performed in Embase, Medline, the National Health System Economic Evaluation Database, the Health Technology Assessment Database, and the Cochrane methodology register, and webpages of international health technology assessment agencies were searched. The study selection was performed independently by 2 reviewers. Data were extracted by one reviewer and verified by a second reviewer. On the basis of the overlaps in the recommendations for the methods of SR-HE in the included papers, suggestions for best practices for the preparation of SR-HE were developed. Nineteen relevant publications were identified. The recommendations within them often differed. However, for most process steps there was some overlap between recommendations for the methods of preparation. The overlaps were taken as basis on which to develop suggestions for the following process steps of preparation: defining the research question, developing eligibility criteria, conducting a literature search, selecting studies, assessing the methodological study quality, assessing transferability, and synthesizing data. The differences in the proposed recommendations are not always explainable by the focus on certain evaluation types, target audiences, or integration in the decision process. Currently, there seem to be no standard methods for the preparation of SR-HE. The suggestions presented here can contribute to the harmonization of methods for the preparation of SR-HE. © The Author(s) 2014.

  1. Conceptualising patient empowerment: a mixed methods study

    NARCIS (Netherlands)

    Bravo, P.; Edwards, A.; Barr, P.J.; Scholl, I.; Elwyn, G.; Mcallister, M.

    2015-01-01

    BACKGROUND: In recent years, interventions and health policy programmes have been established to promote patient empowerment, with a particular focus on patients affected by long-term conditions. However, a clear definition of patient empowerment is lacking, making it difficult to assess

  2. Comparison of the learning of two notations: A pilot study.

    Science.gov (United States)

    Akram, Ashfaq; Fuadfuad, Maher D; Malik, Arshad Mahmood; Nasir Alzurfi, Balsam Mahdi; Changmai, Manah Chandra; Madlena, Melinda

    2017-04-01

    MICAP is a new notation in which the teeth are indicated by letters (I-incisor, C-canine, P-premolar, M-molar) and numbers (1, 2, 3) which are written superscript and subscript on the relevant letters. FDI tooth notation is a two-digit system where one digit indicates the quadrant and the other the tooth within the quadrant. This study aimed to compare the short-term retention of knowledge of the two notation systems (FDI two-digit system and MICAP notation) taught by the lecture method. Undergraduate students (N=80) of three schools participated in a cross-over study. Two theory-driven, classroom-based lectures on MICAP notation and FDI notation were delivered separately. Data were collected using eight randomly selected permanent teeth to be written in MICAP format and FDI format at pretest (before the lecture), post-test I (immediately after the lecture) and post-test II (one week after the lecture). Analysis was done with SPSS version 20.0 using repeated measures ANCOVA and the independent t-test. The results of the pre-test and post-test I were similar for FDI education. Similar results were found between post-test I and post-test II for the MICAP and FDI notations. The study findings indicated that the two notations (FDI and MICAP) were learned and retained equally well. However, the sample size used in this study may not reflect the global scenario. Therefore, we suggest more studies be performed for prospective adaptation of MICAP in the dental curriculum.

  3. A comparison of exogenous and endogenous CEST MRI methods for evaluating in vivo pH.

    Science.gov (United States)

    Lindeman, Leila R; Randtke, Edward A; High, Rachel A; Jones, Kyle M; Howison, Christine M; Pagel, Mark D

    2018-05-01

    Extracellular pH (pHe) is an important biomarker for cancer cell metabolism. Acido-chemical exchange saturation transfer (acidoCEST) MRI uses the contrast agent iopamidol to create spatial maps of pHe. Measurements of amide proton transfer exchange rates (kex) from endogenous CEST MRI were compared to pHe measurements by exogenous acidoCEST MRI to determine whether endogenous kex could be used as a proxy for pHe measurements. Spatial maps of pHe and kex were obtained using exogenous acidoCEST MRI and endogenous CEST MRI analyzed with the omega plot method, respectively, to evaluate mouse kidney, a flank tumor model, and a spontaneous lung tumor model. The pHe and kex results were evaluated using pixelwise comparisons. The kex values obtained from endogenous CEST measurements did not correlate with the pHe results from exogenous CEST measurements. The kex measurements were limited to fewer pixels and had a limited dynamic range relative to pHe measurements. Measurements of kex with endogenous CEST MRI cannot substitute for pHe measurements with acidoCEST MRI. Whereas endogenous CEST MRI may still have good utility for evaluating some specific pathologies, exogenous acidoCEST MRI is more appropriate when evaluating pathologies based on pHe values. Magn Reson Med 79:2766-2772, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. A comparison of uncertainty analysis methods using a groundwater flow model

    International Nuclear Information System (INIS)

    Doctor, P.G.; Jacobson, E.A.; Buchanan, J.A.

    1988-06-01

    This report evaluates three uncertainty analysis methods that are proposed for use in performance assessment activities within the OCRWM and Nuclear Regulatory Commission (NRC) communities. The three methods are Monte Carlo simulation with unconstrained sampling, Monte Carlo simulation with Latin Hypercube sampling, and first-order analysis. Monte Carlo simulation with unconstrained sampling is a generally accepted uncertainty analysis method, but it has the disadvantage of being costly and time consuming. Latin Hypercube sampling was proposed to make Monte Carlo simulation more efficient. Although it was originally formulated for independent variables, which is a major drawback in performance assessment modeling, Latin Hypercube sampling can be used to generate correlated samples. The first-order method is efficient to implement because it is based on the first-order Taylor series expansion; however, there is concern that it does not adequately describe the variability for complex models. These three uncertainty analysis methods were evaluated using a calibrated groundwater flow model of an unconfined aquifer in southern Arizona. The two simulation methods produced similar results, although the Latin Hypercube method tends to produce samples whose estimates of statistical parameters are closer to the desired parameters. The mean travel times for the first-order method do not agree with those of the simulations. In addition, the first-order method produces estimates of variance in travel times that are more variable than those produced by the simulation methods, resulting in nonconservative tolerance intervals. 13 refs., 33 figs.
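
Latin Hypercube sampling, which the report found more efficient than unconstrained Monte Carlo, stratifies each input dimension so that every stratum is sampled exactly once. A short sketch with SciPy's qmc module; the two flow parameters, their ranges and the sample size are invented for illustration:

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube sample of two uncertain flow inputs, e.g. log10 hydraulic
# conductivity and porosity; the bounds here are purely illustrative.
sampler = qmc.LatinHypercube(d=2, seed=7)
unit = sampler.random(n=100)                      # stratified points in [0, 1)^2
scaled = qmc.scale(unit, [-6, 0.05], [-3, 0.35])  # per-column lower/upper bounds
k = 10 ** scaled[:, 0]                            # hydraulic conductivity (m/s)
porosity = scaled[:, 1]
```

Each of the 100 equal-probability strata in each dimension receives exactly one point, which is why far fewer runs are needed than with unconstrained sampling to cover the input space.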

  5. Facilitation Standards: A Mixed Methods Study

    Science.gov (United States)

    Hunter, Jennifer

    2017-01-01

    Online education is increasing as a solution to manage ever increasing enrollment numbers at higher education institutions. Intentionally and thoughtfully constructed courses allow students to improve performance through practice and self-assessment and instructors benefit from improving consistency in providing content and assessing process,…

  6. Measuring Cognitive Load: A Comparison of Self-Report and Physiological Methods

    Science.gov (United States)

    Joseph, Stacey

    2013-01-01

    This study explored three methods to measure cognitive load in a learning environment using four logic puzzles that systematically varied in level of intrinsic cognitive load. Participants' perceived intrinsic load was simultaneously measured with a self-report measure-a traditional subjective measure-and two objective, physiological measures…

  7. Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.

    Science.gov (United States)

    Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D

    2016-04-01

    The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some of them being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method shows biases towards certain families. Information is provided about which sampling technique would be more appropriate to detect or find a particular family.

  8. FEATURES BASED ON NEIGHBORHOOD PIXELS DENSITY - A STUDY AND COMPARISON

    Directory of Open Access Journals (Sweden)

    Satish Kumar

    2016-02-01

    In optical character recognition applications, the feature extraction method(s) used to recognize document images play an important role. Features are properties of the pattern that can be statistical, structural, and/or based on transforms or series expansions. Structural features are difficult to compute, particularly from hand-printed images, but the structure of the strokes present in hand-printed images can be estimated using statistical means. In this paper three features are proposed, based on the distribution of black/white (B/W) pixels in the neighborhood of a pixel in an image. We name these features Spiral Neighbor Density, Layer Pixel Density and Ray Density. The recognition performance of these features has been compared with two further features, Neighborhood Pixels Weight and Total Distances in Four Directions, already studied in our earlier work. We used more than 20000 Devanagari handwritten character images for the experiments, which were conducted with two classifiers, i.e. PNN and k-NN.
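
    The abstract does not give the exact definitions of the three proposed features, so the following sketch computes a generic black/white neighborhood pixel density, the basic quantity such features are built from; the 3x3 window and the tiny sample image are illustrative assumptions, not the paper's definitions.

```python
def neighborhood_density(image, r, c):
    """Fraction of black (1) pixels in the 3x3 neighborhood of (r, c)."""
    rows, cols = len(image), len(image[0])
    total, black = 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                total += 1
                black += image[rr][cc]
    return black / total

# A tiny 5x5 binary character image (1 = black stroke pixel).
img = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
print(round(neighborhood_density(img, 1, 2), 3))  # density at the stroke centre
```

    A feature vector for a classifier such as PNN or k-NN would collect these densities over a set of sample points or zones of the image.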

  9. A Critical Comparison of Methods for the Analysis of Indigo in Dyeing Liquors and Effluents

    Directory of Open Access Journals (Sweden)

    Valentina Buscio

    2014-08-01

    Indigo is one of the most important dyes in the textile industry. The control of the indigo concentration in dyeing liquors and effluents is an important tool to ensure the reproducibility of the dyed fabrics and also to establish the efficiency of the wastewater treatment. In this work, three analytical methods were studied and validated with the aim of selecting a reliable, fast and automated method for indigo dye determination. The first method is based on the extraction of the dye, with chloroform, in its oxidized form; the organic solution is measured by ultraviolet (UV)-visible spectrophotometry at 604 nm. The second method determines the concentration of indigo in its leuco form in aqueous medium by UV-visible spectrophotometry at 407 nm. Finally, in the last method, the concentration of indigo is determined by redox titration with potassium hexacyanoferrate(III), K3[Fe(CN)6]. The results indicated that the three methods studied met the established acceptance criteria regarding accuracy and precision. However, the third method was considered the most adequate for application on an industrial scale due to its wider working range, which provides a significant advantage over the others.
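
    The two spectrophotometric methods rest on a linear (Beer-Lambert) relationship between absorbance and indigo concentration over the working range. A minimal sketch of reading a concentration off such a calibration line; the calibration slope below is a hypothetical value, not taken from the paper.

```python
def concentration_from_absorbance(absorbance, slope, intercept=0.0):
    """Invert a linear calibration A = slope * c + intercept."""
    return (absorbance - intercept) / slope

# Hypothetical calibration at 604 nm (oxidized indigo in chloroform):
# A = 0.025 * c, with c in mg/L and intercept ~ 0.
a_604 = 0.50
c_mg_per_l = concentration_from_absorbance(a_604, slope=0.025)
print(round(c_mg_per_l, 1))  # concentration in mg/L
```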

  10. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  11. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested on the same dataset: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, and the sensitivity and specificity of each model were computed. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further improvement can be achieved by testing additional methodological refinements of the machine learning methods.
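
    The evaluation protocol described above (mean accuracy over a 10-fold cross-validation) can be sketched as follows; the 1-nearest-neighbour classifier and the synthetic two-class data are stand-ins for the paper's four methods and its entrepreneurial-intentions dataset.

```python
import random

random.seed(0)

def make_data(n=200):
    # synthetic two-class data: 3 features shifted by the class label
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = [random.gauss(2.0 * label, 1.0) for _ in range(3)]
        data.append((x, label))
    return data

def one_nn_predict(train, x):
    # nearest neighbour by squared Euclidean distance
    best = min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
    return best[1]

def cross_val_accuracy(data, k=10):
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test = folds[i]
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        correct = sum(one_nn_predict(train, x) == y for x, y in test)
        accs.append(correct / len(test))
    return sum(accs) / k

data = make_data()
print(round(cross_val_accuracy(data), 3))  # mean accuracy over the 10 folds
```

    The paper's pairwise t-test would then be applied to the per-fold accuracies of two competing methods.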

  12. Magnetic Resonance Imaging in the measurement of whole body muscle mass: A comparison of interval gap methods

    International Nuclear Information System (INIS)

    Hellmanns, K.; McBean, K.; Thoirs, K.

    2015-01-01

    Purpose: Magnetic Resonance Imaging (MRI) is commonly used in body composition research to measure whole body skeletal muscle mass (SM). MRI calculation methods of SM can vary by analysing the images at different slice intervals (or interval gaps) along the length of the body. This study compared SM measurements made from MRI images of apparently healthy individuals using different interval gap methods to determine the error associated with each technique. It was anticipated that the results would inform researchers of optimum interval gap measurements to detect a predetermined minimum change in SM. Methods: A method comparison study was used to compare eight interval gap methods (interval gaps of 40, 50, 60, 70, 80, 100, 120 and 140 mm) against a reference 10 mm interval gap method for measuring SM from twenty MRI image sets acquired from apparently healthy participants. Pearson product-moment correlation analysis was used to determine the association between methods. Total error was calculated as the sum of the bias (systematic error) and the random error (limits of agreement) of the mean differences. Percentage error was used to demonstrate proportional error. Results: Pearson product-moment correlation analysis between the reference method and all interval gap methods demonstrated strong and significant associations (r > 0.99, p < 0.0001). The 40 mm interval gap method was comparable with the 10 mm interval reference method and had a low error (total error 0.95 kg, −3.4%). Analysis methods using wider interval gap techniques demonstrated larger errors than reported for dual-energy x-ray absorptiometry (DXA), a technique which is more available, less expensive, and less time consuming than MRI analysis of SM. Conclusions: Researchers using MRI to measure SM can be confident in using a 40 mm interval gap technique when analysing the images to detect minimum changes less than 1 kg. The use of wider intervals will introduce error that is no better than that of DXA.
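
    The error decomposition used in the study (total error as the absolute bias plus the random error given by the 95% limits of agreement of the differences) can be sketched as below; the paired muscle-mass values are illustrative, not the study's data.

```python
import statistics

def total_error(reference, test):
    """Total error = |bias| + 1.96 * SD of the paired differences."""
    diffs = [t - r for r, t in zip(reference, test)]
    bias = statistics.mean(diffs)
    random_error = 1.96 * statistics.stdev(diffs)
    return abs(bias) + random_error

sm_10mm = [28.1, 30.4, 25.9, 32.2, 27.5]  # kg, reference 10 mm method
sm_40mm = [28.4, 30.1, 26.3, 32.6, 27.8]  # kg, 40 mm interval gap method
print(round(total_error(sm_10mm, sm_40mm), 2))  # total error in kg
```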

  13. Comparison of some effects of modification of a polylactide surface layer by chemical, plasma, and laser methods

    Energy Technology Data Exchange (ETDEWEB)

    Moraczewski, Krzysztof, E-mail: kmm@ukw.edu.pl [Department of Materials Engineering, Kazimierz Wielki University, Department of Materials Engineering, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Rytlewski, Piotr [Department of Materials Engineering, Kazimierz Wielki University, Department of Materials Engineering, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Malinowski, Rafał [Institute for Engineering of Polymer Materials and Dyes, ul. M. Skłodowskiej–Curie 55, 87-100 Toruń (Poland); Żenkiewicz, Marian [Department of Materials Engineering, Kazimierz Wielki University, Department of Materials Engineering, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland)

    2015-08-15

    Highlights: • We modified polylactide surface layer with chemical, plasma or laser methods. • We tested selected properties and surface structure of modified samples. • We stated that the plasma treatment appears to be the most beneficial. - Abstract: The article presents the results of studies and comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in the air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with fluence of 60 mJ/cm² was applied. Polylactide samples were examined by using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of chemical composition of the PLA surface layer were performed as well. Based on the survey it was found that the best modification results are obtained using the plasma method.

  14. Evaluating methods for estimating home ranges using GPS collars: A comparison using proboscis monkeys (Nasalis larvatus).

    Science.gov (United States)

    Stark, Danica J; Vaughan, Ian P; Ramirez Saldivar, Diana A; Nathan, Senthilvel K S S; Goossens, Benoit

    2017-01-01

    The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24-165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences that disturbances could have on an animal, and accordingly be used to assist in the management and restoration of degraded landscapes.
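
    Of the methods compared, the grid-cell method is the simplest to sketch: overlay a regular grid on the GPS fixes and sum the area of the occupied cells. The coordinates and the 100 m cell size below are illustrative assumptions, not the study's data.

```python
def grid_cell_home_range(fixes, cell_size=100.0):
    """Home range in hectares from the number of occupied grid cells."""
    occupied = {(int(x // cell_size), int(y // cell_size)) for x, y in fixes}
    area_m2 = len(occupied) * cell_size ** 2
    return area_m2 / 10_000.0  # m^2 -> ha

# GPS fixes in metres (projected coordinates)
fixes = [(10, 20), (15, 40), (120, 30), (130, 35), (250, 400), (255, 410)]
print(grid_cell_home_range(fixes))  # 3 occupied cells -> 3.0 ha
```

    Its simplicity is also its weakness: the estimate depends strongly on cell size and ignores the temporal correlation that methods such as biased random bridges exploit.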

  16. Evaluating resective surgery targets in epilepsy patients: A comparison of quantitative EEG methods.

    Science.gov (United States)

    Müller, Michael; Schindler, Kaspar; Goodfellow, Marc; Pollo, Claudio; Rummel, Christian; Steimer, Andreas

    2018-05-18

    Quantitative analysis of intracranial EEG is a promising tool to assist clinicians in the planning of resective brain surgery in patients suffering from pharmacoresistant epilepsies. Quantifying the accuracy of such tools, however, is nontrivial as a ground truth to verify predictions about hypothetical resections is missing. As one possibility to address this, we use customized hypotheses tests to examine the agreement of the methods on a common set of patients. One method uses machine learning techniques to enable the predictive modeling of EEG time series. The other estimates nonlinear interrelation between EEG channels. Both methods were independently shown to distinguish patients with excellent post-surgical outcome (Engel class I) from those without improvement (Engel class IV) when assessing the electrodes associated with the tissue that was actually resected during brain surgery. Using the AND and OR conjunction of both methods we evaluate the performance gain that can be expected when combining them. Both methods' assessments correlate strongly positively with the similarity between a hypothetical resection and the corresponding actual resection in class I patients. Moreover, the Spearman rank correlation between the methods' patient rankings is significantly positive. To our best knowledge, this is the first study comparing surgery target assessments from fundamentally differing techniques. Although conceptually completely independent, there is a relation between the predictions obtained from both methods. Their broad consensus supports their application in clinical practice to provide physicians additional information in the process of presurgical evaluation. Copyright © 2018 Elsevier B.V. All rights reserved.
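
    The agreement statistic reported above, a Spearman rank correlation between the two methods' patient rankings, can be computed from scratch as follows; the per-patient scores are illustrative, not the study's data.

```python
def rankdata(values):
    """Ranks (1-based), with ties given their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

method_a = [0.9, 0.7, 0.8, 0.3, 0.5]  # per-patient assessment scores
method_b = [0.8, 0.6, 0.9, 0.2, 0.4]
print(round(spearman(method_a, method_b), 3))  # 0.9
```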

  17. A comparison between two methods of measuring total fat in the Iranian soldiers

    Directory of Open Access Journals (Sweden)

    J. Rahmani

    2017-04-01

    Background: Constant checkup and control of body fat mass is an important parameter for the health and efficiency of individuals in any society. This parameter is especially crucial for army soldiers, since physical fitness plays a key role in reaching high physical performance, health, and survival in war. Objective: This study was designed to compare two methods of measuring fat in Iranian soldiers: circumference-based military equations (CBEs) for estimating body fat mass versus a skinfold thickness-based equation (SBE). Methods: This cross-sectional study was conducted on 246 Iranian soldiers recruited in Tehran (2016). Height, waist, and neck circumference were measured and the total body fat mass was calculated using CBEs. Then, using Pearson's correlation and Bland-Altman methods, the results were compared with Jackson and Pollock's skinfold thickness measurement. Findings: The total body fat mass of the soldiers was 18.94±6.30% using CBEs and 17.43±4.45% using Jackson and Pollock's skinfold thickness formula. The correlation between the two methods was r=0.984 and the SEE was 1.1% (P<0.001). Conclusion: Greater body fat increases the error of the waist-circumference-based method. The error is large enough that this method should not be used to measure body fat.

  18. Computer game-based and traditional learning method: a comparison regarding students’ knowledge retention

    Directory of Open Access Journals (Sweden)

    Rondon Silmara

    2013-02-01

    Background: Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck anatomy and physiology to Speech-Language and Hearing pathology undergraduate students. Methods: Students were randomized to one of the two learning methods and the data analyst was blinded to which method each student had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention, and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for anatomy questions and physiology questions. Results: Students who received the game-based method performed better in the post-test assessment only for the anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test for the anatomy and physiology questions. Conclusions: The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective in improving students' short- and long-term knowledge retention.

  19. Comparison of first order analysis and Monte Carlo methods in evaluating groundwater model uncertainty: a case study from an iron ore mine in the Pilbara Region of Western Australia

    Science.gov (United States)

    Firmani, G.; Matta, J.

    2012-04-01

    The expansion of mining in the Pilbara region of Western Australia is resulting in the need to develop better water strategies to make below water table resources accessible, manage surplus water and deal with water demands for processing ore and construction. In all these instances, understanding the local and regional hydrogeology is fundamental to allow sustainable mining; minimising the impacts to the environment. An understanding of the uncertainties of the hydrogeology is necessary to quantify the risks and make objective decisions rather than relying on subjective judgements. The aim of this paper is to review some of the methods proposed by the published literature and find approaches that can be practically implemented in an attempt to estimate model uncertainties. In particular, this paper adopts two general probabilistic approaches that address the parametric uncertainty estimation and its propagation in predictive scenarios: the first order analysis and Monte Carlo simulations. A case example application of the two techniques is also presented for the dewatering strategy of a large below water table open cut iron ore mine in the Pilbara region of Western Australia. This study demonstrates the weakness of the deterministic approach, as the coefficients of variation of some model parameters were greater than 1.0; and suggests a review of the model calibration method and conceptualisation. The uncertainty propagation into predictive scenarios was calculated assuming the parameters with a coefficient of variation higher than 0.25 as deterministic, due to computational difficulties to achieve an accurate result with the Monte Carlo method. The conclusion of this case study was that the first order analysis appears to be a successful and simple tool when the coefficients of variation of calibrated parameters are less than 0.25.
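
    The two propagation approaches can be contrasted on a toy nonlinear model y = exp(k): first order analysis propagates the parameter variance through the local derivative, while Monte Carlo samples the parameter directly. The parameter mean and its coefficient of variation below are illustrative (chosen under the 0.25 threshold discussed in the abstract), not values from the mine model.

```python
import math
import random

random.seed(1)

k_mean, k_cv = 1.0, 0.2  # CV = 0.2, below the 0.25 threshold
k_sd = k_cv * k_mean

# First order analysis: var(y) ~ (dy/dk)^2 * var(k), with dy/dk = exp(k_mean)
foa_sd = math.exp(k_mean) * k_sd

# Monte Carlo: sample k, evaluate the model, take the empirical SD
samples = [math.exp(random.gauss(k_mean, k_sd)) for _ in range(20000)]
mc_mean = sum(samples) / len(samples)
mc_sd = (sum((s - mc_mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5

print(round(foa_sd, 3), round(mc_sd, 3))  # the two estimates should be close
```

    As the parameter CV grows, the linearisation behind first order analysis degrades and the two estimates diverge, which is the case study's motivation for treating high-CV parameters separately.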

  20. Microscopy outperformed in a comparison of five methods for detecting Trichomonas vaginalis in symptomatic women.

    Science.gov (United States)

    Nathan, B; Appiah, J; Saunders, P; Heron, D; Nichols, T; Brum, R; Alexander, S; Baraitser, P; Ison, C

    2015-03-01

    In the UK, despite its low sensitivity, wet mount microscopy is often the only method of detecting Trichomonas vaginalis infection. A study was conducted in symptomatic women to compare the performance of five methods for detecting T. vaginalis: an in-house polymerase chain reaction (PCR); Aptima T. vaginalis kit; OSOM® Trichomonas Rapid Test; culture and microscopy. Symptomatic women underwent routine testing; microscopy and further swabs were taken for molecular testing, OSOM and culture. A true positive was defined as a sample that was positive for T. vaginalis by two or more different methods. Two hundred and forty-six women were recruited: 24 patients were positive for T. vaginalis by two or more different methods. Of these 24 patients, 21 patients were detected by real-time PCR (sensitivity 88%); 22 patients were detected by the Aptima T. vaginalis kit (sensitivity 92%); 22 patients were detected by OSOM (sensitivity 92%); nine were detected by wet mount microscopy (sensitivity 38%); and 21 were detected by culture (sensitivity 88%). Two patients were positive by just one method and were not considered true positives. All the other detection methods had a sensitivity to detect T. vaginalis that was significantly greater than wet mount microscopy, highlighting the number of cases that are routinely missed even in symptomatic women if microscopy is the only diagnostic method available. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
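
    The sensitivity figures quoted above follow directly from the study's true-positive definition (detection by two or more methods); a minimal sketch using the counts reported in the abstract.

```python
def sensitivity(detected, true_positives):
    """Sensitivity = detections among the panel of true positives."""
    return detected / true_positives

true_positives = 24  # samples positive by two or more methods
detected = {
    "in-house PCR": 21,
    "Aptima": 22,
    "OSOM": 22,
    "microscopy": 9,
    "culture": 21,
}
for method, n in detected.items():
    print(method, f"{sensitivity(n, true_positives):.0%}")
```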

  1. COMPARISON OF RECENTLY USED PHACOEMULSIFICATION SYSTEMS USING A HEALTH TECHNOLOGY ASSESSMENT METHOD.

    Science.gov (United States)

    Huang, Jiannan; Wang, Qi; Zhao, Caimin; Ying, Xiaohua; Zou, Haidong

    2017-01-01

    To compare the recently used phacoemulsification systems using a health technology assessment (HTA) model. A self-administered questionnaire, which included questions to gauge opinions of the recently used phacoemulsification systems, was distributed to the chief cataract surgeons in the departments of ophthalmology of eighteen tertiary hospitals in Shanghai, China. A series of senile cataract patients undergoing phacoemulsification surgery were enrolled in the study. The surgical results and the average costs related to their surgeries were all recorded and compared for the recently used phacoemulsification systems. The four phacoemulsification systems currently used in Shanghai are the Infiniti Vision, Centurion Vision, WhiteStar Signature, and Stellaris Vision Enhancement systems. All of the doctors confirmed that the systems they used would help cataract patients recover vision. A total of 150 cataract patients who underwent phacoemulsification surgery were enrolled in the present study. A significant difference was found among the four groups in cumulative dissipated energy, with the lowest value found in the Centurion group. No serious complications were observed and a positive trend in visual acuity was found in all four groups after cataract surgery. The highest total cost of surgery was associated with procedures conducted using the Centurion Vision system, and significant differences between systems were mainly because of the cost of the consumables used in the different surgeries. This HTA comparison of four recently used phacoemulsification systems found that each system offers a satisfactory vision recovery outcome, but the systems differ in surgical efficacy and costs.

  2. Comparison of a Material Point Method and a Galerkin Meshfree Method for the Simulation of Cohesive-Frictional Materials

    Directory of Open Access Journals (Sweden)

    Ilaria Iaconeta

    2017-09-01

    The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM) and a Galerkin Meshfree Method (GMM) are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literature are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to, rather than the differences from, “standard” Updated Lagrangian (UL) approaches commonly employed by the Finite Elements (FE) community. Although both methods are able to give a good prediction, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree nature, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.

  3. Search and foraging behaviors from movement data: A comparison of methods.

    Science.gov (United States)

    Bennison, Ashley; Bearhop, Stuart; Bodey, Thomas W; Votier, Stephen C; Grecian, W James; Wakefield, Ewan D; Hamer, Keith C; Jessopp, Mark

    2018-01-01

    Search behavior is often used as a proxy for foraging effort within studies of animal movement, despite it being only one part of the foraging process, which also includes prey capture. While methods for validating prey capture exist, many studies rely solely on behavioral annotation of animal movement data to identify search and infer prey capture attempts. However, the degree to which search correlates with prey capture is largely untested. This study applied seven behavioral annotation methods to identify search behavior from GPS tracks of northern gannets (Morus bassanus), and compared outputs to the occurrence of dives recorded by simultaneously deployed time-depth recorders. We tested how behavioral annotation methods vary in their ability to identify search behavior leading to dive events. There was considerable variation in the number of dives occurring within search areas across methods. Hidden Markov models proved to be the most successful, with 81% of all dives occurring within areas identified as search. k-means clustering and first passage time had the highest rates of dives occurring outside identified search behavior. First passage time and hidden Markov models had the lowest rates of false positives, identifying fewer search areas with no dives. All behavioral annotation methods had advantages and drawbacks in terms of the complexity of analysis and ability to reflect prey capture events while minimizing the number of false positives and false negatives. We used these results, with consideration of analytical difficulty, to provide advice on the most appropriate methods for use where prey capture behavior is not available. This study highlights a need to critically assess and carefully choose a behavioral annotation method suitable for the research question being addressed, or resulting species management frameworks established.

  4. Comparison of two down-scaling methods for climate study and climate change on the mountain areas in France

    International Nuclear Information System (INIS)

    Piazza, Marie; Page, Christian; Sanchez-Gomez, Emilia; Terray, Laurent; Deque, Michel

    2013-01-01

    Mountain regions are highly vulnerable to climate change and are likely to be among the areas most impacted by global warming. But climate projections for the end of the 21st century are developed with general circulation models of climate, which do not have sufficient horizontal resolution to accurately evaluate the impacts of warming on these regions. Several techniques are therefore used to perform a spatial down-scaling (on the order of 10 km). There are two categories of down-scaling methods: dynamical methods, which require significant computational resources for the achievement of high-resolution regional climate simulations, and statistical methods, which require few resources but need an observation dataset covering a long period and of good quality. In this study, climate projections over France from the global atmospheric model ARPEGE are down-scaled with a dynamical method, performed with the ALADIN-Climate regional model, and a statistical method, performed with the software DSClim developed at CERFACS. The two down-scaling methods are presented and the results for the climate of the French mountains are evaluated for the current climate. Both methods give similar results for average snowfall. However, extreme events of total precipitation (droughts, intense precipitation events) are largely underestimated by the statistical method. Then, the results of both methods are compared for two future climate projections, according to the greenhouse gas emissions scenario A1B of the IPCC. The two methods agree on fewer frost days, a significant decrease in the amounts of solid precipitation and an average increase in the percentage of dry days of more than 10%. The results obtained for Corsica are more heterogeneous, but they are questionable because the reduced spatial domain is probably not very relevant regarding statistical sampling. (authors)

  5. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    Science.gov (United States)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using the results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service-induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size, as well as the 90/95 percent crack length, vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimation (MLE) method does not require such information, and its POD results are more reasonable. The log-logistic function appears to model the POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method; although it is more complicated and slower to calculate, it can be implemented in a common spreadsheet program.
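    The hit/miss POD fit described above can be sketched as a maximum-likelihood fit of a log-logistic model. The crack lengths, outcomes and grid-search optimizer below are illustrative assumptions, not the referenced inspection data:

```python
import numpy as np

# Hypothetical hit/miss inspection data: crack lengths (mm) and outcomes (1 = detected).
a = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 4.0])
hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

def log_likelihood(mu, sigma):
    # Log-logistic POD model: POD(a) = 1 / (1 + exp(-(ln a - mu) / sigma))
    z = (np.log(a) - mu) / sigma
    p = np.clip(1.0 / (1.0 + np.exp(-z)), 1e-12, 1.0 - 1e-12)
    return np.sum(hit * np.log(p) + (1 - hit) * np.log(1.0 - p))

# Coarse grid search for the maximum-likelihood estimates; a real analysis would
# use a proper optimizer and attach 95% confidence bounds to the fitted curve.
mus = np.linspace(-1.0, 2.0, 121)
sigmas = np.linspace(0.05, 2.0, 79)
ll = np.array([[log_likelihood(m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
mu_hat, sigma_hat = mus[i], sigmas[j]

def pod(x):
    return 1.0 / (1.0 + np.exp(-(np.log(x) - mu_hat) / sigma_hat))

# Crack length at which the fitted POD reaches 90% (the "a90" value).
a90 = np.exp(mu_hat + sigma_hat * np.log(9.0))
```

    Deriving a 90/95 value would additionally require a lower confidence bound on the fitted curve, which this sketch omits.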

  6. Radioimmunoassay for chicken avidin. Comparison with a (¹⁴C)biotin-binding method

    Energy Technology Data Exchange (ETDEWEB)

    Kulomaa, M S; Elo, H A; Tuohimaa, P J [Tampere Univ. of Tech. (Finland)]

    1978-11-01

    A double-antibody solid-phase radioimmunoassay for chicken avidin is reported. Avidin was labelled with ¹²⁵I by the chloramine-T method. The bound and free avidin were separated with a second antibody bound to a solid matrix. In the logit-log scale the standard curve was linear from 1-2 to 100-200 ng of avidin/ml. Cross-reaction of ovalbumin was less than 0.015%. Saturation of the biotin-binding sites of avidin with an excess of biotin decreased radioimmunoassay values by about 15%. Recovery studies indicated that avidin can be assayed from all chicken tissues studied with radioimmunoassay, whereas the (¹⁴C)biotin/bentonite method gave poor recoveries for avidin in the liver and kidney. Radioimmunoassay and the (¹⁴C)biotin/bentonite method gave similar concentrations for oviduct avidin.

  7. Characterization of arterial stenosis using 3D imaging: comparison between three imaging techniques (MRA, spiral CTA and 3D DSA) and four display methods (MIP, SR, MPVR, VA) in a phantom study

    International Nuclear Information System (INIS)

    Bendib, K.; Poirier, C.; Croisille, P.; Roux, J.P.; Devel, D.; Amiel, M.

    1999-01-01

    Introduction: accurate assessment of arterial stenosis is a major public health issue for the diagnosis and treatment of cardiovascular diseases. The number of imaging techniques and types of software for the display of imaging data is increasing, yet few studies comparing these different techniques are available in the literature. Materials and methods: using phantoms to reproduce the main types of arterial stenosis, the authors compared three 3D acquisition techniques (MRA, CTA, and 3D DSA) and four types of display methods (MIP, SR, MPVR, and VA). The degree, shape, and location of the different types of stenoses were analyzed by three experienced observers during two successive readings. Intra- and inter-observer reproducibility were assessed. The results of the various acquisition techniques and display methods were also compared to the digital reference data (CFAO) of the physical phantoms. Results: intra- and inter-observer reproducibility for the assessment of the shape and location of the stenoses was good. Visual assessment of the degree of stenosis showed significant differences between two observers as well as between two readings by one observer. 3D DSA was the most accurate technique for assessing the degree of stenosis, and CTA provided better results than MRA. MPVR provided an accurate assessment of the degree of stenosis. 3D DSA and CTA assessed stenosis shape and localization adequately, with no significant difference; both methods appeared to be more accurate than MRA. SR provided the best information on the eccentric nature of a stenosis, while shape was very well assessed by VA and MPVR. Conclusions: even though 3D DSA is the most accurate acquisition technique for visualization, the combined use of SR and MPVR appears to be the best compromise for describing the morphology and degree of stenosis. Further improvements in automatic 3D image processing could offer a better understanding and increased possibilities for assessing arterial stenoses.

  8. A multi-center comparison of diagnostic methods for the biochemical evaluation of suspected mitochondrial disorders

    NARCIS (Netherlands)

    Rodenburg, R.J.T.; Schoonderwoerd, G.C.; Tiranti, V.; Taylor, R.W.; Rotig, A.; Valente, L.; Invernizzi, F.; Chretien, D.; He, L.; Backx, G.P.; Janssen, K.J.; Chinnery, P.F.; Smeets, H.J.M.; Coo, I.F. de; Heuvel, L.P. van den

    2013-01-01

    A multicenter comparison of mitochondrial respiratory chain and complex V enzyme activity tests was performed. The average reproducibility of the enzyme assays is 16% in human muscle samples. In a blinded diagnostic accuracy test in patient fibroblasts and SURF1 knock-out mouse muscle, each lab made

  9. Measuring diet cost at the individual level: a comparison of three methods.

    Science.gov (United States)

    Monsivais, P; Perrigue, M M; Adams, S L; Drewnowski, A

    2013-11-01

    Household-level food spending data are not suitable for population-based studies of the economics of nutrition. This study compared three methods of deriving diet cost at the individual level. Adult men and women (n=164) completed 4-day diet diaries and a food frequency questionnaire (FFQ). Food expenditures over 4 weeks and supermarket prices for 384 foods were obtained. Diet costs (US$/day) were estimated using: (1) diet diaries and expenditures; (2) diet diaries and supermarket prices; and (3) FFQs and supermarket prices. Agreement between the three methods was assessed on the basis of Pearson correlations and limits of agreement. Income-related differences in diet costs were estimated using general linear models. Diet diaries yielded mean (s.d.) diet costs of $10.04 (4.27) based on Method 1 and $8.28 (2.32) based on Method 2. FFQs yielded mean diet costs of $7.66 (2.72) based on Method 3. Correlations between energy intakes and costs were highest for Method 3 (r²=0.66), lower for Method 2 (r²=0.24) and lowest for Method 1 (r²=0.06). Cost estimates were significantly associated with household incomes. The weak association between food expenditures and food intake using Method 1 makes it least suitable for diet and health research. However, merging supermarket food prices with standard dietary assessment tools can provide estimates of individual diet cost that are more closely associated with food consumed. The derivation of individual diet cost can provide insights into some of the economic determinants of food choice, diet quality and health.
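    The agreement analysis described above (Pearson correlation plus Bland-Altman limits of agreement) can be sketched as follows. The per-person diet costs are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical per-person diet costs (US$/day) from two of the three methods.
cost_m1 = np.array([9.8, 11.2, 8.5, 12.0, 10.4, 7.9, 9.1, 13.3])
cost_m2 = np.array([8.9, 10.1, 8.0, 10.8, 9.6, 7.5, 8.8, 11.9])

# Pearson correlation between the two methods.
r = np.corrcoef(cost_m1, cost_m2)[0, 1]

# Bland-Altman limits of agreement: mean difference +/- 1.96 SD of differences.
diff = cost_m1 - cost_m2
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
```

    A high correlation with a non-zero bias, as in this toy example, is exactly the situation where limits of agreement add information that the correlation alone hides.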

  10. Computer game-based and traditional learning method: a comparison regarding students' knowledge retention.

    Science.gov (United States)

    Rondon, Silmara; Sassi, Fernanda Chiarion; Furquim de Andrade, Claudia Regina

    2013-02-25

    Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a shift from traditional passive learning methodologies to an active, multisensory, experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Students were randomized to one of the two learning methods, and the data analyst was blinded to which method each student had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance across the three moments of assessment was compared, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Students who received the game-based method performed better in the post-test assessment only for the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test for the Anatomy and Physiology questions. The game-based learning method is comparable to the traditional learning method in overall and short-term gains, while the traditional lecture still seems to be more effective for improving students' short- and long-term knowledge retention.

  11. Technical brief: a comparison of two methods of euthanasia on retinal dopamine levels.

    Science.gov (United States)

    Hwang, Christopher K; Iuvone, P Michael

    2013-01-01

    Mice are commonly used in biomedical research, and euthanasia is an important part of mouse husbandry. Approved, humane methods of euthanasia are designed to minimize the potential for pain or discomfort, but may also influence the measurement of experimental variables. We compared the effects of two approved methods of mouse euthanasia on the levels of retinal dopamine. We examined the level of retinal dopamine, a commonly studied neuromodulator, following euthanasia by carbon dioxide (CO₂)-induced asphyxiation or by cervical dislocation. We found that the level of retinal dopamine in mice euthanized through CO₂ overdose substantially differed from that in mice euthanized through cervical dislocation. The use of CO₂ as a method of euthanasia could result in an experimental artifact that could compromise results when studying labile biologic processes.

  12. Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.

    Science.gov (United States)

    Gremba, Allison; Weinberg, Seth M

    2018-05-09

    We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) to capture digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods) suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
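    The technical error of measurement (TEM) used above as an agreement statistic can be computed directly from paired repeated measurements; the digit lengths below are hypothetical:

```python
import numpy as np

# Hypothetical repeated second-digit lengths (mm) for one observer, two sessions.
trial1 = np.array([68.2, 71.5, 65.9, 74.1, 69.8, 72.4])
trial2 = np.array([68.6, 71.1, 66.3, 73.8, 70.2, 72.0])

# Technical error of measurement for paired repeats: sqrt(sum(d^2) / (2n)).
d = trial1 - trial2
tem = np.sqrt(np.sum(d**2) / (2 * len(d)))

# Relative TEM (%) expresses the error against the grand mean of all measurements.
rel_tem = 100 * tem / np.concatenate([trial1, trial2]).mean()
```

    With these toy repeats the TEM is well under 1 mm, the same order reported in the abstract for digit-length measurements.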

  13. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods.

    Directory of Open Access Journals (Sweden)

    S N R Shah

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for avoiding sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs differs, and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on the variation in column thickness, beam depth and number of tabs in the beam end connector, in order to investigate the most influential factors affecting connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study of connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To identify the most appropriate method, the mean stiffness of all tested connections and the variance in mean stiffness values for all three methods were calculated. The initial stiffness method is considered to overestimate stiffness when compared to the other two methods. The equal area method provided the most consistent stiffness values and the lowest variance in the data set.
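    The three stiffness definitions compared above can be sketched for a measured M-θ curve. The data are illustrative, and the equal-area formulation shown (a secant line enclosing the same energy as the curve) is one common interpretation, not necessarily the authors' exact procedure:

```python
import numpy as np

# Hypothetical moment-rotation test data: rotation (rad) and moment (kN m).
theta = np.array([0.0, 0.005, 0.010, 0.020, 0.035, 0.055, 0.080])
M = np.array([0.0, 1.20, 2.10, 3.30, 4.20, 4.80, 5.00])

# Initial stiffness: slope of the first loading segment of the M-theta curve.
k_initial = (M[1] - M[0]) / (theta[1] - theta[0])

# Slope to half-ultimate moment: secant through the point where M = M_u / 2
# (np.interp is valid here because M increases monotonically).
m_half = M.max() / 2.0
theta_half = np.interp(m_half, M, theta)
k_half = m_half / theta_half

# Equal area method (one common formulation): the secant line k * theta that
# encloses the same energy as the measured curve up to the ultimate rotation,
# i.e. 0.5 * k * theta_u**2 equals the area under the M-theta curve.
area_curve = np.sum(0.5 * (M[1:] + M[:-1]) * np.diff(theta))
k_equal_area = 2.0 * area_curve / theta[-1] ** 2
```

    For a softening curve like this one, the initial stiffness is the largest of the three estimates, consistent with the abstract's observation that the initial stiffness method overestimates connection stiffness.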

  14. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods

    Science.gov (United States)

    Shah, S. N. R.; Sulong, N. H. Ramli; Shariati, Mahdi; Jumaat, M. Z.

    2015-01-01

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for avoiding sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs differs, and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on the variation in column thickness, beam depth and number of tabs in the beam end connector, in order to investigate the most influential factors affecting connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study of connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To identify the most appropriate method, the mean stiffness of all tested connections and the variance in mean stiffness values for all three methods were calculated. The initial stiffness method is considered to overestimate stiffness when compared to the other two methods. The equal area method provided the most consistent stiffness values and the lowest variance in the data set. PMID:26452047

  15. Trends in Suicide Methods and Rates among Older Adults in South Korea: A Comparison with Japan.

    Science.gov (United States)

    Park, Subin; Lee, Hochang Benjamin; Lee, Su Yeon; Lee, Go Eun; Ahn, Myung Hee; Yi, Ki Kyoung; Hong, Jin Pyo

    2016-03-01

    Lethality of the method chosen for a suicide attempt is a strong risk factor for completed suicide. We examined whether annual changes in the pattern of suicide methods are related to annual changes in suicide rates among older adults in South Korea and Japan. We analyzed annual World Health Organization data on rates and methods of suicide from 2000 to 2011 in South Korea and Japan. For Korean older adults, there was a significant positive correlation between the suicide rate and the rate of hanging or the rate of jumping, and a significant negative correlation between the suicide rate and the rate of poisoning. Among older adults in Japan, annual changes in the suicide rate and in the pattern of suicide methods were less conspicuous, and no correlation was found between them. The results of the present study suggest that the increasing use of lethal suicide methods has contributed to the rise in suicide rates among older adults in South Korea. Targeted efforts to reduce the social acceptability and accessibility of lethal suicide methods might lead to lower suicide rates among older adults in South Korea.

  16. Revealing barriers and facilitators to use a new genetic test: comparison of three user involvement methods.

    Science.gov (United States)

    Rhebergen, Martijn D F; Visser, Maaike J; Verberk, Maarten M; Lenderink, Annet F; van Dijk, Frank J H; Kezic, Sanja; Hulshof, Carel T J

    2012-10-01

    We compared three common user involvement methods in revealing barriers and facilitators from intended users that might influence their use of a new genetic test. The study was part of the development of a new genetic test of susceptibility to hand eczema for nurses. Eighty student nurses participated in five focus groups (n = 33), 15 interviews (n = 15) or questionnaires (n = 32). For each method, data were collected until saturation. We compared the mean number of items and relevant remarks that could influence the use of the genetic test obtained per method, divided by the number of participants in that method. Thematic content analysis was performed using MAXQDA software. The focus groups revealed 30 unique items, compared to 29 in the interviews and 21 in the questionnaires. The interviews produced more items and relevant remarks per participant (1.9 and 8.4) than focus groups (0.9 and 4.8) or questionnaires (0.7 and 2.3). All three involvement methods revealed relevant barriers and facilitators to the use of a new genetic test. Focus groups and interviews revealed substantially more items than questionnaires. Furthermore, this study suggests a preference for interviews, because the number of items per participant was higher than for focus groups or questionnaires. This conclusion may be valid for other genetic tests as well.

  17. Quantitative Analysis of Ductile Iron Microstructure – A Comparison of Selected Methods for Assessment

    Directory of Open Access Journals (Sweden)

    Mrzygłód B.

    2013-09-01

    Stereological description of a dispersed microstructure is not an easy task and remains the subject of continuous research. In its practical aspect, a correct stereological description of this type of structure is essential for the analysis of coagulation and spheroidisation processes, and for studies of relationships between structure and properties. One of the most frequently used methods for estimating the density Nv and size distribution of particles is the Scheil-Schwartz-Saltykov method. In this article, the authors present selected methods for the quantitative assessment of ductile iron microstructure: the Scheil-Schwartz-Saltykov method, which allows a quantitative description of three-dimensional sets of solids using measurements and counts performed on two-dimensional cross-sections of these sets (microsections), and quantitative description of three-dimensional sets of solids by X-ray computed microtomography, which is an interesting alternative to traditional methods of microstructure imaging, since the analysis provides three-dimensional images of the examined microstructures.

  18. Environmental DNA method for estimating salamander distribution in headwater streams, and a comparison of water sampling methods.

    Science.gov (United States)

    Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi

    2017-01-01

    Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

  19. A comparison of quantitative methods for clinical imaging with hyperpolarized ¹³C-pyruvate.

    Science.gov (United States)

    Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A

    2016-04-01

    Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized ¹³C-labelled molecules, such as the conversion of [1-¹³C]pyruvate to [1-¹³C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized ¹³C data. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
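    The two-way exchange model with a Heaviside step inflow can be sketched as a simple forward-Euler simulation. The rate constants, relaxation rate and inflow parameters below are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

# Two-way exchange between pyruvate (P) and lactate (L) with longitudinal
# relaxation and a Heaviside step inflow of pyruvate:
#   dP/dt = -kPL*P + kLP*L - R1*P + u(t)
#   dL/dt =  kPL*P - kLP*L - R1*L
kPL, kLP = 0.05, 0.02                      # exchange rates (1/s), hypothetical
R1 = 1.0 / 30.0                            # relaxation rate (1/s), hypothetical
inflow_rate, t_on, t_off = 1.0, 2.0, 8.0   # step inflow of pyruvate (a.u./s)

def simulate(t_end=60.0, dt=0.01):
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n + 1)
    P = np.zeros(n + 1)
    L = np.zeros(n + 1)
    for i in range(n):
        u = inflow_rate if t_on <= t[i] < t_off else 0.0   # Heaviside step
        dP = -kPL * P[i] + kLP * L[i] - R1 * P[i] + u
        dL = kPL * P[i] - kLP * L[i] - R1 * L[i]
        P[i + 1] = P[i] + dt * dP          # forward Euler step
        L[i + 1] = L[i] + dt * dL
    return t, P, L

t, P, L = simulate()

# Model-free summary measures discussed in the study:
lactate_time_to_peak = t[np.argmax(L)]
auc_ratio = np.sum(L) / np.sum(P)          # lactate-to-pyruvate AUC ratio
```

    In a real analysis the rate constants would be estimated by fitting this model to the measured signal-time curves, rather than chosen up front as here.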

  20. A Comparison of Psychiatry and Internal Medicine: A Bibliometric Study

    Science.gov (United States)

    Stone, Karina; Whitham, Elizabeth A.; Ghaemi, S. Nassir

    2012-01-01

    Objective: Psychiatric education needs to expose students to a broad range of topics. One resource for psychiatric education, both during initial training and in later continuing medical education, is the scientific literature, as published in psychiatric journals. The authors assessed current research trends in psychiatric journals, as compared…

  1. A comparison of spatial rainfall estimation techniques: A case study ...

    African Journals Online (AJOL)

    Two geostatistical interpolation techniques (kriging and cokriging) were evaluated against inverse distance weighted (IDW) and global polynomial interpolation (GPI). Of the four spatial interpolators, kriging and cokriging produced results with the least root mean square error (RMSE). A digital elevation model (DEM) was ...
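    An inverse distance weighted interpolator and the RMSE used to compare methods can be sketched as follows; the gauge locations and rainfall values are hypothetical:

```python
import numpy as np

# Hypothetical rain gauges: (x, y) positions in km and observed rainfall in mm.
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([12.0, 20.0, 16.0, 24.0])

def idw(point, power=2.0):
    """Inverse distance weighted estimate at an arbitrary point."""
    d = np.linalg.norm(gauges - point, axis=1)
    if np.any(d < 1e-9):                 # exactly on a gauge: return its value
        return rain[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * rain) / np.sum(w)

def rmse(predicted, observed):
    """Root mean square error, the comparison criterion used in the study."""
    return np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed))**2))
```

    In a cross-validation comparison, each gauge would be withheld in turn, estimated from the remaining gauges, and the resulting errors summarized by RMSE; kriging or cokriging would replace `idw` with a variogram-based weighting.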

  2. Comparison of some effects of modification of a polylactide surface layer by chemical, plasma, and laser methods

    Science.gov (United States)

    Moraczewski, Krzysztof; Rytlewski, Piotr; Malinowski, Rafał; Żenkiewicz, Marian

    2015-08-01

    The article presents the results of studies and a comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in an air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with a fluence of 60 mJ/cm² was applied. Polylactide samples were examined using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of the surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of the chemical composition of the PLA surface layer were performed as well. Based on these examinations, it was found that the plasma method gives the best modification results.

  3. System Accuracy Evaluation of Four Systems for Self-Monitoring of Blood Glucose Following ISO 15197 Using a Glucose Oxidase and a Hexokinase-Based Comparison Method.

    Science.gov (United States)

    Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido

    2015-04-14

    The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following the test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method, and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems complied with the accuracy criteria of ISO 15197:2003 with the evaluated test strip lots. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
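    The ISO 15197:2013 system accuracy criterion referred to above requires at least 95% of meter results to fall within ±15 mg/dL of the comparison method for reference values below 100 mg/dL, or within ±15% at or above 100 mg/dL. A minimal check of that criterion can be sketched as follows; the paired readings are illustrative:

```python
import numpy as np

# Illustrative paired readings (mg/dL): comparison-method reference vs. meter.
reference = np.array([65.0, 90.0, 110.0, 150.0, 210.0, 300.0])
meter = np.array([72.0, 84.0, 118.0, 141.0, 220.0, 330.0])

def within_iso_2013(ref, meas):
    # Allowed deviation: 15 mg/dL below 100 mg/dL, otherwise 15% of reference.
    limit = np.where(ref < 100.0, 15.0, 0.15 * ref)
    return np.abs(meas - ref) <= limit

ok = within_iso_2013(reference, meter)
fraction_within = ok.mean()
passes = fraction_within >= 0.95          # the 95% acceptance threshold
```

    A full evaluation per the standard also spans prescribed glucose concentration ranges and sample sizes, which this sketch does not reproduce.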

  4. Probing the parameter space of HD 49933: A comparison between global and local methods

    Energy Technology Data Exchange (ETDEWEB)

    Creevey, O L [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain)]; Bazot, M, E-mail: orlagh@iac.es, E-mail: bazot@astro.up.pt [Centro de Astrofisica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal)]

    2011-01-01

    We present two independent methods for studying the global stellar parameter space (mass M, age, chemical composition X₀, Z₀) of HD 49933 with seismic data. Using a local minimization and an MCMC algorithm, we obtain consistent results for the determination of the stellar properties: M ≈ 1.1-1.2 M_sun, age ≈ 3.0 Gyr, Z₀ ≈ 0.008. A description of the error ellipses can be defined using Singular Value Decomposition techniques, and this is validated by comparing the errors with those from the MCMC method.
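    The SVD-based description of an error ellipse can be sketched from a 2x2 parameter covariance matrix; the covariance values below are illustrative, not the fitted HD 49933 results:

```python
import numpy as np

# Hypothetical 2x2 covariance matrix for two stellar parameters (e.g. M and age).
cov = np.array([[0.010, 0.006],
                [0.006, 0.020]])

# For a symmetric positive-definite covariance matrix, the SVD coincides with
# the eigendecomposition: singular values are the variances along the
# principal axes, so their square roots give the 1-sigma semi-axis lengths.
U, s, Vt = np.linalg.svd(cov)
semi_axes = np.sqrt(s)                                # 1-sigma semi-axes
angle_deg = np.degrees(np.arctan2(U[1, 0], U[0, 0]))  # major-axis orientation
```

    The product of the semi-axes equals the square root of the covariance determinant, which gives a quick consistency check on the decomposition.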

  5. Comparison Re-invented: Adaptation of Universal Methods to African Studies (Conference Report)

    Directory of Open Access Journals (Sweden)

    Franzisca Zanker

    2013-01-01

    Drawing from a combination of specific, empirical research projects with different theoretical backgrounds, a workshop discussed one methodological aspect often somewhat overlooked in African Studies: comparison. Participants addressed several questions, along with presenting overviews of how different disciplines within African Studies approach comparison in their research and naming specific challenges within individual research projects. The questions examined included: Why is explicit comparative research so rare in African Studies? Is comparative research more difficult in the African context than in other regions? Does it benefit our research? Should scholars strive to generalise beyond individual cases? Do studies in our field require an explicit comparative design, or will implicit comparison suffice? Cross-discipline communication should help us to move forward in this methodological debate, though in the end the subject matter and specific research question will lead to the appropriate comparative approach, not the other way round.

  6. A comparison of a track shape analysis-based automated slide scanner system with traditional methods

    International Nuclear Information System (INIS)

    Bator, G.; Csordas, A.; Horvath, D.; Somlai, J.; Kovacs, T.

    2015-01-01

    During recent years, CR-39 detector measurements have gained attention due to improvements in image processing methods. An assessment method based on the application of a high-resolution slide scanner, together with its quality checks, is introduced, using commercially available software and hardware. Comparison with conventional (visual) analysis for 563 detectors showed the method to be suitable for high-precision and reliable track analysis. The accuracy of the measurements was not disturbed by pseudo-tracks (scratches or contamination), owing to the shape-based analysis of the track signal. (author)

  7. A comprehensive comparison of RNA-Seq-based transcriptome analysis from reads to differential gene expression and cross-comparison with microarrays: a case study in Saccharomyces cerevisiae

    DEFF Research Database (Denmark)

    Nookaew, Intawat; Papini, Marta; Pornputtapong, Natapol

    2012-01-01

    RNA-seq has recently become an attractive method of choice in studies of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the I... The differential gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation, are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microarrays...

  8. Comparison of active and passive methods for radon exhalation from a high-exposure building material

    International Nuclear Information System (INIS)

    Abbasi, A.; Mirekhtiary, F.

    2013-01-01

    The radon exhalation rates and radon concentrations in granite stones used in Iran were measured by means of a high-resolution high-purity germanium gamma-spectroscopy system (passive method) and an AlphaGUARD model PQ 2000 (active method). For standard rooms (4.0 x 35.0 m area x 32.8 height) whose floor and walls were covered with granite stones, the radon concentration and the radon exhalation rate were calculated by both methods. The activity concentrations of ²²⁶Ra in the selected granite samples ranged from 3.8 to 94.2 Bq kg⁻¹. The radon exhalation rate was obtained from the ²²⁶Ra activity concentration; the rates ranged from 1.31 to 7.86 Bq m⁻² h⁻¹. The direct measurements using the AlphaGUARD ranged from 218 to 1306 Bq m⁻³, with a mean of 625 Bq m⁻³. The exhalation rates measured by the passive and active methods were also compared and found to be in good agreement, with the active method reading 22% higher than the passive method. (authors)
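The conversion from ²²⁶Ra activity concentration to an areal radon exhalation rate can be sketched with a common thin-slab approximation, E = C_Ra · ρ · ε · λ_Rn · d. All parameter values below (density, emanation coefficient, slab thickness) are illustrative assumptions, not figures from the paper.

```python
RN222_DECAY = 7.55e-3  # decay constant of Rn-222, 1/h

def exhalation_rate(c_ra, density, emanation, thickness):
    """Thin-slab estimate of the radon exhalation rate in Bq/(m^2*h)
    from the Ra-226 activity concentration c_ra in Bq/kg."""
    return c_ra * density * emanation * RN222_DECAY * thickness

# Illustrative granite slab: 2600 kg/m^3, 10 % emanation, 2 cm thick
e = exhalation_rate(60.0, 2600.0, 0.10, 0.02)
print(round(e, 2))
```

With these assumed inputs the estimate lands inside the 1.31-7.86 Bq m⁻² h⁻¹ range reported above; the emanation coefficient is usually the least well-known factor.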

  9. A comparison of different discrimination parameters for the DFT-based PSD method in fast scintillators

    International Nuclear Information System (INIS)

    Liu, G.; Yang, J.; Luo, X.L.; Lin, C.B.; Peng, J.X.; Yang, Y.

    2013-01-01

    Although the discrete Fourier transform (DFT) based pulse shape discrimination (PSD) method, realized by transforming the digitized scintillation pulses into frequency coefficients by using DFT, has been proven to effectively discriminate neutrons and γ rays, its discrimination performance depends strongly on the selection of the discrimination parameter obtained by the combination of these frequency coefficients. In order to thoroughly understand and apply the DFT-based PSD in organic scintillation detectors, a comparison of three different discrimination parameters, i.e. the amplitude of the zero-frequency component, the difference between the amplitude of the zero-frequency component and the amplitude of the base-frequency component, and the ratio of the amplitude of the base-frequency component to the amplitude of the zero-frequency component, is described in this paper. An experimental setup consisting of an Americium–Beryllium (Am–Be) source, a BC501A liquid scintillator detector, and a 5 GSample/s 8-bit oscilloscope was built to assess the performance of the DFT-based PSD with each of these discrimination parameters in terms of the figure-of-merit (based on the separation of the event distributions). The third technique, which uses the ratio of the amplitude of the base-frequency component to the amplitude of the zero-frequency component as the discrimination parameter, is observed to provide the best discrimination performance in this research. - Highlights: • The spectrum difference between neutron and γ-ray pulses was investigated. • The DFT-based PSD with different parameter definitions was assessed. • Using the ratio of the magnitude spectrum provides the best performance. • The performance differences were explained by their noise-suppression features.
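The three parameter definitions can be illustrated with synthetic double-exponential pulses and NumPy's FFT. This is a hedged sketch: the pulse shapes and time constants are invented, not the BC501A data.

```python
import numpy as np

def dft_psd_parameters(pulse):
    """The three DFT-based discrimination parameters for one pulse."""
    spectrum = np.abs(np.fft.rfft(pulse))
    a0 = spectrum[0]  # amplitude of the zero-frequency component
    a1 = spectrum[1]  # amplitude of the base-frequency component
    return a0, a0 - a1, a1 / a0

# Synthetic double-exponential pulses: a "gamma-like" pulse with a small
# slow component and a "neutron-like" pulse with a larger one.
t = np.arange(256.0)  # sample index (time scale is arbitrary here)
fast = np.exp(-t / 5.0)
slow = np.exp(-t / 50.0)
gamma_pulse = fast + 0.1 * slow
neutron_pulse = fast + 0.5 * slow

_, _, r_gamma = dft_psd_parameters(gamma_pulse)
_, _, r_neutron = dft_psd_parameters(neutron_pulse)
# The slower neutron-like pulse concentrates more energy at low
# frequency, so its base/zero-frequency ratio comes out lower.
print(round(r_gamma, 3), round(r_neutron, 3))
```

A ratio-based cut exploits exactly this separation: pulses with a larger slow scintillation component have relatively less power at the base frequency than at DC.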

  10. A Comparison of Methods to Test Mediation and Other Intervening Variable Effects

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Hoffman, Jeanne M.; West, Stephen G.; Sheets, Virgil

    2010-01-01

    A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect. PMID:11928892
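The joint-significance test singled out above is simple to implement: declare mediation only if both the a-path (X→M) and b-path (M→Y given X) coefficients are individually significant. A sketch on simulated data; the variable names and effect sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

def ols_p_values(X, y):
    """OLS with intercept; returns coefficients and two-sided p-values."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    p = 2 * stats.t.sf(np.abs(beta / se), dof)
    return beta, p

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                        # independent variable
m = 0.5 * x + rng.normal(size=n)              # mediator: a-path = 0.5
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # outcome: b-path = 0.4

_, p_a = ols_p_values(x, m)                        # tests a (X -> M)
_, p_b = ols_p_values(np.column_stack([x, m]), y)  # tests b (M -> Y | X)
# Joint significance: mediation is declared only if BOTH paths are.
mediation_significant = bool(p_a[1] < 0.05 and p_b[2] < 0.05)
print(mediation_significant)
```

With a clear simulated effect on both paths, the joint test flags mediation; its appeal, per the study above, is the balance of Type I error and power without needing the sampling distribution of the product a·b.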

  11. Hotel Classification Systems: A Comparison of International Case Studies

    Directory of Open Access Journals (Sweden)

    Roberta Minazzi

    2010-12-01

    Full Text Available Over the last few decades we have witnessed an increasing interest of scholars, and especially operators, in service quality in the lodging business. Firstly, it is important to observe that the diverseness of the hospitality industry also affects the classification of hotel quality. We can actually find many programmes, classifications and seals of quality promoted by public authorities and private companies that create confusion in consumer perceptions of hotel quality. Moreover, new electronic distribution channels and their ratings are becoming a new way to gather information about a hotel and its quality. Secondly, a point that can cause complications is that different countries and regions can choose differing approaches depending on the features of the classification (number of levels, symbols used, etc.) and the nature of the programme (public, private). Considering these assumptions and the recent changes in the Italian hotel classification system, this paper aims to analyse the situation in Italy, underlining both its positive and negative aspects and comparing it with other European and North American cases. Based on a review of literature and tourism laws as well as personal interviews with public authorities and representatives of the private sector, we were able to identify critical issues and trends in hotel classification systems. The comparison of case studies shows a heterogeneous situation. Points in common are the scale and the symbol used but, if we analyse the requirements of each category, we discover very different circumstances, sometimes even within the same country. A future European classification system would be possible only after a standardization of minimum requirements and criteria at a national level. In this situation, brands and online consumer feedback carry ever more weight with customers in the hospitality industry.

  12. Comparison of soil CO2 fluxes by eddy-covariance and chamber methods in fallow periods of a corn-soybean rotation

    Science.gov (United States)

    Soil carbon dioxide (CO2) fluxes are typically measured by eddy-covariance (EC) or chamber (Ch) methods, but a long-term comparison has not been undertaken. This study was conducted to assess the agreement between EC and Ch techniques when measuring CO2 flux during fallow periods of a corn-soybean r...

  13. A comparison of two psychological screening methods currently used for inpatients in a UK burns service.

    Science.gov (United States)

    Shepherd, Laura; Tew, Victoria; Rai, Lovedeep

    2017-12-01

    Various types of psychological screening are currently used in the UK to identify burn patients who are experiencing psychological distress and may need additional support and intervention during their hospital admission. This audit compared two types of psychological screening in 40 burn inpatients. One screening method was an unpublished questionnaire designed to explore multiple areas of potential distress for those who have experienced burns. The other method was an indirect psychological screen via discussions within multi-disciplinary team (MDT) meetings where a Clinical Psychologist was present to guide and prompt psychological discussions. Data were collected between November 2012 and September 2016. Results suggested that both screening methods were similar in identifying patients who would benefit from more formal psychological assessment. Indeed, statistical analysis showed no difference between the two screening methods (N=40, p=.424, two-tailed). In conclusion, measuring distress in burns inpatients using a burns-specific questionnaire and psychological discussions within MDT meetings are similar in their ability to identify patients in need of more thorough psychological assessment. However, each screening method identified some patients in need of psychological input whom the other did not. This suggests that psychological screening of burns inpatients, and the psychological difficulties they can present with, is complex. The advantages and disadvantages of both methods of screening are discussed. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
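The abstract does not name the statistical test used; for two paired yes/no screens applied to the same 40 patients, one conventional choice would be an exact McNemar test on the discordant pairs. A sketch under that assumption, with a hypothetical agreement table (not the audit's data):

```python
from scipy.stats import binomtest

# Hypothetical 2x2 agreement table for 40 inpatients:
# counts of patients flagged by both screens, by one only, or by neither.
both, q_only, mdt_only, neither = 12, 4, 7, 17
assert both + q_only + mdt_only + neither == 40

# Exact McNemar test: under H0 the discordant pairs split 50/50.
result = binomtest(q_only, q_only + mdt_only, 0.5)
print(round(result.pvalue, 3))
```

Only the discordant cells (patients one screen caught and the other missed) enter the test, which matches the audit's observation that each method picked up some patients the other did not.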

  14. A Mixed Methods Comparison of Urban and Rural Retail Corner Stores

    Directory of Open Access Journals (Sweden)

    Jared T McGuirt

    2015-08-01

    Full Text Available Efforts to transform corner stores to better meet community dietary needs have mostly occurred in urban areas but are also needed in rural areas. Given important contextual differences between urban and rural areas, it is important to increase our understanding of the elements that might translate successfully to similar interventions involving stores in more rural areas. Thus, an in-depth examination and comparison of corner stores in each setting is needed. A mixed methods approach, including windshield tours, spatial visualization with analysis of frequency distribution, and spatial regression techniques, was used to compare a rural North Carolina and a large urban (Los Angeles) food environment. Important similarities and differences were seen between the two settings with regard to food environment context, spatial distribution of stores, food products available, and the factors predicting corner store density. Urban stores were more likely to have fresh fruits (Pearson chi2 = 27.0423; p < 0.001) and vegetables (Pearson chi2 = 27.0423; p < 0.001). In the urban setting, corner stores in high-income areas were more likely to have fresh fruit (Pearson chi2 = 6.00; p = 0.014), while in the rural setting there was no difference between high- and low-income areas in terms of fresh fruit availability. For the urban area, total population, no-vehicle households and Hispanic population were significantly positively associated (p < 0.05), and median household income (p < 0.001) and percent minority (p < 0.05) significantly negatively associated, with corner store count. For the rural area, total population (p < 0.05) and supermarket count (p < 0.001) were positively associated, and median household income negatively associated (p < 0.001), with corner store count. Translational efforts should be informed by these findings, which might influence the success of future interventions and policies in both rural and urban contexts.

  15. Comparison of analytical methods used to measure petroleum hydrocarbons in soils and their application to bioremediation studies

    International Nuclear Information System (INIS)

    Douglas, G.S.; Wong, W.M.; Rigatti, M.J.; McMillen, S.J.

    1995-01-01

    Chemical measurements provide a means to evaluate crude oil and refined product bioremediation effectiveness in field and laboratory studies. These measurements are used to determine the net decrease in product or target compound concentrations in complex soil systems. The analytical methods used to evaluate these constituents will have a direct impact on the ability of the investigator to: (1) detect losses due to biodegradation, (2) understand the processes responsible for the hydrocarbon degradation and (3) determine the rates of hydrocarbon degradation. This understanding is critical for the testing and design of bioremediation programs. While standard EPA methods are useful for measuring a wide variety of industrial and agrochemicals, they were not designed for the detection and accurate measurement of petroleum compounds. The chemical data generated with these standard methods are usually of limited utility because they lack the chemical specificity required to evaluate the hydrocarbon compositional changes needed to assess biodegradation. The applications and limitations of standard EPA methodologies (EPA Methods 418.1, 8270, and modified 8015) are evaluated and compared to several newer analytical methods currently being used by the petroleum industry (e.g., gross compositional analysis, TLC-FID analysis, and enhanced EPA Method 8270) to evaluate bioremediation effectiveness in soils.

  16. Legal Teaching Methods to Diverse Student Cohorts: A Comparison between the United Kingdom, the United States, Australia and New Zealand

    Science.gov (United States)

    Kraal, Diane

    2017-01-01

    This article makes a comparison across the unique educational settings of law and business schools in the United Kingdom, the United States, Australia and New Zealand to highlight differences in teaching methods necessary for culturally and ethnically mixed student cohorts derived from high migration, student mobility, higher education rankings…

  17. Comparison of different concentration methods for the detection of hepatitis A virus and calicivirus from bottled natural mineral waters

    DEFF Research Database (Denmark)

    Di Pasquale, S.; Paniconi, M.; Auricchio, B.

    2010-01-01

    To evaluate the efficiency of different recovery methods of viral RNA from bottled water, a comparison was made of 2 positively and 2 negatively charged membranes that were used for absorbing and releasing HAV virus particles during the filtration of viral spiked bottled water. All the 4 membrane...

  18. A new method for the recovery and evidential comparison of footwear impressions using 3D structured light scanning.

    Science.gov (United States)

    Thompson, T J U; Norris, P

    2018-05-01

    Footwear impressions are one of the most common forms of evidence to be found at a crime scene, and can potentially offer the investigator a wealth of intelligence. Our aim is to highlight a new and improved technique for the recovery of footwear impressions, using three-dimensional structured light scanning. Results from this preliminary study demonstrate that this new approach is non-destructive, safe to use and is fast, reliable and accurate. Further, since this is a digital method, there is also the option of digital comparison between items of footwear and footwear impressions, and an increased ability to share recovered footwear impressions between forensic staff thus speeding up the investigation. Copyright © 2018 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  19. Rural and urban women entrepreneurs: A comparison of service needs and delivery methods priorities

    Directory of Open Access Journals (Sweden)

    Davis, A.

    2011-01-01

    Full Text Available Women entrepreneurs face a wide variety of barriers and challenges throughout the life and growth of their entrepreneurial venture. This study expands the knowledge base on women entrepreneurs’ needs, specifically their needs in terms of service areas and service delivery method preferences. Twenty-three “needed” service areas were identified by 95 Manitoba-based women entrepreneurs. The first five included: finding new customers, growth benefits and tools, market expansion, general marketing, and networking skills. This study also examined the differences between urban and rural based entrepreneurs. Two service need areas, “how to find mentors and role models” and “legal issues”, exhibited statistically significant priority differences. Service delivery methods did not produce any statistically significant differences. Overall, this study concludes that, regardless of location, women entrepreneurs’ training and support needs are not significantly different. The effects of entrepreneurial stage and years in business on entrepreneurial support needs are also examined.

  20. Comparison of Three Different Methods for Pile Integrity Testing on a Cylindrical Homogeneous Polyamide Specimen

    Science.gov (United States)

    Lugovtsova, Y. D.; Soldatov, A. I.

    2016-01-01

    Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen: low strain pile integrity testing, multichannel pile integrity testing, and testing with a shaker system. Since low strain pile integrity testing is a well-established and standardized method, its results are used as a reference for the other two methods.

  1. The health benefits of yoga and exercise: a review of comparison studies.

    Science.gov (United States)

    Ross, Alyson; Thomas, Sue

    2010-01-01

    Exercise is considered an acceptable method for improving and maintaining physical and emotional health. A growing body of evidence supports the belief that yoga benefits physical and mental health via down-regulation of the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic nervous system (SNS). The purpose of this article is to provide a scholarly review of the literature regarding research studies comparing the effects of yoga and exercise on a variety of health outcomes and health conditions. Using PubMed® and the key word "yoga," a comprehensive search of the research literature from core scientific and nursing journals yielded 81 studies that met inclusion criteria. These studies subsequently were classified as uncontrolled (n = 30), wait-list controlled (n = 16), or comparison (n = 35). The most common comparison intervention (n = 10) involved exercise. These studies were included in this review. In the studies reviewed, yoga interventions appeared to be equal or superior to exercise in nearly every outcome measured except those involving physical fitness. The studies comparing the effects of yoga and exercise seem to indicate that, in both healthy and diseased populations, yoga may be as effective as or better than exercise at improving a variety of health-related outcome measures. Future clinical trials are needed to examine the distinctions between exercise and yoga, particularly how the two modalities may differ in their effects on the SNS/HPA axis. Additional studies using rigorous methodologies are needed to examine the health benefits of the various types of yoga.

  2. Comparison of modern and traditional methods of soil sorption complex measurement: the basis of long-term studies and modelling

    Directory of Open Access Journals (Sweden)

    Kučera Aleš

    2014-03-01

    Full Text Available This paper presents the correlations between two different analytical methods of assessing soil nutrient contents. Measurements made using the flame atomic absorption spectrometry (FAAS) method, which uses barium chloride extraction, were compared with those of the now-unused Gedroiz method, which uses ammonium chloride extraction (calcium by titration; magnesium, potassium and sodium by weighing). Natural forest soils from the Ukrainian Carpathians at the localities of Javorník and Pop Ivan were used. Despite the risk of analysis errors during the complicated analytical procedure, the results showed a high level of correlation between the nutrient content measurements across the whole soil profile. This allows concentration values given in older studies to be linearly recalculated onto the scale of the modern method. In this way, results can be used to study chemical changes in soil over time from samples that were analysed in the past using labour-intensive and time-consuming methods with a higher risk of analytic error.
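The "linear recalculation" the authors describe amounts to fitting a straight line between paired old-method and new-method results and applying it to historical values. A sketch with invented paired calcium measurements (not the paper's data):

```python
import numpy as np

# Hypothetical paired calcium results (cmol(+)/kg) on the same samples:
gedroiz = np.array([2.1, 4.8, 7.5, 10.2, 14.9, 19.8])  # old method
faas = np.array([2.4, 5.1, 8.1, 10.9, 15.6, 20.9])     # modern method

# Least-squares line mapping old-method values onto the modern scale
slope, intercept = np.polyfit(gedroiz, faas, 1)

def recalculate(old_value):
    """Convert a historical Gedroiz result to the FAAS scale."""
    return slope * old_value + intercept

print(round(float(recalculate(12.0)), 2))
```

The fit is only as good as the correlation the authors report; in practice one would also check the residuals before trusting recalculated historical profiles.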

  3. A new reprojection method based on a comparison of popular reprojection models

    International Nuclear Information System (INIS)

    Guedouar, R.; Zarrad, B.

    2010-01-01

    Reprojection is of interest in several applications such as iterative tomographic reconstruction, dose calculation in radiotherapy and 3D-display volume rendering. All numerical implementations of reprojection result in varying degrees of error. Popular reprojection methods were compared using 128x128 2D noiseless numerical uniform-disk and Shepp-Logan phantoms to evaluate errors in the external-border, interior and overall regions of the reprojected sinogram, in order to investigate a new method for increasing accuracy in all regions. Root mean squared errors (RMSE), with the calculated projection as reference, were used. No method gives the least error in all regions. Smoothing projectors decrease noise but introduce important errors in the border region, whereas other projectors increase errors inside the reprojected sinogram. The overall error is a trade-off between interior and border errors. The conventional ray-driven projector (R1) and the bilinear-interpolated pixel-driven projector with 5x5 sub-pixels (L5) provide the highest accuracy for the border and interior regions, respectively. These results motivate the implementation of a new reprojection method that uses R1 to project pixels near the borders of the sinogram and L5 for the others. The feature borders were localized using the spatial derivatives of the sinogram count profile. This method provides the least error in all regions. The performance of the new method was investigated on iterative reconstruction tasks with MLEM. Results show that the new method promises more accuracy in terms of RMSE and aliasing reduction in the reconstructed images.

  4. A comparison of gantry-mounted x-ray-based real-time target tracking methods.

    Science.gov (United States)

    Montanaro, Tim; Nguyen, Doan Trang; Keall, Paul J; Booth, Jeremy; Caillet, Vincent; Eade, Thomas; Haddad, Carol; Shieh, Chun-Chien

    2018-03-01

    Most modern radiotherapy machines are built with a 2D kV imaging system. Combining this imaging system with a 2D-3D inference method would allow for a ready-made option for real-time 3D tumor tracking. This work investigates and compares the accuracy of four existing 2D-3D inference methods using both motion traces inferred from external surrogates and measured internally from implanted beacons. Tumor motion data from 160 fractions (46 thoracic/abdominal patients) of Synchrony traces (inferred traces), and 28 fractions (7 lung patients) of Calypso traces (internal traces) from the LIGHT SABR trial (NCT02514512) were used in this study. The motion traces were used as the ground truth. The ground truth trajectories were used in silico to generate 2D positions projected on the kV detector. These 2D traces were then passed to the 2D-3D inference methods: interdimensional correlation, Gaussian probability density function (PDF), arbitrary-shape PDF, and the Kalman filter. The inferred 3D positions were compared with the ground truth to determine tracking errors. The relationships between tracking error and motion magnitude, interdimensional correlation, and breathing periodicity index (BPI) were also investigated. Larger tracking errors were observed from the Calypso traces, with RMS and 95th percentile 3D errors of 0.84-1.25 mm and 1.72-2.64 mm, compared to 0.45-0.68 mm and 0.74-1.13 mm from the Synchrony traces. The Gaussian PDF method was found to be the most accurate, followed by the Kalman filter, the interdimensional correlation method, and the arbitrary-shape PDF method. Tracking error was found to strongly and positively correlate with motion magnitude for both the Synchrony and Calypso traces and for all four methods. Interdimensional correlation and BPI were found to negatively correlate with tracking error only for the Synchrony traces. The Synchrony traces exhibited higher interdimensional correlation than the Calypso traces especially in the anterior

  5. Application of the neutron gamma method to a study of water seepage under a rice plantation

    International Nuclear Information System (INIS)

    Puard, M.; Couchat, P.; Moutonnet, P.

    1980-01-01

    In order to determine the contribution of percolation to the pollution by pesticides (particularly lindane) carried down in the drainage water of rice plantations, an application of the neutron gamma method under rice cultivation in the Camargue is proposed. A preliminary laboratory study enabled a comparison to be made between deuterated water (DHO) and tritiated water (THO) used as water tracers in the determination of the dispersive phenomena and retention in a column of saturated soil.

  6. Stroke Education in an Emergency Department Waiting Room: a Comparison of Methods

    Directory of Open Access Journals (Sweden)

    Yu-Feng Yvonne Chan

    2015-03-01

    Full Text Available Background: Since the emergency department (ED) waiting room hosts a large, captive audience of patients and visitors, it may be an ideal location for conducting focused stroke education. The aim of this study was to assess the effectiveness of various stroke education methods. Methods: Patients and visitors of an urban ED waiting room were randomized into one of the following groups: video, brochure, one-to-one teaching, a combination of these three methods, or control. We administered a 13-question multiple-choice test to assess stroke knowledge prior to, immediately after, and at 1 month post-education. Results: All 4 groups receiving education significantly improved their test scores immediately post-intervention (test scores 9.4±2.5 to 10.3±2.0, P<0.01). At 1 month, the combination group retained the most knowledge (9.4±2.4), exceeding pre-intervention and control scores (both 6.7±2.6, P<0.01). Conclusion: Among the various stroke education methods delivered in the ED waiting room, the combination method resulted in the highest knowledge retention at 1 month post-intervention.

  7. A Comparison of Tree Segmentation Methods Using Very High Density Airborne Laser Scanner Data

    Science.gov (United States)

    Pirotti, F.; Kobal, M.; Roussel, J. R.

    2017-09-01

    Developments of LiDAR technology are decreasing the unit cost per single point (e.g. single-photo counting). This brings to the possibility of future LiDAR datasets having very dense point clouds. In this work, we process a very dense point cloud ( 200 points per square meter), using three different methods for segmenting single trees and extracting tree positions and other metrics of interest in forestry, such as tree height distribution and canopy area distribution. The three algorithms are tested at decreasing densities, up to a lowest density of 5 point per square meter. Accuracy assessment is done using Kappa, recall, precision and F-Score metrics comparing results with tree positions from groundtruth measurements in six ground plots where tree positions and heights were surveyed manually. Results show that one method provides better Kappa and recall accuracy results for all cases, and that different point densities, in the range used in this study, do not affect accuracy significantly. Processing time is also considered; the method with better accuracy is several times slower than the other two methods and increases exponentially with point density. Best performer gave Kappa = 0.7. The implications of metrics for determining the accuracy of results of point positions' detection is reported. Motives for the different performances of the three methods is discussed and further research direction is proposed.

  8. Tooth-size discrepancy: A comparison between manual and digital methods

    Directory of Open Access Journals (Sweden)

    Gabriele Dória Cabral Correia

    2014-08-01

    Full Text Available INTRODUCTION: Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. OBJECTIVE: This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and to compare these measurements with those obtained from plaster models. MATERIAL AND METHODS: To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models, acquired with a 3Shape R700T scanner, were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. RESULTS: Data were statistically assessed using the Friedman test and no statistically significant differences were found between the two methods (P > 0.05), except for values found by the linear digital method, which revealed a slight difference. CONCLUSIONS: Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable.
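The Friedman test used here is a non-parametric repeated-measures test across the four calculation methods; with SciPy it is a one-liner. The simulated data below are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
true_discrepancy = rng.normal(2.0, 1.0, size=30)  # 30 dental arches, mm
# Four measurements per arch (caliper, brass wire, digital linear,
# digital parabola), differing only by small independent method noise.
methods = [true_discrepancy + rng.normal(0.0, 0.2, size=30)
           for _ in range(4)]

stat, p = friedmanchisquare(*methods)
print(round(stat, 2), round(p, 3))
```

Since all four series share the same underlying values, a non-significant p-value is the typical outcome here; a systematic offset in one method, like the study's linear digital measurements, is what would push p downward.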

  9. Method for a national tariff comparison for natural gas, electricity and heat. Set-up and presentation

    International Nuclear Information System (INIS)

    1998-05-01

    Several groups (within distribution companies and outside those companies) have a need for information and data on energy tariffs. It is the opinion of the ad-hoc working group that a comparison of tariffs on the basis of standard cases is the most practical method to meet the information demand of all the parties involved. Those standard cases are formulated and presented for prices of electricity, natural gas and heat, including applied consumption parameters. A comparison of such tariffs must be made periodically

  10. The next GUM and its proposals: a comparison study

    Science.gov (United States)

    Damasceno, J. C.; Couto, P. R. G.

    2018-03-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) is currently under revision. New proposals for its implementation were circulated in the form of a draft document. Two of the main changes are explored in this work using a Brinell hardness model example. Changes in the evaluation of uncertainty for repeated indications and in the construction of coverage intervals are compared with the classic GUM and with Monte Carlo simulation method.
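The Monte Carlo method discussed in the draft propagates the input distributions through the measurement model directly. For a Brinell hardness model HB = 0.102·2F/(πD(D−√(D²−d²))), a sketch with illustrative input values (not those of the paper's example):

```python
import numpy as np

def brinell(F, D, d):
    """Brinell hardness: F in newtons, ball diameter D and mean
    indentation diameter d in mm; 0.102 converts N to kgf."""
    return 0.102 * 2.0 * F / (np.pi * D * (D - np.sqrt(D ** 2 - d ** 2)))

rng = np.random.default_rng(42)
n = 200_000
# Assumed input distributions (illustrative, not the paper's values):
F = rng.normal(29_420.0, 50.0, n)  # applied force, N
D = rng.normal(10.0, 0.005, n)     # ball diameter, mm
d = rng.normal(4.2, 0.01, n)       # indentation diameter, mm

hb = brinell(F, D, d)
mean, u = hb.mean(), hb.std(ddof=1)         # estimate, standard uncertainty
low, high = np.percentile(hb, [2.5, 97.5])  # 95 % coverage interval
print(round(mean, 1), round(u, 2))
```

The percentile-based coverage interval is one of the points of difference with the classic GUM, which builds a symmetric expanded-uncertainty interval around the estimate.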

  11. A comparison of different methods for in-situ determination of heat losses from district heating pipes

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, Benny [Technical Univ. of Denmark, Dept. of Energy Engineering (Denmark)

    1996-11-01

    A comparison of different methods for in-situ determination of heat losses has been carried out on a 273 mm transmission line in Copenhagen. Instrumentation includes temperature sensors, heat flux meters and an infrared camera. The methods differ with regard to time consumption and costs of applying the specific method, demand on accuracy of temperature measurements, sensitivity to computational parameters, e.g. the thermal conductivity of the soil, response to transients in water temperature and the ground, and steady state assumptions in the model used in the interpretation of the measurements. Several of the applied methods work well. (au)

  12. Formation of oiliness and sebum output--comparison of a lipid-absorbant and occlusive-tape method with photometry.

    Science.gov (United States)

    Serup, J

    1991-07-01

    Sebum output and the development of oiliness were studied in 24 acne-prone volunteers. Sebutape and photometric measurement by the Sebumeter were compared. Tapes were taken 1, 2 and 3 h after degreasing. Tapes were scored, and light transmission was measured by a special densitometric device. Sebumeter recordings were performed after 3 h. Densitometric evaluation of tapes was accurate, with a high correlation to scoring. Sebutape and Sebumeter assessments correlated. However, in some individuals the tape method overestimated the sebum output. Right-left comparison indicated that the tape method was less reproducible, particularly after longer sampling periods. It is suggested that Sebutape, due to water occlusion and temperature insulation during the sampling period, interferes with sebum droplet formation and spreading. However, such systematic interference may be advantageous since sweating and heat are important clinical prerequisites in the formation of oiliness. Thus, it is suggested that the Sebutape is a specialized method for the determination of 'oiliness', the very last phase of sebum output, in which sebum droplets spread over the skin surface. The tape is not automatically comparable to other methods for determining sebum output from the follicular reservoir.

  13. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  14. Perceptions of Teaching Methods for Preclinical Oral Surgery: A Comparison with Learning Styles

    OpenAIRE

    Omar, Esam

    2017-01-01

    Purpose: Dental extraction is a routine part of clinical dental practice. For this reason, understanding how students' extraction knowledge and skills develop is important. Problem Statement and Objectives: To date, there is no accredited statement about the most effective method for teaching exodontia to dental students. Students have different abilities and preferences regarding how they learn and process information. This is defined as learning style. In this study, the...

  15. A comparison of methods used to calculate normal background concentrations of potentially toxic elements for urban soil

    Energy Technology Data Exchange (ETDEWEB)

    Rothwell, Katherine A., E-mail: k.rothwell@ncl.ac.uk; Cooke, Martin P., E-mail: martin.cooke@ncl.ac.uk

    2015-11-01

    To meet the requirements of regulation and to provide realistic remedial targets there is a need for the background concentration of potentially toxic elements (PTEs) in soils to be considered when assessing contaminated land. In England, normal background concentrations (NBCs) have been published for several priority contaminants for a number of spatial domains however updated regulatory guidance places the responsibility on Local Authorities to set NBCs for their jurisdiction. Due to the unique geochemical nature of urban areas, Local Authorities need to define NBC values specific to their area, which the national data is unable to provide. This study aims to calculate NBC levels for Gateshead, an urban Metropolitan Borough in the North East of England, using freely available data. The ‘median + 2MAD’, boxplot upper whisker and English NBC (according to the method adopted by the British Geological Survey) methods were compared for test PTEs lead, arsenic and cadmium. Due to the lack of systematically collected data for Gateshead in the national soil chemistry database, the use of site investigation (SI) data collected during the planning process was investigated. 12,087 SI soil chemistry data points were incorporated into a database and 27 comparison samples were taken from undisturbed locations across Gateshead. The SI data gave high resolution coverage of the area and Mann–Whitney tests confirmed statistical similarity for the undisturbed comparison samples and the SI data. SI data was successfully used to calculate NBCs for Gateshead and the median + 2MAD method was selected as most appropriate by the Local Authority according to the precautionary principle as it consistently provided the most conservative NBC values. The use of this data set provides a freely available, high resolution source of data that can be used for a range of environmental applications. - Highlights: • The use of site investigation data is proposed for land contamination studies
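The 'median + 2MAD' rule selected by the Local Authority can be sketched in a few lines. This sketch assumes MAD means the raw median absolute deviation (without the 1.4826 normal-consistency factor); the exact convention used in the study may differ, and the concentrations below are hypothetical.

```python
import statistics

def nbc_median_2mad(concentrations):
    """Upper guideline value: median + 2 * MAD of the measured concentrations.

    MAD is taken here as the raw median absolute deviation; some
    implementations scale it by 1.4826 for consistency with a normal
    standard deviation.
    """
    med = statistics.median(concentrations)
    mad = statistics.median([abs(x - med) for x in concentrations])
    return med + 2 * mad

# Hypothetical lead concentrations (mg/kg) from site-investigation samples;
# the outlier 100 barely moves the estimate, which is why this statistic
# behaves conservatively on skewed urban soil data.
print(nbc_median_2mad([10, 12, 14, 16, 100]))  # median 14, MAD 2 -> 18.0
```

The robustness of both the median and the MAD to extreme values is what makes this estimator consistently more conservative than boxplot-whisker approaches on contaminated-site data.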

  16. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    Science.gov (United States)

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In a second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets, discussing the relevance of these techniques over classical approaches for survival data. We compare svm-based survival models based on ranking constraints, models based on regression constraints and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints over models based only on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods
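The concordance index used as the first performance measure can be computed directly from its definition. A minimal sketch of Harrell's c-index for right-censored data (toy data, not from the paper's datasets):

```python
def concordance_index(time, event, risk):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when subject i has an observed event
    (event[i] == 1) before subject j's observed time; it is concordant
    when the model assigns i the higher risk score. Ties in risk count
    as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# A model whose risk scores exactly reverse the survival times is perfect:
print(concordance_index([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1]))  # 1.0
```

A value of 0.5 corresponds to random ranking; the ranking-constraint SVMs in the paper optimise (a surrogate of) exactly this quantity.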

  17. A method comparison of total and HMW adiponectin : HMW/total adiponectin ratio varies versus total adiponectin, independent of clinical condition

    NARCIS (Netherlands)

    van Andel, Merel; Drent, Madeleine L; van Herwaarden, Antonius E; Ackermans, Mariëtte T; Heijboer, Annemieke C

    BACKGROUND: Total and high-molecular-weight (HMW) adiponectin have been associated with endocrine and cardiovascular pathology. As no gold standard is available, the discussion about biological relevance of isoforms is complicated. In our study we perform a method comparison between two commercially

  18. A method comparison of total and HMW adiponectin: HMW/total adiponectin ratio varies versus total adiponectin, independent of clinical condition

    NARCIS (Netherlands)

    van Andel, Merel; Drent, Madeleine L.; van Herwaarden, Antonius E.; Ackermans, Mariëtte T.; Heijboer, Annemieke C.

    2017-01-01

    Background: Total and high-molecular-weight (HMW) adiponectin have been associated with endocrine and cardiovascular pathology. As no gold standard is available, the discussion about biological relevance of isoforms is complicated. In our study we perform a method comparison between two commercially

  19. Shape sensing methods: Review and experimental comparison on a wing-shaped plate

    Science.gov (United States)

    Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano

    2018-05-01

    Shape sensing, i.e., the reconstruction of the displacement field of a structure from discrete surface strain measurements, is a fundamental capability for the structural health management of critical components. In this paper, a review of the shape sensing methodologies available in the open literature and of the different applications is provided. Then, for the first time, an experimental comparative study is presented among the main approaches in order to highlight their relative merits in the presence of the uncertainties affecting real applications. These approaches are, namely, the inverse Finite Element Method, the Modal Method and Ko's Displacement Theory. A brief description of these methods is followed by the presentation of the experimental test results. A cantilevered, wing-shaped aluminum plate is left to deform under its own weight, leading to bending and twisting. Using the experimental strain measurements as input data, the deflection field of the plate is reconstructed using the three aforementioned approaches and compared with the actual measured deflection. The inverse Finite Element Method proves to be slightly more accurate and particularly attractive because it is versatile with respect to the boundary conditions and does not require any information about material properties or loading conditions.

  20. Construction of a coarse-grain quasi-classical trajectory method. II. Comparison against the direct molecular simulation method

    Science.gov (United States)

    Macdonald, R. L.; Grover, M. S.; Schwartzentruber, T. E.; Panesi, M.

    2018-02-01

    This work presents the analysis of non-equilibrium energy transfer and dissociation of nitrogen molecules (N2(1Σg+)) using two different approaches: the direct molecular simulation (DMS) method and the coarse-grain quasi-classical trajectory (CG-QCT) method. The two methods are used to study thermochemical relaxation in a zero-dimensional isochoric and isothermal reactor in which the nitrogen molecules are heated to several thousand kelvin, forcing the system into strong non-equilibrium. The analysis considers thermochemical relaxation for temperatures ranging from 10 000 to 25 000 K. Both methods make use of the same potential energy surface for the N2(1Σg+)-N2(1Σg+) system, taken from the NASA Ames quantum chemistry database. Within the CG-QCT method, the rovibrational energy levels of the electronic ground state of the nitrogen molecule are lumped into a reduced number of bins. Two different grouping strategies are used: the more conventional vibrational-based grouping, widely used in the literature, and energy-based grouping. The analysis of both the internal state populations and concentration profiles shows excellent agreement between the energy-based grouping and the DMS solutions. During the energy transfer process, discrepancies arise between the energy-based grouping and the DMS solution due to the increased importance of mode separation for low energy states. By contrast, the vibrational grouping, traditionally considered state-of-the-art, captures well the behavior of the energy relaxation but fails to consistently predict the dissociation process. The deficiency of the vibrational grouping model is due to the assumption of strict mode separation and equilibrium of rotational energy states. These assumptions result in errors predicting the energy contribution to dissociation from the rotational and vibrational modes, with rotational energy actually contributing 30%-40% of the energy required to dissociate a molecule. This work confirms the

  1. [Comparison of susceptibility artifacts generated by microchips with different geometry at 1.5 Tesla magnet resonance imaging. A phantom pilot study referring to the ASTM standard test method F2119-07].

    Science.gov (United States)

    Dengg, S; Kneissl, S

    2013-01-01

    Ferromagnetic material in microchips, used for animal identification, causes local signal increase, signal void or distortion (susceptibility artifact) on MR images. To measure the impact of microchip geometry on the artifact's size, an MRI phantom study was performed. Microchips of the brands Datamars®, Euro-I.D.® and Planet-ID® (n = 15) were placed consecutively in a phantom and examined with respect to the ASTM Standard Test Method F2119-07 using spin echo (TR 500 ms, TE 20 ms), gradient echo (TR 300 ms, TE 15 ms, flip angle 30°) and otherwise constant imaging parameters (slice thickness 3 mm, field of view 250 x 250 mm, acquisition matrix 256 x 256 pixels, bandwidth 32 kHz) at 1.5 Tesla. Image acquisition was undertaken with a microchip positioned in the x- and z-direction, and in each case with a phase-encoding direction in the y- and z-direction. The artifact size was determined by a) measurement according to the test method F2119-07 using a homogeneous point operation, b) signal intensity measurement according to Matsuura et al. and c) pixel counts in the artifact according to Port and Pomper. There was a significant difference in artifact size between the three microchips tested (Wilcoxon p = 0.032). A two- to three-fold increase in microchip volume generated an up to 76% larger artifact, depending on the sequence type, phase-encoding direction and chip position relative to B0. The smaller the microchip geometry, the smaller the susceptibility artifact. Spin echoes (SE) generated smaller artifacts than gradient echoes (GE). In relation to the spatial measurement of the artifact, the switch in phase-encoding direction had less influence on the artifact size in GE- than in SE-sequences. However, the artifact shape and direction of SE-sequences can be changed by altering the phase. The artifact size caused by the microchip plays a major clinical role in the evaluation of MRI of the head, shoulder and neck regions.

  2. Predicting congenital heart defects: A comparison of three data mining methods.

    Directory of Open Access Journals (Sweden)

    Yanhong Luo

    Full Text Available Congenital heart defects (CHD) are one of the most common birth defects in China. Many studies have examined risk factors for CHD, but their predictive abilities have not been evaluated. In particular, few studies have attempted to predict risks of CHD from, necessarily unbalanced, population-based cross-sectional data. Therefore, we developed and validated machine learning models for predicting, before and during pregnancy, women's risks of bearing children with CHD. We compared the results of these models in a large-scale, comprehensive population-based retrospective cross-sectional epidemiological survey of birth defects in six counties in Shanxi Province, China, covering 2006 to 2008. This contained 78 cases of CHD among 33831 live births. We constructed nine synthetic variables to use in the models: maternal age, annual per capita income, family history, maternal history of illness, nutrition and folic acid deficiency, maternal illness in pregnancy, medication use in pregnancy, environmental risk factors in pregnancy, and unhealthy maternal lifestyle in pregnancy. The machine learning algorithms Weighted Support Vector Machine (WSVM) and Weighted Random Forest (WRF) were trained on, and a logistic regression (Logit) was fitted to, two-thirds of the data. Their predictive abilities were then tested in the remaining data. True positive rate (TPR), true negative rate (TNR), accuracy (ACC), area under the curve (AUC), G-means, and weighted accuracy (WTacc) were used to compare the classification performance of the models. Median values, from repeating the data partitioning 1000 times, were used in all comparisons. The TPR and TNR of the three classifiers were above 0.65 and 0.93, respectively, better than any reported in the literature. TPR, WTacc, AUC and G were highest for WSVM, showing that it performed best. All three models are precise enough to identify groups at high risk of CHD. They should all be considered for future investigations of other
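The comparison metrics for unbalanced classifiers like these can be computed from the four confusion-matrix counts. A minimal sketch follows; the weight `w` in the weighted accuracy and the example counts are hypothetical, since the paper's exact WTacc definition and confusion matrices are not given here.

```python
def classification_metrics(tp, fn, tn, fp, w=0.5):
    """Performance measures for an unbalanced binary classifier.

    w is a hypothetical weight for the weighted accuracy; the study's
    exact WTacc definition may differ.
    """
    tpr = tp / (tp + fn)                   # true positive rate (sensitivity)
    tnr = tn / (tn + fp)                   # true negative rate (specificity)
    acc = (tp + tn) / (tp + fn + tn + fp)  # raw accuracy
    g_mean = (tpr * tnr) ** 0.5            # geometric mean of TPR and TNR
    wt_acc = w * tpr + (1 - w) * tnr       # weighted accuracy
    return {"TPR": tpr, "TNR": tnr, "ACC": acc, "G": g_mean, "WTacc": wt_acc}

# Hypothetical counts for a rare-outcome test set (positives are scarce):
m = classification_metrics(tp=18, fn=8, tn=10500, fp=700)
print(m)
```

With only 78 positives in 33831 births, raw accuracy is dominated by the negatives, which is why G-mean and weighted accuracy are the more informative summaries here.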

  3. A comparison of two search methods for determining the scope of systematic reviews and health technology assessments.

    Science.gov (United States)

    Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan

    2012-01-01

    This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA) database. Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent confidence interval (CI) 6 percent to 72 percent] fewer), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]) and retrieved more unique hits than the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent] more). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks, being more time-consuming and less user-friendly, it generated more unique hits than the NHS Search Engine, retrieved on average fewer references and returned fewer irrelevant results.

  4. A comparison of sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces using HS-SPME.

    Science.gov (United States)

    Hough, Rachael; Archer, Debra; Probert, Christopher

    2018-01-01

    Disturbance to the hindgut microbiota can be detrimental to equine health. Metabolomics provides a robust approach to studying the functional aspect of hindgut microorganisms. Sample preparation is an important step towards achieving optimal results in the later stages of analysis. The preparation of samples is unique depending on the technique employed and the sample matrix to be analysed. Gas chromatography mass spectrometry (GCMS) is one of the most widely used platforms for the study of metabolomics, and until now an optimised method had not been developed for equine faeces. The aim was to compare sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces. Volatile organic compounds were determined by headspace solid phase microextraction gas chromatography mass spectrometry (HS-SPME-GCMS). Factors investigated were the mass of equine faeces, type of SPME fibre coating, vial volume and storage conditions. The resulting method differed from those developed for other species. Aliquots of 1000 or 2000 mg in 10 ml or 20 ml SPME headspace vials were optimal. Of the fibres tested, the extraction of VOCs should ideally be performed using a divinylbenzene-carboxen-polydimethylsiloxane (DVB-CAR-PDMS) SPME fibre. Storage of faeces for up to 12 months at -80 °C shared a greater percentage of VOCs with a fresh sample than the equivalent stored at -20 °C. An optimised method for extracting VOCs from equine faeces using HS-SPME-GCMS has been developed and will act as a standard to enable comparisons between studies. This work has also highlighted storage conditions as an important factor to consider in experimental design for faecal metabolomics studies.

  5. A comparison of statistical methods for genomic selection in a mice population

    Directory of Open Access Journals (Sweden)

    Neves Haroldo HR

    2012-11-01

    Full Text Available Abstract Background The availability of high-density panels of SNP markers has opened new perspectives for marker-assisted selection strategies, such that genotypes for these markers are used to predict the genetic merit of selection candidates. Because the number of markers is often much larger than the number of phenotypes, marker effect estimation is not a trivial task. The objective of this research was to compare the predictive performance of ten different statistical methods employed in genomic selection, by analyzing data from a heterogeneous stock mice population. Results For the five traits analyzed (W6W: weight at six weeks; WGS: growth slope; BL: body length; %CD8+: percentage of CD8+ cells; CD4+/CD8+: ratio between CD4+ and CD8+ cells), within-family predictions were more accurate than across-family predictions, although this superiority in accuracy varied markedly across traits. For within-family prediction, two kernel methods, Reproducing Kernel Hilbert Spaces Regression (RKHS) and Support Vector Regression (SVR), were the most accurate for W6W, while a polygenic model also had comparable performance. A form of ridge regression assuming that all markers contribute to the additive variance (RR_GBLUP) figured among the most accurate for WGS and BL, while two variable selection methods (LASSO and Random Forest, RF) had the greatest predictive abilities for %CD8+ and CD4+/CD8+. RF, RKHS, SVR and RR_GBLUP outperformed the remaining methods in terms of bias and inflation of predictions. Conclusions Methods with large conceptual differences reached very similar predictive abilities and a clear re-ranking of methods was observed as a function of the trait analyzed. Variable selection methods were more accurate than the remainder in the case of %CD8+ and CD4+/CD8+, and these traits are likely to be influenced by a smaller number of QTL than the remainder. Judged by their overall performance across traits and computational requirements, RR
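The idea behind ridge regression of marker effects (the core of RR_GBLUP) is the penalised normal equations (X'X + λI)β = X'y, where shrinkage λ lets many small effects be estimated jointly. A minimal closed-form sketch for just two centred marker covariates, solved with Cramer's rule (the toy genotype and phenotype data are hypothetical):

```python
def ridge_two_markers(x1, x2, y, lam):
    """Ridge solution (X'X + lam*I)^-1 X'y for two centred marker covariates,
    solved in closed form with Cramer's rule."""
    a11 = sum(v * v for v in x1) + lam
    a22 = sum(v * v for v in x2) + lam
    a12 = sum(u * v for u, v in zip(x1, x2))
    b1 = sum(u * v for u, v in zip(x1, y))
    b2 = sum(u * v for u, v in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det

# Toy data where the true effects are 2 and 1 (y = 2*x1 + 1*x2 exactly):
x1 = [1, -1, 1, -1]
x2 = [1, 1, -1, -1]
y = [3, -1, 1, -3]
print(ridge_two_markers(x1, x2, y, lam=0.0))  # (2.0, 1.0) - exact recovery
print(ridge_two_markers(x1, x2, y, lam=4.0))  # (1.0, 0.5) - effects shrunk
```

With thousands of markers the same system is solved with linear algebra libraries, and λ is tied to the ratio of residual to marker variance; the shrinkage is what makes estimation feasible when markers outnumber phenotypes.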

  6. A shipboard comparison of analytic methods for ballast water compliance monitoring

    Science.gov (United States)

    Bradie, Johanna; Broeg, Katja; Gianoli, Claudio; He, Jianjun; Heitmüller, Susanne; Curto, Alberto Lo; Nakata, Akiko; Rolke, Manfred; Schillak, Lothar; Stehouwer, Peter; Vanden Byllaardt, Julie; Veldhuis, Marcel; Welschmeyer, Nick; Younan, Lawrence; Zaake, André; Bailey, Sarah

    2018-03-01

    Promising approaches for indicative analysis of ballast water samples have been developed that require study in the field to examine their utility for determining compliance with the International Convention for the Control and Management of Ships' Ballast Water and Sediments. To address this gap, a voyage was undertaken on board the RV Meteor, sailing the North Atlantic Ocean from Mindelo (Cape Verde) to Hamburg (Germany) during June 4-15, 2015. Trials were conducted on local sea water taken up by the ship's ballast system at multiple locations along the trip, including open ocean, North Sea, and coastal water, to evaluate a number of analytic methods that measure the numeric concentration or biomass of viable organisms in two size categories (≥ 50 μm in minimum dimension: 7 techniques; ≥ 10 μm and < 50 μm), alongside scientific approaches (e.g. flow cytometry). Several promising indicative methods were identified that showed high correlation with microscopy, but allow much quicker processing and require less expert knowledge. This study is the first to concurrently use a large number of analytic tools to examine a variety of ballast water samples on board an operational ship in the field. Results are useful to identify the merits of each method and can serve as a basis for further improvement and development of tools and methodologies for ballast water compliance monitoring.

  7. A Comparison of Two Approaches for the Ruggedness Testing of an Analytical Method

    International Nuclear Information System (INIS)

    Maestroni, Britt

    2016-01-01

    As part of an initiative under the “Red Analitica de Latino America y el Caribe” (RALACA) network, the FAO/IAEA Food and Environmental Protection Laboratory validated a multi-residue method for pesticides in potato. One of the parameters to be assessed was the intra-laboratory robustness or ruggedness. The objective of this work was to implement a worked example for RALACA laboratories to test for the robustness (ruggedness) of an analytical method. As a conclusion to this study, it is evident that there is a need for harmonization of the definition of the terms robustness/ruggedness, the limits, the methodology and the statistical treatment of the generated data. A worked example for RALACA laboratories to test for the robustness (ruggedness) of an analytical method will soon be posted on the RALACA website (www.red-ralaca.net). This study was carried out with collaborators from LVA (Austria), the University of Antwerp (Belgium), the University of Leuven (Belgium), the Universidad de la Republica (Uruguay) and Agilent Technologies.

  8. Comparison of different base flow separation methods in a lowland catchment

    Directory of Open Access Journals (Sweden)

    S. Uhlenbrook

    2009-11-01

    Full Text Available Assessment of water resources available in different storages and moving along different pathways in a catchment is important for its optimal use and protection, and also for the prediction of floods and low flows. Moreover, understanding of the runoff generation processes is essential for assessing the impacts of climate and land use changes on the hydrological response of a catchment. Many methods for base flow separation exist, but hardly one focuses on the specific behaviour of temperate lowland areas. This paper presents the results of a base flow separation study carried out in a lowland area in the Netherlands. In this study, field observations of precipitation, groundwater and surface water levels and discharges, together with tracer analysis are used to understand the runoff generation processes in the catchment. Several tracer and non-tracer based base flow separation methods were applied to the discharge time series, and their results are compared.

    The results show that groundwater levels react fast to precipitation events in this lowland area with shallow groundwater tables. Moreover, a good correlation was found between groundwater levels and discharges, suggesting that most of the measured discharge, also during floods, comes from groundwater storage. Using tracer hydrological approaches, it was estimated that approximately 90% of the total discharge is groundwater displaced by event water infiltrating mainly in the northern part of the catchment, and only the remaining 10% is surface runoff. The impact of remote recharge causing displacement of near-channel groundwater during floods could also be supported by hydraulic approximations. The results further show that when base flow separation is meant to identify groundwater contributions to stream flow, process-based methods (e.g. the rating curve method; Kliner and Knezek, 1974) are more reliable than other simple non-tracer based methods. Also, the recursive filtering method
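Among the simple non-tracer techniques compared in such studies, a common choice is the one-parameter recursive digital filter of Lyne and Hollick. The sketch below shows a single forward pass; the filter parameter 0.925 is a conventional default from the literature, not a value taken from this study, and the discharge series is hypothetical.

```python
def baseflow_filter(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick recursive digital filter.

    q is a list of discharge values at equal time steps; returns the base
    flow series. Quickflow is constrained to be non-negative, so base flow
    never exceeds total flow.
    """
    qf = 0.0           # quickflow at the previous time step
    base = [q[0]]      # assume the first value is all base flow
    for t in range(1, len(q)):
        qf = max(alpha * qf + 0.5 * (1 + alpha) * (q[t] - q[t - 1]), 0.0)
        base.append(max(q[t] - qf, 0.0))
    return base

# During a recession with no new events the filtered base flow tracks
# total flow; a sharp rise is mostly attributed to quickflow.
print(baseflow_filter([1.0, 1.0, 10.0, 6.0, 3.0, 2.0, 1.5, 1.2]))
```

In practice the filter is run in several forward and backward passes and calibrated against tracer-based estimates, which is exactly the kind of cross-check this study performs.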

  9. A comparison of two methods to assess audience-induced changes in male mate choice

    Directory of Open Access Journals (Sweden)

    Madlen ZIEGE, Carmen HENNIGE-SCHULZ, Frauke MUECKSCH, David BIERBACH, Ralph TIEDEMANN, Bruno STREIT, Martin PLATH

    2012-02-01

    Full Text Available Multidirectional communicative interactions in social networks can have a profound effect on mate choice behavior. Male Atlantic molly Poecilia mexicana exhibit weaker mating preferences when an audience male is presented. This could be a male strategy to reduce sperm competition risk: interacting more equally with different females may be advantageous because rivals might copy mate choice decisions. In line with this hypothesis, a previous study found males to show a strong audience effect when being observed while exercising mate choice, but not when the rival was presented only before the choice tests. Audience effects on mate choice decisions have been quantified in poeciliid fishes using association preference designs, but it remains unknown if patterns found from measuring association times translate into actual mating behavior. Thus, we created five audience treatments simulating different forms of perceived sperm competition risk and determined focal males' mating preferences by scoring pre-mating (nipping) and mating behavior (gonopodial thrusting). Nipping did not reflect the pattern that was found when association preferences were measured, while a very similar pattern was uncovered in thrusting behavior. The strongest response was observed when the audience could eavesdrop on the focal male's behavior. A reduction in the strength of focal males' preferences was also seen after the rival male had an opportunity to mate with the focal male's preferred mate. In comparison, the reduction of mating preferences in response to an audience was greater when measuring association times than actual mating behavior. Measuring direct sexual interactions between the focal male and both stimulus females reflects not only the male's motivational state but also the females' behavior, such as avoidance of male sexual harassment [Current Zoology 58 (1): 84–94, 2012].

  10. A comparison of two methods to assess audience-induced changes in male mate choice

    Institute of Scientific and Technical Information of China (English)

    Madlen ZIEGE; Carmen HENNIGE-SCHULZ; Frauke MUECKSCH; David BIERBACH; Ralph TIEDEMANN; Bruno STREIT; Martin PLATH

    2012-01-01

    Multidirectional communicative interactions in social networks can have a profound effect on mate choice behavior. Male Atlantic molly Poecilia mexicana exhibit weaker mating preferences when an audience male is presented. This could be a male strategy to reduce sperm competition risk: interacting more equally with different females may be advantageous because rivals might copy mate choice decisions. In line with this hypothesis, a previous study found males to show a strong audience effect when being observed while exercising mate choice, but not when the rival was presented only before the choice tests. Audience effects on mate choice decisions have been quantified in poeciliid fishes using association preference designs, but it remains unknown if patterns found from measuring association times translate into actual mating behavior. Thus, we created five audience treatments simulating different forms of perceived sperm competition risk and determined focal males' mating preferences by scoring pre-mating (nipping) and mating behavior (gonopodial thrusting). Nipping did not reflect the pattern that was found when association preferences were measured, while a very similar pattern was uncovered in thrusting behavior. The strongest response was observed when the audience could eavesdrop on the focal male's behavior. A reduction in the strength of focal males' preferences was also seen after the rival male had an opportunity to mate with the focal male's preferred mate. In comparison, the reduction of mating preferences in response to an audience was greater when measuring association times than actual mating behavior. Measuring direct sexual interactions between the focal male and both stimulus females reflects not only the male's motivational state but also the females' behavior, such as avoidance of male sexual harassment [Current Zoology 58 (1): 84-94, 2012].

  11. [Detection of RAS genes mutation using the Cobas® method in a private laboratory of pathology: Medical and economical study in comparison to a public platform of molecular biology of cancer].

    Science.gov (United States)

    Albertini, Anne-Flore; Raoux, Delphine; Neumann, Frédéric; Rossat, Stéphane; Tabet, Farid; Pedeutour, Florence; Duranton-Tanneur, Valérie; Kubiniek, Valérie; Vire, Olivier; Weinbreck, Nicolas

    In France, determination of the mutation status of the RAS genes for predicting response to anti-EGFR targeted treatments is carried out by the public platforms of molecular biology of cancer created by the French National Cancer Institute. This study aims to demonstrate the feasibility of these analyses in a private pathology laboratory (MEDIPATH) under the requirements of accreditation. We retrospectively studied the mutation status of the KRAS and NRAS genes in 163 cases of metastatic colorectal cancer using the Cobas® technique. We compared our results to those obtained prospectively through pyrosequencing and allelic discrimination by the genetic laboratory of solid tumors at the Nice University Hospital (PACA-EST regional platform). The results of the two series were concordant: positive correlation of 98.7%, negative correlation of 93.1%, and overall correlation of 95.7% (Kappa=0.92). This study demonstrates the feasibility of molecular analysis in a private pathology laboratory. As this practice requires a high level of guarantee, accreditation according to the NF-EN-ISO15189 quality standard is essential. Conducting molecular analysis in this setting avoids routing the sample and the result between the pathology laboratory and the platform, which reduces the overall turnaround time. In conclusion, transferring some analyses from the platforms to private pathology laboratories would relieve the platforms of part of the routine testing and let them concentrate their efforts on developing the new analyses continually required for personalized medicine. Copyright © 2017. Published by Elsevier Masson SAS.
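    The agreement statistics reported above (positive, negative, and overall correlation, plus Kappa = 0.92) follow standard formulas for inter-assay agreement. As a minimal sketch, Cohen's kappa for a 2x2 concordance table between two assays can be computed as follows (the function name and the counts in the usage note are illustrative, not the study's actual data):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table between two assays.

    a: both assays call mutated        b: assay 1 mutated, assay 2 wild-type
    c: assay 1 wild-type, assay 2 mutated        d: both call wild-type
    """
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement: sum over classes of the product of marginal frequencies
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)
```

    For example, a hypothetical table with 60 concordant mutated, 98 concordant wild-type, and 5 discordant cases out of 163 yields a kappa above 0.9, i.e. "almost perfect" agreement on the usual Landis-Koch scale.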

  12. A comparative study of different methods for calculating electronic transition rates

    Science.gov (United States)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) the nonequilibrium Fermi's golden rule, (2) the mixed quantum-classical Liouville method, (3) the mean-field (Ehrenfest) mixed quantum-classical method, and (4) the fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method, which coincides with the fully quantum-mechanically exact result for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to the Ehrenfest and fewest switches surface-hopping methods.
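    For orientation, the equilibrium golden-rule rate that the nonequilibrium variant generalizes is the textbook expression (the symbols below are the standard ones, not notation taken from this paper):

```latex
k_{i \to f} = \frac{2\pi}{\hbar} \left| \langle f | \hat{V} | i \rangle \right|^{2} \rho(E_f)
```

    where \(\hat{V}\) is the electronic coupling between the diabatic states and \(\rho(E_f)\) the density of final states; broadly speaking, the nonequilibrium version replaces this static rate with a time-dependent rate evaluated with respect to the initial nonequilibrium nuclear state.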

  13. An Algorithmic Comparison of the Hyper-Reduction and the Discrete Empirical Interpolation Method for a Nonlinear Thermal Problem

    Directory of Open Access Journals (Sweden)

    Felix Fritzen

    2018-02-01

    A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails to provide significant gains in computational efficiency for nonlinear problems. Renowned methods for reducing the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intense finite element simulations.
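    The step that distinguishes (D)EIM from the plain Galerkin RB formulation is the greedy selection of interpolation indices at which the nonlinear term is sampled. A minimal sketch of the standard DEIM index selection (generic textbook algorithm, not code from the paper):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection.

    U: (n, m) basis matrix for the nonlinear term (e.g. POD modes).
    Returns m distinct interpolation indices.
    """
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector on the points chosen so far ...
        coeff = np.linalg.solve(U[idx, :j], U[idx, j])
        # ... and pick the point where the interpolation residual is largest
        residual = U[:, j] - U[:, :j] @ coeff
        idx.append(int(np.argmax(np.abs(residual))))
    return np.array(idx)
```

    In the online phase the nonlinear term is then evaluated only at these m indices instead of at all n degrees of freedom, which is where the speed-up over plain Galerkin RB comes from.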

  14. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    Science.gov (United States)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques most prominently used for solving unconstrained optimization problems, owing to its simplicity, low memory requirements, and good convergence properties. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under both exact and inexact line searches under given conditions. Theoretically, the proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. Computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.
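    As background, a generic nonlinear CG iteration under a backtracking (Armijo) inexact line search can be sketched as below; the Fletcher-Reeves coefficient is used as a stand-in, since the NRM1 hybrid coefficient itself is defined in the paper:

```python
import numpy as np

def cg_minimize(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with an Armijo backtracking (inexact) line search.

    Illustrative sketch; beta is Fletcher-Reeves, not the NRM1 coefficient.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking: shrink alpha until sufficient decrease holds
        alpha, c1, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= rho
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return x
```

    An exact line search would instead minimize f(x + alpha*d) over alpha at every iteration, which is usually far more expensive per step; the comparison of the two regimes in the paper concerns precisely this trade-off.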

  15. Attenuation correction for hybrid MR/PET scanners: a comparison study

    Energy Technology Data Exchange (ETDEWEB)

    Rota Kops, Elena [Forschungszentrum Jülich GmbH, Jülich (Germany); Ribeiro, Andre Santos [Imperial College London, London (United Kingdom); Caldeira, Liliana [Forschungszentrum Jülich GmbH, Jülich (Germany); Hautzel, Hubertus [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lukas, Mathias [Technische Universitaet Muenchen, Munich (Germany); Antoch, Gerald [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lerche, Christoph; Shah, Jon [Forschungszentrum Jülich GmbH, Jülich (Germany)

    2015-05-18

    Attenuation correction of PET data acquired in hybrid MR/PET scanners is still a challenge. Different methods have been adopted by several groups to obtain reliable attenuation maps (mu-maps). In this study we compare three methods: MGH, UCL, and Neural-Network. The MGH method is based on an MR/CT template obtained with the SPM8 software. The UCL method uses a database of MR/CT pairs. Both generate mu-maps from MP-RAGE images. The feed-forward neural network from Juelich (NN-Juelich) requires two UTE images; it generates segmented mu-maps. Data from eight subjects (S1-S8) measured in the Siemens 3T MR-BrainPET scanner were used. Corresponding CT images were acquired. The resulting mu-maps were compared against the CT-based mu-maps for each subject and method.
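    Segmented mu-maps such as the NN-Juelich output are typically scored against the CT-derived reference per tissue class; one standard figure of merit is the Dice coefficient. A minimal sketch (a generic similarity metric, not the study's evaluation code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity between two binary masks,
    e.g. the bone class of a generated vs. a CT-based mu-map."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    denom = mask_a.sum() + mask_b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(mask_a, mask_b).sum() / denom
```

    A Dice value of 1.0 indicates identical segmentations, 0.0 indicates no overlap; continuous mu-maps (MGH, UCL) would instead be compared voxel-wise, e.g. via relative attenuation differences.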