WorldWideScience

Sample records for methods comparison study

  1. A simple statistical method for catch comparison studies

    DEFF Research Database (Denmark)

    Holst, René; Revill, Andrew

    2009-01-01

    For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve ... with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns. Crown Copyright (C...
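
The proportion-at-length idea in this record can be illustrated without the paper's random-effects machinery. The following is a minimal fixed-effects sketch (not the authors' GLMM, which also models haul-level random effects): it fits a polynomial on the logit scale to the proportion caught in the test codend, with binomial-information weights. All function names and the toy data are illustrative.

```python
import numpy as np

def fit_catch_curve(length, n_test, n_control, degree=2):
    """Fit a polynomial on the logit scale to the proportion of fish
    (per length class) caught in the test codend out of both gears.
    Fixed-effects sketch only -- the paper's GLMM adds random haul
    effects that are omitted here."""
    total = n_test + n_control
    # empirical proportions, kept away from 0/1 for the logit transform
    p = np.clip(n_test / total, 1e-3, 1 - 1e-3)
    logit = np.log(p / (1 - p))
    # weighted least squares: weight by binomial information n*p*(1-p)
    w = total * p * (1 - p)
    coefs = np.polyfit(length, logit, degree, w=np.sqrt(w))
    def predict(x):
        eta = np.polyval(coefs, x)
        return 1.0 / (1.0 + np.exp(-eta))
    return coefs, predict

# toy data: proportion in the test codend rises with length
length = np.arange(20, 60, 2, dtype=float)
true_p = 1 / (1 + np.exp(-(length - 40) / 5))
rng = np.random.default_rng(0)
n_tot = np.full_like(length, 200)
n_test = rng.binomial(n_tot.astype(int), true_p).astype(float)
coefs, predict = fit_catch_curve(length, n_test, n_tot - n_test)
```

A confidence band, as in the paper, would additionally require the covariance of the fitted coefficients.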

  2. A comparison of methods in a behaviour study of the South African ...

    African Journals Online (AJOL)

    A COMPARISON OF METHODS IN A BEHAVIOUR STUDY OF THE ... Three methods are outlined in this paper and the results obtained from each method were .... There was definitely no aggressive response towards the Sky pointing mate.

  3. Study and comparison of different methods control in light water critical facility

    International Nuclear Information System (INIS)

    Michaiel, M.L.; Mahmoud, M.S.

    1980-01-01

    The control of nuclear reactors, may be studied using several control methods, such as control by rod absorbers, by inserting or removing fuel rods (moderator cavities), or by changing reflector thickness. Every method has its advantage, the comparison between these different methods and their effect on the reactivity of a reactor is the purpose of this work. A computer program is written by the authors to calculate the critical radius and worth in any case of the three precedent methods of control

  4. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
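
The step the abstract calls "key" in all three algorithms can be sketched compactly: stack impulse-response Markov parameters into a block-Hankel matrix and read the system order off its singular values. This is the essence of the ERA family, shown here for scalar outputs with hypothetical names; it is not the paper's full ERA/OM, N4SID or MOESP implementation.

```python
import numpy as np

def era_order(markov, rows=5, cols=5, tol=1e-8):
    """ERA-style sketch: build a Hankel matrix from impulse-response
    Markov parameters h_0, h_1, ... and use its SVD. Approximating the
    (extended) observability matrix from such a structured data matrix
    is the shared computational core of ERA/OM, N4SID and MOESP."""
    H = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    s = np.linalg.svd(H, compute_uv=False)
    # number of singular values above a relative threshold = model order
    return int(np.sum(s > tol * s[0])), s

# Markov parameters of a first-order system: h_k = c * a**k * b,
# whose Hankel matrix has rank 1
a, b, c = 0.8, 1.0, 2.0
markov = [c * a**k * b for k in range(12)]
order, s = era_order(markov)
```

For noisy experimental data the threshold choice is the delicate part; the compared algorithms differ mainly in how the data matrix is formed and weighted before this SVD.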

  5. Comparison of gas dehydration methods based on energy ...

    African Journals Online (AJOL)

    Comparison of gas dehydration methods based on energy consumption. ... PROMOTING ACCESS TO AFRICAN RESEARCH ... This study compares three conventional methods of natural gas (Associated Natural Gas) dehydration to carry out ...

  6. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, the data from The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP) and also The Price of SMR 20 Rubber Type (cents/kg), three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction errors, quantified by the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasting for Exchange Rates, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
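
The three accuracy measures used in this record are standard and easy to state exactly. A minimal sketch (the variable names and toy numbers are illustrative, not the study's data):

```python
import numpy as np

def mse(actual, forecast):
    """Mean Squared Error: average of squared forecast errors."""
    e = np.asarray(actual) - np.asarray(forecast)
    return float(np.mean(e ** 2))

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent of the actual values."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((a - f) / a)) * 100)

def mad(actual, forecast):
    """Mean Absolute Deviation: average of absolute forecast errors."""
    e = np.asarray(actual) - np.asarray(forecast)
    return float(np.mean(np.abs(e)))

actual = [100, 110, 120, 130]
forecast = [98, 112, 118, 135]
```

Comparing two forecasting methods then reduces to comparing these three numbers on a held-out segment of each series.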

  7. Comparison of n-γ discrimination by zero-crossing and digital charge comparison methods

    International Nuclear Information System (INIS)

    Wolski, D.; Moszynski, M.; Ludziejewski, T.; Johnson, A.; Klamra, W.; Skeppstedt, Oe.

    1995-01-01

    A comparative study of n-γ discrimination by the digital charge comparison and zero-crossing methods was carried out for a BC501A liquid scintillator, 130 mm in diameter and 130 mm high, coupled to a 130 mm diameter XP4512B photomultiplier. The high quality of the tested detector was reflected in a photoelectron yield of 2300±100 phe/MeV and excellent n-γ discrimination properties, with energy discrimination thresholds corresponding to very low neutron (or electron) energies. The superiority of the zero-crossing (Z/C) method was demonstrated for the n-γ discrimination method alone, as well as for the simultaneous separation by the pulse shape discrimination and time-of-flight methods, down to about 30 keV recoil electron energy. The digital charge comparison method fails over a large dynamic range of energy, and its separation is only weakly improved by the time-of-flight method at low energies. (orig.)

  8. Comparison of two methods of quantitation in human studies of biodistribution and radiation dosimetry

    International Nuclear Information System (INIS)

    Smith, T.

    1992-01-01

    A simple method of quantitating organ radioactivity content for dosimetry purposes, based on relationships between organ count rate and the initial whole-body count rate, has been compared with a more rigorous method of absolute quantitation using a transmission scanning technique. Comparisons were on the basis of organ uptake (% administered activity) and resultant organ radiation doses (mGy MBq⁻¹) in 6 normal male volunteers given a 99mTc-labelled myocardial perfusion imaging agent intravenously at rest and following exercise. In these studies, estimates of individual organ uptakes by the simple method were in error by between +24 and -16% compared with the more accurate method. However, errors on organ dose values were somewhat less, and the effective dose was correct to within 3%. (Author)

  9. ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON

    Directory of Open Access Journals (Sweden)

    Liliana liliana

    2007-01-01

    Full Text Available In computer graphics applications, a method often used to produce realistic images is ray tracing. Ray tracing models not only local illumination but also global illumination. Local illumination accounts for ambient, diffuse and specular effects only, while global illumination also accounts for mirroring and transparency; local illumination considers effects from the lamp(s) only, whereas global illumination also considers effects from other object(s). The objects usually modelled are primitive objects and mesh objects. The advantage of mesh modelling is its varied, interesting and realistic shapes. A mesh contains many primitive objects such as triangles or (rarely) squares. A problem in mesh object modelling is long rendering time, because every ray must be checked against a large number of the mesh's triangles. Together with checking rays spawned by other objects, the number of rays traced increases, and rendering time increases with it. To solve this problem, in this research new methods are developed to make the rendering process of mesh objects faster. The new methods are angle comparison and distance comparison. These methods are used to reduce the number of ray checks: rays predicted not to intersect the mesh are never tested for intersection with it. With angle comparison, if a small comparison angle is used, the rendering process is fast. This method has a disadvantage: if the triangles are large, some triangles will be corrupted. If the comparison angle is made bigger, mesh corruption can be avoided, but the rendering time will be longer than without comparison. With distance comparison, the rendering time is less than without comparison, and no triangle is corrupted.
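
The angle-comparison idea above can be sketched in a few lines: compare the angle between the ray direction and the direction to each triangle, and run intersection tests only on triangles inside the cone. This sketch tests triangle centroids for brevity (a real implementation would bound whole triangles, which is exactly the corruption issue the abstract mentions); all names are illustrative.

```python
import numpy as np

def cull_by_angle(ray_origin, ray_dir, centroids, max_angle_deg):
    """Return a boolean mask of triangles (by centroid) lying within
    max_angle_deg of the ray direction; only these survivors would be
    passed to the ray-triangle intersection test."""
    d = ray_dir / np.linalg.norm(ray_dir)
    v = centroids - ray_origin                      # origin -> centroid
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    cos_angle = v @ d
    # inside the cone  <=>  angle <= max_angle  <=>  cos >= cos(max_angle)
    return cos_angle >= np.cos(np.radians(max_angle_deg))

origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])
centroids = np.array([[0.0, 0.0, 5.0],    # straight ahead
                      [0.1, 0.0, 5.0],    # slightly off-axis
                      [5.0, 0.0, 0.0],    # 90 degrees off
                      [0.0, 0.0, -5.0]])  # behind the ray
keep = cull_by_angle(origin, direction, centroids, max_angle_deg=10.0)
```

The trade-off described in the abstract is visible here: a small cone culls aggressively (fast, but large triangles near the cone edge may be wrongly skipped), a large cone culls little.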

  10. Aggregation Methods in International Comparisons

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2001-01-01

    This paper reviews the progress that has been made over the past decade in understanding the nature of the various multilateral international comparison methods. Fifteen methods are discussed and subjected to a system of ten tests. In addition, attention is paid to recently developed

  11. Comparison of Nested-PCR technique and culture method in ...

    African Journals Online (AJOL)

    2010-04-05

    Apr 5, 2010 ... Full Length Research Paper. Comparison of ... The aim of the present study was to evaluate the diagnostic value of nested PCR in genitourinary ... method. Based on obtained results, the positivity rate of urine samples in this study was 5.0% by using culture and PCR methods and 2.5% for acid fast staining.

  12. Improved methods for the mathematically controlled comparison of biochemical systems

    Directory of Open Access Journals (Sweden)

    Schwacke John H

    2004-06-01

    Full Text Available Abstract The method of mathematically controlled comparison provides a structured approach for the comparison of alternative biochemical pathways with respect to selected functional effectiveness measures. Under this approach, alternative implementations of a biochemical pathway are modeled mathematically, forced to be equivalent through the application of selected constraints, and compared with respect to selected functional effectiveness measures. While the method has been applied successfully in a variety of studies, we offer recommendations for improvements to the method that (1) relax requirements for definition of constraints sufficient to remove all degrees of freedom in forming the equivalent alternative, (2) facilitate generalization of the results, thus avoiding the need to condition those findings on the selected constraints, and (3) provide additional insights into the effect of selected constraints on the functional effectiveness measures. We present improvements to the method and related statistical models, apply the method to a previously conducted comparison of network regulation in the immune system, and compare our results to those previously reported.

  13. Gene Expression Profiles for Predicting Metastasis in Breast Cancer: A Cross-Study Comparison of Classification Methods

    Directory of Open Access Journals (Sweden)

    Mark Burton

    2012-01-01

    Full Text Available Machine learning has increasingly been used with microarray gene expression data and for the development of classifiers using a variety of methods. However, method comparisons in cross-study datasets are very scarce. This study compares the performance of seven classification methods and the effect of voting for predicting metastasis outcome in breast cancer patients, in three situations: within the same dataset or across datasets on similar or dissimilar microarray platforms. Combining classification results from seven classifiers into one voting decision performed significantly better during internal validation as well as external validation in similar microarray platforms than the underlying classification methods. When validating between different microarray platforms, random forest, another voting-based method, proved to be the best performing method. We conclude that voting based classifiers provided an advantage with respect to classifying metastasis outcome in breast cancer patients.
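
The voting scheme this record evaluates can be stated concretely: each classifier casts one label per sample, and the majority label wins. A minimal stand-alone sketch (pure Python, illustrative labels; the study's seven classifiers and microarray data are not reproduced here):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists into one voted label per sample.
    predictions: list of classifiers, each a list of labels per sample."""
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        voted.append(votes.most_common(1)[0][0])
    return voted

# rows = classifiers, columns = samples (1 = metastasis, 0 = no metastasis)
preds = [
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
]
consensus = majority_vote(preds)
```

An odd number of voters, as here, avoids ties; random forest, mentioned in the abstract, applies the same principle internally over decision trees.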

  14. Comparison of microstickies measurement methods. Part I, sample preparation and measurement methods

    Science.gov (United States)

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R.A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    Recently, we completed a project on the comparison of macrostickies measurement methods. Based on the success of the project, we decided to embark on this new project on comparison of microstickies measurement methods. When we started this project, there were some concerns and doubts principally due to the lack of an accepted definition of microstickies. However, we...

  15. Evaluation of analytical reconstruction with a new gap-filling method in comparison to iterative reconstruction in [11C]-raclopride PET studies

    International Nuclear Information System (INIS)

    Tuna, U.; Johansson, J.; Ruotsalainen, U.

    2014-01-01

    The aim of the study was (1) to evaluate the reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BPND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with
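
The gap-filling problem this record addresses can be illustrated with a deliberately simplified stand-in: fill the missing detector-gap bins of one sinogram row by interpolating from the valid bins. This 1-D linear sketch is not the paper's transradial bicubic method; it only shows what "gap filling before analytical reconstruction" means.

```python
import numpy as np

def fill_gaps_1d(row, gap_mask):
    """Fill masked (gap) bins of one sinogram row by linear interpolation
    from the valid bins -- a 1-D linear stand-in for the paper's
    transradial bicubic scheme, for illustration only."""
    x = np.arange(row.size)
    valid = ~gap_mask
    filled = row.copy()
    filled[gap_mask] = np.interp(x[gap_mask], x[valid], row[valid])
    return filled

# two missing bins (zeros) flagged by the gap mask
row = np.array([1.0, 2.0, 0.0, 0.0, 5.0, 6.0])
mask = np.array([False, False, True, True, False, False])
filled = fill_gaps_1d(row, mask)
```

In the real HRRT case the gaps form 2-D regions in each sinogram plane, which is why a bicubic (2-D) interpolation along the transradial direction is used instead.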

  16. Comparison of sampling methods for the assessment of indoor microbial exposure

    DEFF Research Database (Denmark)

    Frankel, M; Timm, Michael; Hansen, E W

    2012-01-01

    revealed. This study thus facilitates comparison between methods and may therefore be used as a frame of reference when studying the literature or when conducting further studies on indoor microbial exposure. Results also imply that the relatively simple EDC method for the collection of settled dust may...

  17. Experience from the comparison of two PSA-studies

    International Nuclear Information System (INIS)

    Holmberg, J.; Pulkkinen, U.

    2001-03-01

    Two probabilistic safety assessments (PSA) made for nearly identical reactor units (Forsmark 3 and Oskarshamn 3) have been compared. Two different analysis teams made the PSAs, and the analyses turned out quite different. The goal of the study is to identify, clarify and explain differences between PSA-studies. The purpose is to understand limitations and uncertainties in PSA, to explain reasons for differences between PSA-studies, and to give recommendations for comparison of PSA-studies and for improving the PSA-methodology. The reviews have been made by reading PSA-documentation, using the computer model and interviewing persons involved in the projects. The method and findings have been discussed within the project group. Both the PSA-project and various parts of the PSA-model have been reviewed. A major finding was that the two projects had different purposes and thus had different resources, scope and even methods in their studies. The study shows that comparison of PSA results from different plants is normally not meaningful. It takes a very deep knowledge of the PSA studies to make a comparison of the results, and usually one has to ensure that the compared studies have the same scope and are based on the same analysis methods. Harmonisation of the PSA-methodology is recommended in the presentation of results, presentation of methods, scope, main limitations and assumptions, and definitions for end states, initiating events and common cause failures. This would facilitate the comparison of the studies. Methods for validation of PSA for different application areas should be developed. The developed PSA review standards can be applied for a general validation of a study. The real feasibility of PSA can be evaluated only through practical applications. The PSA-documentation and models can be developed to facilitate the communication between PSA-experts and users. In any application, consultation with the PSA-expert is however needed. Many

  18. Comparison of different methods for the solution of sets of linear equations

    International Nuclear Information System (INIS)

    Bilfinger, T.; Schmidt, F.

    1978-06-01

    The application of the conjugate-gradient methods as novel general iterative methods for the solution of sets of linear equations with symmetric system matrices led to this paper, in which these methods are compared with the conventional, differently accelerated Gauss-Seidel iteration. In addition, the direct Cholesky method was also included in the comparison. The studies referred mainly to memory requirements, computing time, speed of convergence, and accuracy under different conditions of the system matrices, from which the sensitivity of the methods to the influence of truncation errors may also be recognized. (orig.)
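
The iterative method at the centre of this comparison is compact enough to state in full. A minimal unpreconditioned conjugate-gradient sketch for a symmetric positive definite system, checked against a direct solve (standing in for the Cholesky route; `np.linalg.solve` is used here rather than an explicit Cholesky factorization):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate-gradient iteration for a symmetric positive
    definite system Ax = b (no preconditioning)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # conjugate new direction
        rs = rs_new
    return x

# small SPD test system; direct solve gives the reference answer
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x_cg = conjugate_gradient(A, b)
x_direct = np.linalg.solve(A, b)
```

In exact arithmetic CG terminates in at most n steps; in floating point its sensitivity to rounding (truncation) errors is precisely the behaviour the paper examines.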

  19. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    Science.gov (United States)

    Morimoto, Emi; Namerikawa, Susumu

    The most characteristic trend in bidding and pricing behaviour in recent years is the increasing number of bidders just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in public works bids in Japan, it is therefore the difference between the criteria-for-low-price-bidding-investigations price and the execution price. Virtually, bidders' strategies and behaviour have been controlled by public engineers' budgets. Estimation and bid are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004. On another front, the accumulated estimation method is one of the general methods in public works; so, there are two types of standard estimation methods in Japan. In this study, we performed a statistical analysis of the bid information of civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. It presents several issues showing that bidding and pricing behaviour is related to the estimation method used for public works bids in Japan. The two types of standard estimation methods produce different results in the number of bidders (the bid/no-bid decision) and the distribution of bid prices (the mark-up decision). The comparison of the distribution of bid prices showed that the percentage of bids concentrated at the criteria for low-price bidding investigations tends to be higher in large-sized public works under the unit-price-type estimation method, compared with the accumulated estimation method. On the other hand, the number of bidders for public works estimated by the unit-price method tends to increase significantly; the unit-price-type estimation is likely to have been one of the factors for construction companies in deciding whether to participate in the biddings.

  20. Soybean allergen detection methods--a comparison study

    DEFF Research Database (Denmark)

    Pedersen, M. Højgaard; Holzhauser, T.; Bisson, C.

    2008-01-01

    Soybean containing products are widely consumed, thus reliable methods for detection of soy in foods are needed in order to make appropriate risk assessment studies to adequately protect soy allergic patients. Six methods were compared using eight food products with a declared content of soy...

  1. Quantitative comparison of two particle tracking methods in fluorescence microscopy images

    CSIR Research Space (South Africa)

    Mabaso, M

    2013-09-01

    Full Text Available that cannot be analysed efficiently by means of manual analysis. In this study we compare the performance of two computer-based tracking methods for tracking of bright particles in fluorescence microscopy image sequences. The methods under comparison are...

  2. Comparison of Thermal Properties Measured by Different Methods

    International Nuclear Information System (INIS)

    Sundberg, Jan; Kukkonen, Ilmo; Haelldahl, Lars

    2003-04-01

    one or both methods cannot be excluded. For future investigations a set of thermal conductivity standard materials should be selected for testing using the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained and suitable for making samples of different shapes and volumes adjusted to different measurement techniques. Because of large obtained individual variations in the results, comparisons of different methods should continue and include measurements of temperature dependence of thermal properties, especially the specific heat. This should cover the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of temperature dependence of the present rocks

  3. Comparison of Thermal Properties Measured by Different Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan [Geo Innova AB, Linkoeping (Sweden); Kukkonen, Ilmo [Geological Survey of Finland, Helsinki (Finland); Haelldahl, Lars [Hot Disk AB, Uppsala (Sweden)

    2003-04-01

    one or both methods cannot be excluded. For future investigations a set of thermal conductivity standard materials should be selected for testing using the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained and suitable for making samples of different shapes and volumes adjusted to different measurement techniques. Because of large obtained individual variations in the results, comparisons of different methods should continue and include measurements of temperature dependence of thermal properties, especially the specific heat. This should cover the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of temperature dependence of the present rocks.

  4. Comparison of Standard and Fast Charging Methods for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Petr Chlebis

    2014-01-01

    Full Text Available This paper describes a comparison of standard and fast charging methods used in the field of electric vehicles, and also a comparison of their efficiency in terms of electrical energy consumption. The comparison was performed on a three-phase buck converter designed for an EV fast-charging station. The results were obtained by both mathematical and simulation methods. The laboratory model of the entire physical application, which will later be used to verify the simulation results, is currently being built.

  5. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    Science.gov (United States)

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are usually calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to the other group when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of estimated individual networks inherent in existing methods such as the Fisher Z test, and the issue of JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, the methods are applied to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., a normal control group and a patient group; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
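
The conventional baseline this record argues against is easy to make concrete. A minimal sketch of the Fisher Z test for the difference between two independent Pearson correlations (the standard z = (z1 - z2)/SE form with SE = sqrt(1/(n1-3) + 1/(n2-3)); the toy inputs are illustrative):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Fisher Z test statistic for the difference between two
    independent Pearson correlations r1 (sample size n1) and
    r2 (sample size n2)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher r-to-z transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# e.g. an edge correlated at r = 0.8 in controls vs r = 0.3 in patients
z = fisher_z_test(0.8, 50, 0.3, 50)
```

Applied edge-by-edge to group-averaged networks, this test inherits exactly the inter-subject variability problem that motivates GMGLASS.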

  6. Comparison of three boosting methods in parent-offspring trios for genotype imputation using simulation study

    Directory of Open Access Journals (Sweden)

    Abbas Mikhchi

    2016-01-01

    Full Text Available Abstract Background: Genotype imputation is an important process of predicting unknown genotypes, which uses a reference population with dense genotypes to predict missing genotypes for both human and animal genetic variation at low cost. Machine learning methods, especially boosting methods, have been used in genetic studies to explore the underlying genetic profile of disease and to build models capable of predicting missing values of a marker. Methods: In this study, strategies and factors affecting the imputation accuracy of parent-offspring trios were compared, from a lower-density SNP panel (5 K) to a high-density (10 K) SNP panel, using three different boosting methods, namely TotalBoost (TB), LogitBoost (LB) and AdaBoost (AB). The methods were employed on simulated data to impute the un-typed SNPs in parent-offspring trios. Four different datasets were simulated: G1 (100 trios with 5 K SNPs), G2 (100 trios with 10 K SNPs), G3 (500 trios with 5 K SNPs), and G4 (500 trios with 10 K SNPs). In all four datasets, all parents were genotyped completely, and offspring were genotyped with a lower-density panel. Results: Comparison of the three imputation methods showed that LB outperformed AB and TB in imputation accuracy. Computation times differed between methods; AB was the fastest algorithm. Higher SNP densities increased the accuracy of imputation. Larger numbers of trios (i.e. 500) improved the performance of LB and TB. Conclusions: The conclusion is that the three methods do well in terms of imputation accuracy, and the dense chip is recommended for imputation of parent-offspring trios.

  7. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

    Pathway-based information has become an important source of information for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships and to extrapolate species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment method. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing glycolysis and citrate cycle pathways conservation. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism as well as a more complete study involving 36 pathways common in all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway specific cross-species similarities and differences based on NCBI taxonomy.

  8. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.

    2013-01-01

    Pathway-based information has become an important source of information for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships and to extrapolate species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment method. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing glycolysis and citrate cycle pathways conservation. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism as well as a more complete study involving 36 pathways common in all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway specific cross-species similarities and differences based on NCBI taxonomy

  9. Methacholine challenge test: Comparison of tidal breathing and dosimeter methods in children.

    Science.gov (United States)

    Mazi, Ahlam; Lands, Larry C; Zielinski, David

    2018-02-01

    Methacholine Challenge Test (MCT) is used to confirm, assess the severity of, and/or rule out asthma. Two MCT methods are described as equivalent by the American Thoracic Society (ATS), the tidal breathing and the dosimeter methods. However, the majority of adult studies suggest that individuals with asthma do not react at the same PC20 between the two methods. Additionally, the nebulizers used are no longer available and studies suggest current nebulizers are not equivalent to these. Our study investigates the difference in positive MCT tests between three methods in a pediatric population. A retrospective chart review of all MCT performed with spirometry at the Montreal Children's Hospital from January 2006 to March 2016. A comparison of the percentage of positive MCT tests with three methods, tidal breathing, APS dosimeter, and dose-adjusted DA-dosimeter, was performed at different cutoff points up to 8 mg/mL. A total of 747 subjects performed the tidal breathing method, 920 subjects the APS dosimeter method, and 200 subjects the DA-dosimeter method. At a PC20 cutoff ≤4 mg/mL, the percentage of positive MCT was significantly higher using the tidal breathing method (76.3%) compared to the APS dosimeter (45.1%) and DA-dosimeter (65%) methods (P < 0.0001). The choice of nebulizer and technique significantly impacts the rate of positivity when using MCT to diagnose and assess asthma. Lack of direct comparison of techniques within the same individuals and clinical assessment should be addressed in future studies to standardize MCT methodology in children. © 2017 Wiley Periodicals, Inc.
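For context on the PC20 endpoint discussed above, the conventional log-linear interpolation between the last two methacholine concentrations can be sketched as below. Consult the ATS guideline for the authoritative formula and its preconditions; the example concentrations here are invented.

```python
import math

# Hedged sketch of the standard log-linear interpolation used to report
# PC20, the provocative concentration causing a 20% fall in FEV1.
# Assumes the % fall brackets 20 (r1 < 20 <= r2).

def pc20(c1, c2, r1, r2):
    """Interpolate PC20 between the last two methacholine doses.

    c1, c2: penultimate and final concentrations (mg/mL)
    r1, r2: % fall in FEV1 after c1 and c2 (r1 < 20 <= r2)
    """
    log_pc20 = (math.log10(c1)
                + (math.log10(c2) - math.log10(c1)) * (20 - r1) / (r2 - r1))
    return 10 ** log_pc20

# Example: 14% fall at 2 mg/mL, then 26% fall at 4 mg/mL
print(round(pc20(2.0, 4.0, 14.0, 26.0), 2))
```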

  10. Steroid hormones in environmental matrices: extraction method comparison.

    Science.gov (United States)

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE broadens the choice of methods for environmental sample analysis.
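The figures of merit used in the record above, recovery efficiency and relative standard deviation, are simple to compute; the replicate values below are invented for illustration.

```python
import statistics

# Illustrative sketch (values are invented): recovery efficiency and
# % RSD, the two figures used to compare the extraction methods above.

def recovery_pct(measured, spiked):
    """Recovery efficiency of a spiked sample, in percent."""
    return 100.0 * measured / spiked

def rsd_pct(values):
    """Relative standard deviation (sample SD / mean), in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

replicates = [92.1, 95.4, 90.8]   # % recoveries of one hormone (hypothetical)
mean_recovery = statistics.mean(replicates)
precision = rsd_pct(replicates)
```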

  11. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  12. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  13. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
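As a rough illustration of why this estimation problem is hard when the dimension approaches the sample size, the sketch below applies a generic linear shrinkage toward a scaled identity (one simple estimator, not necessarily among the eight the paper compares) and evaluates the log-determinant, which is numerically preferable to the raw determinant.

```python
import numpy as np

# Sketch: the sample covariance is singular when p > n, so its
# determinant is zero and uninformative. Shrinking toward a scaled
# identity restores positive definiteness; we then report log det,
# which avoids the under/overflow of det() in high dimensions.

def shrinkage_log_det(X, alpha=0.1):
    """log det of (1-alpha)*S + alpha*(tr(S)/p)*I for data X (n x p)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)                  # sample covariance
    target = (np.trace(S) / p) * np.eye(p)       # scaled identity target
    sigma = (1 - alpha) * S + alpha * target
    sign, logdet = np.linalg.slogdet(sigma)
    assert sign > 0, "shrunk estimate should be positive definite"
    return logdet

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 80))                # p > n: S itself is singular
ld = shrinkage_log_det(X)
```

The shrinkage weight `alpha` is a free tuning parameter here; the methods compared in the paper differ mainly in how such regularisation is chosen.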

  14. INTER LABORATORY COMBAT HELMET BLUNT IMPACT TEST METHOD COMPARISON

    Science.gov (United States)

    2018-03-26

    Kayhart, Tony J.; Hewitt, Charles A.; Cyganik, Jonathan. Final report, March 2018. Impact data were processed per Instrumentation for Impact Test, SAE standard J211-1 [4]. Although the entire curve is collected, the interest of this project team solely...

  15. Comparison of hardenability calculation methods of the heat-treatable constructional steels

    Energy Technology Data Exchange (ETDEWEB)

    Dobrzanski, L.A.; Sitek, W. [Division of Tool Materials and Computer Techniques in Metal Science, Silesian Technical University, Gliwice (Poland)

    1995-12-31

    The consistency of calculated hardenability curves of selected heat-treatable alloyed constructional steels with experimental data has been evaluated. The study was based on an analysis of the present state of knowledge on hardenability calculation employing neural network methods. Several calculation examples and a comparison of the consistency of the calculation methods employed are included. (author). 35 refs, 2 figs, 3 tabs.

  16. Structured Feedback Training for Time-Out: Efficacy and Efficiency in Comparison to a Didactic Method.

    Science.gov (United States)

    Jensen, Scott A; Blumberg, Sean; Browning, Megan

    2017-09-01

    Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. Findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.

  17. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  18. Comparison of genetic algorithms with conjugate gradient methods

    Science.gov (United States)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
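A minimal genetic algorithm of the kind the record describes can be sketched as follows. This is a toy illustration (truncation selection, mean crossover, Gaussian mutation) on the multimodal Rastrigin function, not the 1972 study's code.

```python
import math
import random

# Toy GA minimising the Rastrigin function, a standard multimodal
# landscape of the kind on which the record reports gradient methods
# struggle. Elitist truncation selection keeps the best half unchanged.

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ga_minimise(f, dim=2, pop=40, gens=120, sigma=0.3, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=f)                     # rank by fitness
        parents = population[: pop // 2]           # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 + rng.gauss(0, sigma)  # crossover + mutation
                     for ai, bi in zip(a, b)]
            children.append(child)
        population = parents + children
    return min(population, key=f)

best = ga_minimise(rastrigin)
```

A conjugate gradient method started from a random point would typically stop in one of Rastrigin's many local minima; the population-based search is far less sensitive to that.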

  19. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  20. Validation and comparison of two sampling methods to assess dermal exposure to drilling fluids and crude oil.

    Science.gov (United States)

    Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez

    2014-06-01

    Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  1. Description of comparison method for evaluating spatial or temporal homogeneity of environmental monitoring data

    International Nuclear Information System (INIS)

    Mecozzi, M.; Cicero, A.M.

    1995-01-01

    In this paper a comparison method to verify the homogeneity/inhomogeneity of environmental monitoring data is described. The comparison method is based on the simultaneous application of three statistical tests: One-Way ANOVA, Kruskal-Wallis and One-Way IANOVA. Robust tests such as IANOVA and Kruskal-Wallis can be more efficient than the usual ANOVA methods because they are resistant to outliers and to departures from the normal distribution of the data. The study shows that a conclusion about the presence or absence of homogeneity in the data set is validated when it is confirmed by at least two of the tests

  2. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  3. Comparison study on cell calculation method of fast reactor

    International Nuclear Information System (INIS)

    Chiba, Gou

    2002-10-01

    Effective cross sections obtained by cell calculations are used in core calculations in current deterministic methods. It is therefore important to calculate the effective cross sections accurately, and several methods have been proposed. In this study, some of these methods are compared to each other using a continuous energy Monte Carlo method as a reference. The results show that the table look-up method used in the Japan Nuclear Cycle Development Institute (JNC) sometimes differs by over 10% in effective microscopic cross sections and is inferior to the sub-group method. The problem was overcome by introducing a new nuclear constant system developed in JNC, in which an ultra-fine energy group library is used. The system can also deal with resonance interaction effects between nuclides, which cannot be considered by the other methods. In addition, a new method was proposed to calculate effective cross sections accurately for power reactor fuel subassemblies where the new nuclear constant system cannot be applied. This method uses the sub-group method and the ultra-fine energy group collision probability method. The microscopic effective cross sections obtained by this method agree with the reference values within 5% difference. (author)

  4. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrices linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
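For readers unfamiliar with the setting, a plain (unpreconditioned) SOR iteration on a small Z-matrix system looks like this; the preconditioners analysed in the paper are not reproduced here.

```python
# Sketch: plain SOR iteration on a small Z-matrix system (positive
# diagonal, non-positive off-diagonal entries), the setting of the
# comparison theorems above.

def sor_solve(A, b, omega=1.2, tol=1e-10, max_iter=10000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        err = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            err = max(err, abs(x_new - x[i]))
            x[i] = x_new
        if err < tol:          # stop when the largest update is tiny
            break
    return x

# Diagonally dominant Z-matrix example; exact solution is [1, 1, 1]
A = [[4.0, -1.0, -1.0],
     [-1.0, 4.0, -1.0],
     [-1.0, -1.0, 4.0]]
b = [2.0, 2.0, 2.0]
x = sor_solve(A, b)
```

The relaxation factor `omega` controls the convergence rate; the cited theorems compare how preconditioning shifts that rate for Gauss-Seidel-type (`omega = 1`) versus general SOR-type sweeps.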

  5. Comparison of different methods for thoron progeny measurement

    International Nuclear Information System (INIS)

    Bi Lei; Zhu Li; Shang Bing; Cui Hongxing; Zhang Qingzhao

    2009-01-01

    Four popular methods for thoron progeny measurement were discussed, covering detector, principle, preconditions, calculation, and the advantages and disadvantages of each. Comparison experiments were made in a mine and in houses with high background in Yunnan Province. Since indoor thoron progeny concentrations vary markedly and irregularly with time, the α-track method is recommended for environmental detection and assessment in the area of radiation protection. (authors)

  6. Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources

    Directory of Open Access Journals (Sweden)

    Alireza Borhani Dariane

    2009-01-01

    Water resources optimization problems are usually complex and hard to solve using ordinary optimization methods, or at least not economically efficient to solve that way. A great number of studies have been conducted in quest of suitable methods capable of handling such problems. In recent years, some new heuristic methods such as genetic and ant algorithms have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of such heuristic methods as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) has been studied for optimizing reservoir operation. The Dez Dam reservoir in Iran was chosen for a case study. The methods were applied and compared using short-term (one year) and long-term models. Comparison of the results showed that GA outperforms both DP and ACO in finding true global optimum solutions and operating rules.

  7. Assessment and comparison of methods for solar ultraviolet radiation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, K

    1995-06-01

    In the study, the different methods to measure the solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison of the different methods is based on a literature review and assessments of optical characteristics of the spectroradiometer Optronic 742 of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements, to methods for radiometric characterization of UV radiometers together with methods for error reduction are presented. Reviews on experiences from world-wide UV monitoring efforts and instrumentation as well as on the results from international UV radiometer intercomparisons are also presented. (62 refs.).

  8. Assessment and comparison of methods for solar ultraviolet radiation measurements

    International Nuclear Information System (INIS)

    Leszczynski, K.

    1995-06-01

    In the study, the different methods to measure the solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison of the different methods is based on a literature review and assessments of optical characteristics of the spectroradiometer Optronic 742 of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements, to methods for radiometric characterization of UV radiometers together with methods for error reduction are presented. Reviews on experiences from world-wide UV monitoring efforts and instrumentation as well as on the results from international UV radiometer intercomparisons are also presented. (62 refs.)

  9. Comparison study on qualitative and quantitative risk assessment methods for urban natural gas pipeline network.

    Science.gov (United States)

    Han, Z Y; Weng, W G

    2011-05-15

    In this paper, a qualitative and a quantitative risk assessment method for urban natural gas pipeline networks are proposed. The qualitative method is comprised of an index system, which includes a causation index, an inherent risk index, a consequence index and their corresponding weights. The quantitative method consists of a probability assessment, a consequence analysis and a risk evaluation. The outcome of the qualitative method is a qualitative risk value, and for the quantitative method the outcomes are individual risk and social risk. In comparison with previous research, the qualitative method proposed in this paper is particularly suitable for urban natural gas pipeline networks, and the quantitative method takes different accident consequences into consideration, such as toxic gas diffusion, jet flame, fireball combustion and UVCE. Two sample urban natural gas pipeline networks are used to demonstrate the two methods. Both methods can be applied in practice; the choice between them depends on the available basic data for the gas pipelines and the precision requirements of the risk assessment. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.

  10. Comparison of infusion pumps calibration methods

    Science.gov (United States)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pumps are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of this flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The results obtained were directly related to the calibration method used and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
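The gravimetric principle named above reduces to converting a timed mass collection into a volumetric flow. A minimal sketch, ignoring the evaporation and buoyancy corrections a real calibration applies, with invented numbers:

```python
# Hedged sketch of the gravimetric method: mean flow rate is the
# collected mass, converted to volume through the liquid density,
# divided by the collection time. Real calibrations add corrections
# (evaporation, buoyancy) that are omitted here.

def gravimetric_flow_ml_per_h(mass_g, density_g_per_ml, seconds):
    """Mean delivered flow in mL/h from a timed mass collection."""
    volume_ml = mass_g / density_g_per_ml
    return volume_ml * 3600.0 / seconds

def flow_error_pct(measured, nominal):
    """Flow error relative to the pump's set point, in percent."""
    return 100.0 * (measured - nominal) / nominal

# Example: 4.982 g of water (0.998 g/mL) collected over 1 h
# with the pump set to 5 mL/h (invented figures)
q = gravimetric_flow_ml_per_h(4.982, 0.998, 3600.0)
err = flow_error_pct(q, 5.0)
```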

  11. Comparison of estimation methods for fitting weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
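The maximum likelihood fit that the record found most accurate can be sketched for the 2-parameter Weibull as follows; the shape equation and bisection solver are standard, while the diameter data are invented.

```python
import math

# Sketch of 2-parameter Weibull maximum likelihood estimation: the
# shape k solves  sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0
# (a strictly increasing function of k, so bisection works), after
# which the scale is lam = (mean(x^k))**(1/k).

def weibull_mle(xs, lo=0.05, hi=20.0, iters=200):
    n = len(xs)
    mean_ln = sum(math.log(x) for x in xs) / n

    def g(k):
        sk = sum(x ** k for x in xs)
        skl = sum(x ** k * math.log(x) for x in xs)
        return skl / sk - 1.0 / k - mean_ln

    for _ in range(iters):            # bisection on the shape equation
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in xs) / n) ** (1.0 / k)
    return k, lam

# Invented tree-diameter sample (cm), standing in for stand data
diameters_cm = [12.4, 15.1, 9.8, 22.3, 17.6, 13.9, 19.2, 11.5, 16.8, 14.7]
shape, scale = weibull_mle(diameters_cm)
```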

  12. An interlaboratory comparison of methods for measuring rock matrix porosity

    International Nuclear Information System (INIS)

    Rasilainen, K.; Hellmuth, K.H.; Kivekaes, L.; Ruskeeniemi, T.; Melamed, A.; Siitari-Kauppi, M.

    1996-09-01

    An interlaboratory comparison study was conducted for the available Finnish methods of rock matrix porosity measurements. The aim was first to compare different experimental methods for future applications, and second to obtain quality assured data for the needs of matrix diffusion modelling. Three different versions of water immersion techniques, a tracer elution method, a helium gas through-diffusion method, and a C-14-PMMA method were tested. All methods selected for this study were established experimental tools in the respective laboratories, and they had already been individually tested. Rock samples for the study were obtained from a homogeneous granitic drill core section from the natural analogue site at Palmottu. The drill core section was cut into slabs that were expected to be practically identical. The subsamples were then circulated between the different laboratories using a round robin approach. The circulation was possible because all methods were non-destructive, except the C-14-PMMA method, which was always the last method to be applied. The possible effect of drying temperature on the measured porosity was also preliminarily tested. These measurements were done in the order of increasing drying temperature. Based on the study, it can be concluded that all methods are comparable in their accuracy. The selection of methods for future applications can therefore be based on practical considerations. Drying temperature seemed to have very little effect on the measured porosity, but a more detailed study is needed for definite conclusions. (author) (4 refs.)
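One of the water immersion techniques mentioned above typically reduces to the Archimedes relation between dry, saturated, and submerged weights; a minimal sketch with invented masses follows.

```python
# Sketch of the classic water-immersion (Archimedes) calculation:
# effective porosity from dry, saturated, and submerged weights of a
# sample. The water density cancels between pore and bulk volume.
# Masses below are invented, in the low-porosity range typical of granite.

def immersion_porosity_pct(m_dry, m_sat, m_sub):
    """Effective porosity (%) = pore volume / bulk volume.

    Pore volume ~ (m_sat - m_dry) / rho_water
    Bulk volume ~ (m_sat - m_sub) / rho_water
    """
    return 100.0 * (m_sat - m_dry) / (m_sat - m_sub)

porosity = immersion_porosity_pct(m_dry=254.30, m_sat=254.85, m_sub=158.90)
```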

  13. Comparisons of ratchetting analysis methods using RCC-M, RCC-MR and ASME codes

    International Nuclear Information System (INIS)

    Yang Yu; Cabrillat, M.T.

    2005-01-01

    The present paper compares the simplified ratcheting analysis methods used in RCC-M, RCC-MR and ASME with some examples. Firstly, the methods in RCC-M and the efficiency diagram in RCC-MR are compared. A special approach is used to describe these two methods with curves in one coordinate system, and their differing degrees of conservatism are demonstrated. The RCC-M method can also be interpreted through SR (second ratio) and v (efficiency index), which are used in RCC-MR. Hence, we can easily compare the two methods by defining SR as the abscissa and v as the ordinate and plotting the two curves together. Secondly, the efficiency curve in RCC-MR and the methods in ASME-NH Appendix T are compared for cases with significant creep. Finally, two practical evaluations are performed to illustrate the comparisons of the aforementioned methods. (authors)

  14. A comparison of interface tracking methods

    International Nuclear Information System (INIS)

    Kothe, D.B.; Rider, W.J.

    1995-01-01

    In this paper we provide a direct comparison of several important algorithms designed to track fluid interfaces. In the process we propose improved criteria by which these methods are to be judged. We compare and contrast the behavior of the following interface tracking methods: high order monotone capturing schemes, level set methods, volume-of-fluid (VOF) methods, and particle-based (particle-in-cell, or PIC) methods. We compare these methods by first applying a set of standard test problems, then by applying a new set of enhanced problems designed to expose the limitations and weaknesses of each method. We find that the properties of these methods are not adequately assessed until they are tested with flows having spatial and temporal vorticity gradients. Our results indicate that the particle-based methods are easily the most accurate of those tested. Their practical use, however, is often hampered by their memory and CPU requirements. Particle-based methods employing particles only along interfaces also have difficulty dealing with gross topology changes. Full PIC methods, on the other hand, do not in general have topology restrictions. Following the particle-based methods are VOF volume tracking methods, which are reasonably accurate, physically based, robust, low in cost, and relatively easy to implement. Recent enhancements to the VOF methods using multidimensional interface reconstruction and improved advection provide excellent results on a wide range of test problems
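As a minimal illustration of the interface smearing these methods are designed to avoid, first-order upwind advection of a sharp volume-fraction step conserves volume exactly but diffuses the front; the grid and step counts below are arbitrary.

```python
# Minimal illustration of why interface tracking is hard: first-order
# upwind advection of a sharp volume-fraction step on a periodic 1D
# grid conserves total volume but smears the interface, the weakness
# the capturing, level-set, VOF, and PIC methods above attack in
# different ways.

def upwind_step(f, c):
    """One explicit upwind update with CFL number c (0 < c <= 1)."""
    n = len(f)
    return [f[i] - c * (f[i] - f[i - 1]) for i in range(n)]  # f[-1] wraps around

f = [1.0 if 20 <= i < 40 else 0.0 for i in range(100)]   # sharp interface
total0 = sum(f)
for _ in range(60):
    f = upwind_step(f, 0.5)
smeared = sum(1 for v in f if 0.01 < v < 0.99)           # cells no longer sharp
```

After 60 steps the step has travelled downstream but the two fronts span many cells; a VOF reconstruction or particle representation would keep them sharp.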

  15. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
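Of the regression models the site offers, Deming regression with an assumed error-variance ratio of 1 has a compact closed form; the sketch below uses invented paired measurements, and the weighted and Passing-Bablok variants are not reproduced.

```python
import statistics

# Sketch of Deming regression for method comparison: unlike ordinary
# least squares, it allows measurement error in both methods, with
# lam = ratio of their error variances (assumed 1 here).

def deming(x, y, lam=1.0):
    """Return (slope, intercept) of a Deming fit of y on x (needs sxy != 0)."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((v - mx) ** 2 for v in x) / (n - 1)
    syy = sum((v - my) ** 2 for v in y) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    d = syy - lam * sxx
    slope = (d + (d * d + 4 * lam * sxy * sxy) ** 0.5) / (2 * sxy)
    return slope, my - slope * mx

# Paired measurements of the same specimens by two methods (invented)
method_a = [1.0, 2.1, 3.0, 4.2, 5.1, 6.0]
method_b = [1.1, 2.0, 3.2, 4.1, 5.3, 6.2]
slope, intercept = deming(method_a, method_b)
```

A slope near 1 and intercept near 0 indicate no proportional or constant bias between the two methods.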

  16. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    International Nuclear Information System (INIS)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I.; Rota Kops, Elena; Shah, N. Jon; Ribeiro, Andre; Yakushev, Igor

    2016-01-01

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-map) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the different performance obtained by some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-Fluorodeoxyglucose-PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision in diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20 to 10 % were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5 %. The precision obtained at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9 and 79.5 % for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3 % on average for the four new methods, which exhibited similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are inferior

  17. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    Energy Technology Data Exchange (ETDEWEB)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I. [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Rota Kops, Elena; Shah, N. Jon [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Ribeiro, Andre [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Institute of Biophysics and Biomedical Engineering, Lisbon (Portugal); Yakushev, Igor [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Institute TUM Neuroimaging Center (TUM-NIC), Munich (Germany)

    2016-11-15

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-maps) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the performance obtained by some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-fluorodeoxyglucose PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision of diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20 % to 10 % were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5 %. The precision at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9 % and 79.5 % for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3 % on average for the four new methods, which exhibited similar performance. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are inferior.
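The regional quantitative errors quoted above are relative differences between PET reconstructed with a given μ-map and a reference reconstruction; a minimal sketch of that metric (region names and uptake values are hypothetical, not taken from the study):

```python
# Relative regional error (%) between PET reconstructed with an MRI-based
# attenuation map and a reference reconstruction.
# Region names and uptake values below are hypothetical.

def regional_error_percent(test_uptake, reference_uptake):
    """Return 100 * (test - reference) / reference for each region."""
    return {region: 100.0 * (test_uptake[region] - reference_uptake[region])
                    / reference_uptake[region]
            for region in reference_uptake}

reference   = {"frontal": 10.0, "parietal": 8.0, "cerebellum": 12.0}
dixon_based = {"frontal": 8.5,  "parietal": 7.4, "cerebellum": 10.8}

errors = regional_error_percent(dixon_based, reference)
print(errors)  # frontal ~ -15 %, parietal ~ -7.5 %, cerebellum ~ -10 %
```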

  18. Comparison methods between methane and hydrogen combustion for useful transfer in furnaces

    International Nuclear Information System (INIS)

    Ghiea, V.V.

    2009-01-01

    The advantages and disadvantages of using hydrogen in industrial combustion are critically presented. The greenhouse effect due to natural water vapour in the atmosphere and to that produced by industrial hydrogen combustion is critically analyzed, together with problems of gaseous fuels containing hydrogen as the largest relative component. A method is devised for comparing methane and hydrogen combustion with respect to pressure loss in the burner feed pipe. The ratio of the radiative useful heat transfer characteristics and the convective heat transfer coefficients from combustion gases in industrial furnaces and heat recuperators is deduced for hydrogen and methane combustion, establishing specific comparison methods. Using criterial equations adapted for determining convective heat transfer, a generalized calculation formula is established. The proposed comparison methods are generally valid for different gaseous fuels. (author)

  19. Comparison of urine analysis using manual and sedimentation methods.

    Science.gov (United States)

    Kurup, R; Leich, M

    2012-06-01

    Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine. However, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To support a high standard of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study was to determine whether the centrifuged method is more clinically significant than the uncentrifuged method. In this study, a comparison between the results obtained from the centrifuged and uncentrifuged methods was performed. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results obtained from both methods were recorded in a log book. These results were then entered into a database created in Microsoft Excel and analysed for differences and similarities using this application. Analysis was further done in SPSS software to compare the results using Pearson's correlation. When compared using Pearson's correlation coefficient analysis, both methods showed a good correlation between urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods. However, the uncentrifuged method provides a rapid turnaround time.
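The Pearson correlation used above can be computed from first principles; a minimal sketch with made-up sediment counts (note that correlation measures linear association rather than agreement: a constant offset between the two methods still yields r close to 1):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-sample cell counts from the two workflows; the centrifuged
# counts are systematically one higher (a slightly higher identification rate).
uncentrifuged = [2, 5, 1, 8, 3, 0, 6, 4]
centrifuged   = [3, 6, 2, 9, 4, 1, 7, 5]

print(pearson_r(uncentrifuged, centrifuged))  # r ~ 1 despite the offset
```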

  20. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    Science.gov (United States)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using the experiment planning technique with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method, to be validated, were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. None of the F-tests rejected the null hypothesis (H0); as a result, there is no significant difference between the methods compared. Therefore, according to this study, it is concluded that the EDXRF method satisfied this method-comparison requirement.
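A two-way ANOVA without replication, the classic layout for this kind of method-by-element comparison, can be sketched as follows; the element concentrations are illustrative, not values from the study:

```python
# Two-way ANOVA without replication for a method-comparison design:
# rows = methods (e.g. EDXRF vs. the reference ASTM method),
# columns = elements. Values below are hypothetical mass fractions (%).

def two_way_anova(table):
    r, c = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (r * c)
    row_means = [sum(row) / c for row in table]
    col_means = [sum(table[i][j] for i in range(r)) / r for j in range(c)]
    ss_rows = c * sum((m - grand) ** 2 for m in row_means)
    ss_cols = r * sum((m - grand) ** 2 for m in col_means)
    ss_err = sum((table[i][j] - row_means[i] - col_means[j] + grand) ** 2
                 for i in range(r) for j in range(c))
    df_rows, df_cols = r - 1, c - 1
    df_err = df_rows * df_cols
    f_rows = (ss_rows / df_rows) / (ss_err / df_err)   # F for "method"
    f_cols = (ss_cols / df_cols) / (ss_err / df_err)   # F for "element"
    return f_rows, f_cols

edxrf = [0.95, 0.31, 0.12, 1.80, 1.40, 17.1, 0.05]
astm  = [0.97, 0.30, 0.13, 1.78, 1.42, 17.0, 0.06]
f_method, f_element = two_way_anova([edxrf, astm])
print(f_method, f_element)  # tiny F for "method", huge F for "element"
```

A small F for the "method" factor (compared against the F distribution's critical value) is what "H0 not rejected" means: the between-method differences are indistinguishable from residual noise.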

  1. An algebraic topological method for multimodal brain networks comparison

    Directory of Open Access Journals (Sweden)

    Tiago eSimas

    2015-07-01

    Full Text Available Understanding brain connectivity is one of the most important issues in neuroscience. Nonetheless, connectivity data can reflect either functional relationships between brain activities or anatomical connections between brain areas. Although the two representations should be related, the relationship is not straightforward. We have devised a powerful method that allows different operations between networks that share the same set of nodes, by embedding them in a common metric space and enforcing transitivity on the graph topology. Here, we apply this method to construct an aggregated network from a set of functional graphs, each one from a different subject. Once this aggregated functional network is constructed, we again use our method to compare it with the structural connectivity, to identify particular brain regions that differ in the two modalities (anatomical and functional). Remarkably, these brain regions include functional areas that form part of the classical resting-state networks. We conclude that our method, based on the comparison of the aggregated functional network, reveals some emerging features that could not be observed when the comparison is performed with the classical averaged functional network.
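The transitivity enforcement described above can be illustrated with a metric (shortest-path) closure of a weighted proximity graph; this sketch uses the distance transform d = 1/p - 1, which is one common choice and an assumption here, not necessarily the authors' exact construction:

```python
# Metric closure of a weighted proximity graph: convert proximities in (0, 1]
# to distances, then run all-pairs shortest paths (Floyd-Warshall). The result
# is a transitive (triangle-inequality-respecting) distance matrix.
INF = float("inf")

def metric_closure(prox):
    n = len(prox)
    # d = 1/p - 1 maps proximity 1 -> distance 0, proximity -> 0 -> distance inf
    d = [[0.0 if i == j else (1.0 / prox[i][j] - 1.0 if prox[i][j] > 0 else INF)
          for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Three nodes: strong a-b and b-c links, no direct a-c link.
prox = [[1.0, 0.5, 0.0],
        [0.5, 1.0, 0.5],
        [0.0, 0.5, 1.0]]
closed = metric_closure(prox)
print(closed[0][2])  # a-c distance inferred through b: 1.0 + 1.0 = 2.0
```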

  2. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of the method for linearizing exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power-series expansion.
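Linearization of an exponential calibration curve y = a·exp(bx) is typically done by a log transform followed by ordinary least squares; a minimal sketch on synthetic, noise-free data (the record does not give the author's exact procedure):

```python
import math

# Linearize y = a * exp(b * x) by taking logarithms, then fit
# ln(y) = ln(a) + b*x with ordinary least squares.
def fit_exponential(xs, ys):
    n = len(xs)
    ly = [math.log(y) for y in ys]
    mx, my = sum(xs) / n, sum(ly) / n
    b = sum((x - mx) * (l - my) for x, l in zip(xs, ly)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic calibration points generated from y = 2 * exp(0.5 * x)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a, b = fit_exponential(xs, ys)
print(a, b)  # recovers a ~ 2.0, b ~ 0.5 on noise-free data
```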

  3. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    Science.gov (United States)

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes, namely those usable with little new parameterization and robust across different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to that of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that produces a dotted pattern on the cell surface. Copyright © 2011 International Society for Advancement of Cytometry.

  4. Investigation of Industrialised Building System Performance in Comparison to Conventional Construction Method

    Directory of Open Access Journals (Sweden)

    Othuman Mydin M.A.

    2014-03-01

    Full Text Available Conventional construction methods are still widely practised, although many studies have indicated that they are less effective than the IBS construction method. The advent of the IBS has expanded the range of techniques within the construction industry. This study is aimed at comparing the two construction approaches. Case studies were conducted at four sites in the state of Penang, Malaysia. Two projects were IBS-based, while the remaining two deployed the conventional method of construction. Based on an analysis of the results, it can be concluded that the IBS approach has more to offer than the conventional method. Among these advantages are shorter construction periods, reduced overall costs, lower labour requirements, better site conditions and the production of higher-quality components.

  5. Matrix diffusion studies by electrical conductivity methods. Comparison between laboratory and in-situ measurements

    International Nuclear Information System (INIS)

    Ohlsson, Y.; Neretnieks, I.

    1998-01-01

    Traditional laboratory diffusion experiments in rock material are time-consuming, and quite small samples are generally used. Electrical conductivity measurements, on the other hand, provide a fast means of examining transport properties in rock and allow measurements on larger samples as well. Laboratory measurements using electrical conductivity give results that compare well with those from traditional diffusion experiments. The measurement of electrical resistivity in the rock surrounding a borehole is a standard method for the detection of water-conducting fractures. If these data could be correlated to matrix diffusion properties, in-situ diffusion data from large areas could be obtained. This would be valuable because it would make it possible to obtain data very early in future investigations of potentially suitable sites for a repository. This study compares laboratory electrical conductivity measurements with in-situ resistivity measurements from a borehole at Aespoe. The laboratory samples consist mainly of Aespoe diorite and fine-grained granite; the rock surrounding the borehole consists of Aespoe diorite, Smaaland granite and fine-grained granite. The comparison shows good agreement between laboratory measurements and in-situ data
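The usual bridge between resistivity and matrix diffusion is the formation factor F = ρ_rock/ρ_water, with the effective diffusivity estimated as De = Dw/F; a sketch with illustrative values only (the transform and the numbers are assumptions, not data from this study):

```python
# Formation factor from resistivities, and effective diffusivity from it:
#   F  = rho_rock / rho_water
#   De = Dw / F
# All numeric values below are illustrative placeholders.

def effective_diffusivity(rho_rock, rho_water, dw):
    formation_factor = rho_rock / rho_water
    return dw / formation_factor

DW_TRACER = 2.0e-9   # free-water diffusivity, m^2/s (typical ionic tracer)
rho_water = 1.0      # pore-water resistivity, ohm*m (assumed)
rho_rock = 1.0e4     # bulk rock resistivity, ohm*m (assumed)

de = effective_diffusivity(rho_rock, rho_water, DW_TRACER)
print(de)  # ~2e-13 m^2/s for a formation factor of 1e4
```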

  6. Comparison between PET template-based method and MRI-based method for cortical quantification of florbetapir (AV-45) uptake in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Saint-Aubert, L.; Nemmi, F.; Peran, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Barbeau, E.J. [Universite de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France, CNRS, CerCo, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Payoux, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Medecine Nucleaire, Pole Imagerie, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Chollet, F.; Pariente, J. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France)

    2014-05-15

    Florbetapir (AV-45) has been shown to be a reliable tool for assessing in vivo amyloid load in patients with Alzheimer's disease from the early stages. However, nonspecific white matter binding has been reported in healthy subjects as well as in patients with Alzheimer's disease. To avoid this issue, cortical quantification might increase the reliability of AV-45 PET analyses. In this study, we compared two quantification methods for AV-45 binding: a classical method relying on PET template registration (route 1), and an MRI-based method (route 2) for cortical quantification. We recruited 22 patients at the prodromal stage of Alzheimer's disease and 17 matched controls. AV-45 binding was assessed using both methods, and target-to-cerebellum mean global standard uptake values (SUVr) were obtained for each of them, together with SUVr in specific regions of interest. Quantification using the two routes was compared between the clinical groups (intragroup comparison), and between groups for each route (intergroup comparison). Discriminant analysis was performed. In the intragroup comparison, differences in uptake values were observed between route 1 and route 2 in both groups. In the intergroup comparison, AV-45 uptake was higher in patients than controls in all regions of interest using both methods, but the effect size of this difference was larger using route 2. In the discriminant analysis, route 2 showed a higher specificity (94.1 % versus 70.6 %), despite a lower sensitivity (77.3 % versus 86.4 %), and D-prime values were higher for route 2. These findings suggest that, although both quantification methods enabled patients at early stages of Alzheimer's disease to be well discriminated from controls, PET template-based quantification seems adequate for clinical use, while the MRI-based cortical quantification method led to greater intergroup differences and may be more suitable for use in current clinical research. (orig.)
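The target-to-cerebellum SUVr used by both routes is simply a ratio of regional means; a minimal sketch with hypothetical uptake values:

```python
# Target-to-cerebellum SUVr: mean uptake in a cortical region of interest
# divided by mean uptake in the cerebellar reference region.
# The voxel values below are hypothetical.

def suvr(roi_values, cerebellum_values):
    roi_mean = sum(roi_values) / len(roi_values)
    ref_mean = sum(cerebellum_values) / len(cerebellum_values)
    return roi_mean / ref_mean

print(suvr([1.8, 2.0, 2.2], [1.0, 1.0, 1.0]))  # ~2.0
```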

  7. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data

    DEFF Research Database (Denmark)

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-01-01

    ). There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold...... LCA and SNOB LCA). METHODS: The performance of these three methods was compared: (i) quantitatively using the number of subgroups detected, the classification probability of individuals into subgroups, the reproducibility of results, and (ii) qualitatively using subjective judgments about each program...... classify individuals into those subgroups. CONCLUSIONS: Our subjective judgement was that Latent Gold offered the best balance of sensitivity to subgroups, ease of use and presentation of results with these datasets but we recognise that different clustering methods may suit other types of data...

  8. A Simplified Version of the Fuzzy Decision Method and its Comparison with the Paraconsistent Decision Method

    Science.gov (United States)

    de Carvalho, Fábio Romeu; Abe, Jair Minoro

    2010-11-01

    Two recent non-classical logics have been used for decision making: fuzzy logic and paraconsistent annotated evidential logic Et. In this paper we present a simplified version of the fuzzy decision method and its comparison with the paraconsistent one. Paraconsistent annotated evidential logic Et, introduced by Da Costa, Vago and Subrahmanian (1991), can handle uncertain and contradictory data without becoming trivial. It has been used in many applications, such as information technology, robotics, artificial intelligence, production engineering and decision making. Intuitively, an Et formula is of the type p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian unit square (CUS). This set, carrying an order relation similar to that of the real numbers, constitutes a lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It seeks to systematize the study of knowledge, aiming in particular at fuzzy knowledge (where the meaning itself is unclear) as distinguished from merely imprecise knowledge (where the meaning is known but the exact value is not). This logic is similar to the paraconsistent annotated one, since it attributes a single numeric value (one, not two) to each proposition (so it may be called a one-valued logic). This number expresses the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X, characterized by a function f(x). For each element x∈X, y = f(x)∈[0, 1]. The number y is called the degree of membership of x in A. Decision-making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study for a simplified
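The two annotation schemes contrasted in the abstract can be sketched side by side. The piecewise-linear membership shape and the certainty/contradiction degrees (a - b and a + b - 1, a standard reading of Et annotations) are stated here as general background, not taken from this particular paper:

```python
# Fuzzy logic attaches a single membership value in [0, 1] to a proposition;
# paraconsistent annotated logic Et attaches a pair (a, b) of favorable and
# contrary evidence. From (a, b) one commonly derives a certainty degree
# a - b and a contradiction degree a + b - 1.

def fuzzy_membership(value, low, high):
    """Piecewise-linear membership function in [0, 1] (illustrative shape)."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def et_degrees(a, b):
    """Certainty and contradiction degrees for an Et annotation (a, b)."""
    return a - b, a + b - 1.0

print(fuzzy_membership(7.0, 5.0, 10.0))  # 0.4
print(et_degrees(0.8, 0.3))              # ~ (0.5, 0.1)
```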

  9. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, N.; Bockting, S.; Hiemstra, D.

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  10. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

    Full Text Available IGARSS 2015, Milan, Italy, 26-31 July 2015. B.P. Salmon, W. Kleynhans, C.P. Schwegmann and J.C. Olivier; School of Engineering and ICT, University of Tasmania, Australia ...
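Only front-matter residue of this record survives, but comparing classifiers via their confusion matrices is typically done with overall accuracy and a chance-corrected agreement statistic such as Cohen's kappa; a hedged sketch with made-up matrices (not data from the paper):

```python
# Comparing classification methods via their confusion matrices:
# overall accuracy and Cohen's kappa (chance-corrected agreement).
# Both 2x2 matrices below are made up for illustration.

def accuracy(cm):
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohens_kappa(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(n)) / total                  # observed
    pe = sum((sum(cm[i]) / total) * (sum(row[i] for row in cm) / total)
             for i in range(n))                                   # by chance
    return (po - pe) / (1.0 - pe)

method_a = [[50, 10],
            [ 5, 35]]
method_b = [[52,  8],
            [12, 28]]
print(accuracy(method_a), cohens_kappa(method_a))  # 0.85, kappa ~ 0.69
print(accuracy(method_b), cohens_kappa(method_b))  # 0.80, kappa ~ 0.58
```

Kappa corrects for agreement expected by chance, which is why it is preferred over raw accuracy when class proportions differ between methods.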

  11. A comparison of methods for teaching receptive language to toddlers with autism.

    Science.gov (United States)

    Vedora, Joseph; Grandelski, Katrina

    2015-01-01

    The use of a simple-conditional discrimination training procedure, in which stimuli are initially taught in isolation with no other comparison stimuli, is common in early intensive behavioral intervention programs. Researchers have suggested that this procedure may encourage the development of faulty stimulus control during training. The current study replicated previous work that compared the simple-conditional and the conditional-only methods to teach receptive labeling of pictures to young children with autism spectrum disorder. Both methods were effective, but the conditional-only method required fewer sessions to mastery. © Society for the Experimental Analysis of Behavior.

  12. What Is Social Comparison and How Should We Study It?

    Science.gov (United States)

    Wood, Joanne V.

    1996-01-01

    Examines frequently used measures and procedures in social comparison research. The question of whether a method truly captures social comparison requires a clear understanding of what social comparison is; hence a definition of social comparison is proposed, multiple ancillary processes in social comparison are identified, and definitional…

  13. Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach: The Methodology.

    Science.gov (United States)

    Jaciw, Andrew P

    2016-06-01

    Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. The samples comprised students in Grades K-3 in 79 schools in Tennessee and students in Grades 4-8 in 82 schools in Alabama; the outcomes were Grades K-3 Stanford Achievement Test (SAT) reading and math scores and Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate the accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for the effects of confounders or moderators may reduce bias. © The Author(s) 2016.

  14. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and the execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study recreated these two decision-making methods. Input data for this study were a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by the two methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently with lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
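The two methods compared in the study can be sketched in a few lines: WSM scores alternatives directly with given criteria weights, while AHP first derives the weights from a pairwise comparison matrix (here via the row geometric-mean approximation to the principal eigenvector); all numbers are illustrative:

```python
import math

def wsm_scores(weights, values):
    """Weighted Sum Model: score = sum of weight * criterion value."""
    return [sum(w * v for w, v in zip(weights, row)) for row in values]

def ahp_weights(pairwise):
    """AHP priority vector via the row geometric-mean approximation."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    s = sum(gm)
    return [g / s for g in gm]

# Three criteria (cost, quality, time); two alternatives, values in [0, 1].
values = [[0.8, 0.6, 0.9],   # alternative A
          [0.6, 0.9, 0.7]]   # alternative B
direct_weights = [0.5, 0.3, 0.2]
print(wsm_scores(direct_weights, values))  # A scores higher here

# AHP pairwise judgments: cost twice as important as quality, etc.
pairwise = [[1.0, 2.0, 3.0],
            [0.5, 1.0, 2.0],
            [1/3, 0.5, 1.0]]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w], wsm_scores(w, values))
```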

  15. A comparison of two instructional methods for drawing Lewis Structures

    Science.gov (United States)

    Terhune, Kari

    Two instructional methods for teaching Lewis structures were compared: the Direct Octet Rule Method (DORM) and the Commonly Accepted Method (CAM). The DORM gives the number of bonds and the number of nonbonding electrons immediately, while the CAM involves moving electron pairs from nonbonding to bonding positions, if necessary. The research question was as follows: will high school chemistry students draw more accurate Lewis structures using the DORM or the CAM? Students in Regular Chemistry 1 (N = 23), Honors Chemistry 1 (N = 51) and Chemistry 2 (N = 15) at an urban high school were the study participants. An identical pretest and posttest was given before and after instruction. Students were given instruction with either the DORM (N = 45), the treatment method, or the CAM (N = 44), the control method, for two days. After the posttest, 15 students were interviewed using a semistructured interview process. The pretest/posttest consisted of 23 numerical-response questions and 2 to 6 free-response questions that were graded using a rubric. A two-way ANOVA showed a significant interaction effect between the groups and the methods, F(1, 70) = 10.960, p = 0.001. Post hoc comparisons using the Bonferroni pairwise comparison showed that Regular Chemistry 1 students demonstrated larger gain scores when they had been taught the CAM (Mean difference = 3.275, SE = 1.324). Honors Chemistry 1 students performed better with the DORM, perhaps due to better math skills, enhanced working memory, and better metacognitive skills. Regular Chemistry 1 students performed better with the CAM, perhaps because it is more visual. Teachers may want to use the CAM or a direct-pairing method to introduce the topic and use the DORM in advanced classes when a correct structure is needed quickly.

  16. COMPARISON OF TWO METHODS FOR THE DETECTION OF LISTERIA MONOCYTOGENES

    Directory of Open Access Journals (Sweden)

    G. Tantillo

    2013-02-01

    Full Text Available The aim of this study was to compare the performance of the conventional methods for the detection of Listeria monocytogenes in food, using the media Oxford and ALOA (Agar Listeria according to Ottaviani & Agosti) in accordance with ISO 11290-1, with that of a new chromogenic medium, "CHROMagar Listeria", standardized by AFNOR in 2005 (CHR-21/1-12/01). A total of 40 pre-packed ready-to-eat food samples were examined. Using the two methods, six samples were found positive for Listeria monocytogenes, but the medium "CHROMagar Listeria" was more selective than the others. In conclusion, this study demonstrated that an isolation medium able to target the detection of L. monocytogenes specifically, such as "CHROMagar Listeria", is highly recommendable because the detection time is significantly reduced and the analysis is less expensive.

  17. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)

    2015-11-01

    Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell-average spectral element method, which is a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to establish a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve satisfactory accuracy.

  18. Comparison of sine dwell and broadband methods for modal testing

    Science.gov (United States)

    Chen, Jay-Chung

    1989-01-01

    The objectives of modal tests for large complex spacecraft structural systems are outlined. The comparison criteria for the modal test methods, namely, the broadband excitation and the sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantage or disadvantage of each method is examined. The usefulness or shortcomings of the methods are given from a practical engineering viewpoint.

  19. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with a Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p < 0.01). Intra-method and overall agreement differences were statistically significant for both the X- and Z-axes (all p < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed

  20. Study of n-γ discrimination by digital charge comparison method for a large volume liquid scintillator

    International Nuclear Information System (INIS)

    Moszynski, M.; Costa, G.J.; Guillaume, G.; Heusch, B.; Huck, A.; Ring, C.; Bizard, G.; Durand, D.; Peter, J.; Tamain, B.; El Masri, Y.; Hanappe, F.

    1992-01-01

    The study of n-γ discrimination for a large 4 l volume BC501A liquid scintillator coupled to a 130 mm diameter XP4512B photomultiplier was carried out by the digital charge comparison method. A very good n-γ discrimination down to 100 keV of recoil electron energy was achieved. The measured relative intensity of the charge integrated in the slow component of the scintillation pulse and the photoelectron yield of the tested counter allow the figure of merit of the n-γ discrimination spectra to be calculated and compared with those measured experimentally. This shows that the main limitation of the n-γ discrimination is associated with the statistical fluctuation of the photoelectron number in the slow component. The effect of distortion in the cable used to send the photomultiplier pulse to the n-γ discrimination electronics was also studied; the results suggest that the length of RG58 cable should be limited to about 40 m to preserve a high-quality n-γ discrimination. (orig.)
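
The figure of merit for n-γ discrimination spectra is conventionally defined as the separation of the gamma and neutron peaks divided by the sum of their widths (FWHM). A small sketch with hypothetical peak parameters, not values from the paper:

```python
def figure_of_merit(mu_g, fwhm_g, mu_n, fwhm_n):
    """FOM of an n-gamma discrimination spectrum: separation of the gamma
    and neutron peaks divided by the sum of their FWHMs."""
    return abs(mu_n - mu_g) / (fwhm_g + fwhm_n)

# Hypothetical peak positions/widths in the charge-ratio spectrum:
fom = figure_of_merit(mu_g=0.10, fwhm_g=0.04, mu_n=0.25, fwhm_n=0.06)
print(f"FOM = {fom:.2f}")  # FOM = 1.50
```

Statistical fluctuation of the photoelectron number in the slow component widens both peaks, which lowers the FOM — the limiting factor identified in the abstract.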

  1. Comparison of optical methods for surface roughness characterization

    International Nuclear Information System (INIS)

    Feidenhans’l, Nikolaj A; Hansen, Poul-Erik; Madsen, Morten H; Petersen, Jan C; Pilný, Lukáš; Bissacco, Giuliano; Taboryski, Rafael

    2015-01-01

    We report a study of the correlation between three optical methods for characterizing surface roughness: a laboratory scatterometer measuring the bi-directional reflection distribution function (BRDF instrument), a simple commercial scatterometer (rBRDF instrument), and a confocal optical profiler. For each instrument, the effective range of spatial surface wavelengths is determined, and this common bandwidth is used when comparing the evaluated roughness parameters. The compared roughness parameters are: the root-mean-square (RMS) profile deviation (Rq), the RMS profile slope (Rdq), and the variance of the scattering angle distribution (Aq). The twenty-two investigated samples were manufactured with several methods in order to obtain a suitable diversity of roughness patterns. Our study shows a one-to-one correlation of both the Rq and the Rdq roughness values when obtained with the BRDF and the confocal instruments, if the common bandwidth is applied. Likewise, a correlation is observed when determining the Aq value with the BRDF and the rBRDF instruments. Furthermore, we show that it is possible to determine the Rq value from the Aq value, by applying a simple transfer function derived from the instrument comparisons. The presented method is validated for surfaces with predominantly 1D roughness, i.e. consisting of parallel grooves of various periods, and a reflectance similar to stainless steel. The Rq values are predicted with an accuracy of 38% at the 95% confidence interval. (paper)
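
The compared parameters Rq and Rdq can be computed directly from a sampled profile. Below is a minimal sketch on a synthetic 1D sinusoidal groove profile; the geometry and amplitude are hypothetical, and the analytic reference values Rq = A/√2 and Rdq = (2πA/λ)/√2 follow from the sine form:

```python
import numpy as np

def rq_rdq(z, dx):
    """RMS profile deviation Rq and RMS profile slope Rdq of a sampled
    1D roughness profile with spacing dx (mean line removed first)."""
    z = z - z.mean()
    rq = np.sqrt(np.mean(z ** 2))
    rdq = np.sqrt(np.mean(np.gradient(z, dx) ** 2))
    return rq, rdq

A, lam, dx = 0.5, 10.0, 0.01          # amplitude, wavelength, sample spacing
x = np.arange(0.0, 10 * lam, dx)      # ten full periods
z = A * np.sin(2 * np.pi * x / lam)
rq, rdq = rq_rdq(z, dx)
print(round(rq, 4), round(rdq, 4))    # analytic: A/sqrt(2), (2*pi*A/lam)/sqrt(2)
```

Restricting both instruments to the same band of spatial wavelengths before computing these parameters is what makes the one-to-one comparison in the abstract meaningful.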

  2. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  3. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  4. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
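
The error rates studied here can be reproduced in miniature with a Monte Carlo simulation. The sketch below estimates the Type I error (no effect) and the Type II error (a strong, 2-SD effect) of the two-sample t-test for n = 9; the normal population model and replication count are illustrative choices, not the authors' exact protocol:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def error_rates(n, effect, reps=4000, alpha=0.05):
    """Monte Carlo rejection rate of the two-sample t-test for two groups
    of size n from unit-variance normal populations separated by `effect`.
    Returns the Type I error if effect == 0, else the Type II error."""
    rejections = sum(
        stats.ttest_ind(rng.normal(0.0, 1.0, n),
                        rng.normal(effect, 1.0, n)).pvalue < alpha
        for _ in range(reps)
    )
    rate = rejections / reps
    return rate if effect == 0 else 1.0 - rate

type1 = error_rates(9, 0.0)   # should sit near alpha = 0.05
type2 = error_rates(9, 2.0)   # strong (2 SD) effect: small Type II error
print(f"Type I: {type1:.3f}  Type II (2 SD effect): {type2:.3f}")
```

Rerunning with n = 3-5 and a weak effect (e.g. 0.5 SD) reproduces the paper's point that small samples cannot detect weak effects at acceptable error rates.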

  5. Measurement of Capsaicinoids in Chiltepin Hot Pepper: A Comparison Study between Spectrophotometric Method and High Performance Liquid Chromatography Analysis

    Directory of Open Access Journals (Sweden)

    Alberto González-Zamora

    2015-01-01

    Full Text Available Direct spectrophotometric determination of capsaicinoids content in Chiltepin pepper was investigated as a possible alternative to HPLC analysis. Capsaicinoids were extracted from Chiltepin in red ripe and green fruit with acetonitrile and evaluated quantitatively using the HPLC method with capsaicin and dihydrocapsaicin standards. Three samples of different treatment were analyzed for their capsaicinoids content successfully by these methods. HPLC-DAD revealed that capsaicin, dihydrocapsaicin, and nordihydrocapsaicin comprised up to 98% of total capsaicinoids detected. The absorbance of the diluted samples was read on a spectrophotometer at 215–300 nm and monitored at 280 nm. We report herein the comparison between traditional UV assays and HPLC-DAD methods for the determination of the molar absorptivity coefficient of capsaicin (ε280=3,410 and ε280=3,720 M−1 cm−1) and dihydrocapsaicin (ε280=4,175 and ε280=4,350 M−1 cm−1), respectively. Statistical comparisons were performed using the regression analyses (ordinary linear regression and Deming regression) and Bland-Altman analysis. Comparative data for pungency was determined spectrophotometrically and by HPLC on samples ranging from 29.55 to 129 mg/g with a correlation of 0.91. These results indicate that the two methods significantly agree. The described spectrophotometric method can be routinely used for total capsaicinoids analysis and quality control in agricultural and pharmaceutical analysis.
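
Bland-Altman analysis, one of the statistical comparisons used here, reduces to the mean difference between the two methods and its 95% limits of agreement. A minimal sketch on invented pungency values (the data are hypothetical, not from the paper):

```python
import numpy as np

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement between two methods."""
    d = np.asarray(y, float) - np.asarray(x, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical pungency values (mg/g) by spectrophotometry vs. HPLC:
spec = [30.1, 45.2, 60.0, 75.4, 90.3, 110.2, 128.5]
hplc = [29.6, 46.0, 58.8, 76.1, 88.9, 112.0, 129.0]
bias, (lo, hi) = bland_altman(spec, hplc)
print(f"bias = {bias:.2f} mg/g, LoA = ({lo:.2f}, {hi:.2f})")
```

Two methods "agree" in the Bland-Altman sense when the limits of agreement are narrow enough to be clinically or analytically acceptable, not merely when the correlation is high.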

  6. Results of an interlaboratory comparison of analytical methods for contaminants of emerging concern in water.

    Science.gov (United States)

    Vanderford, Brett J; Drewes, Jörg E; Eaton, Andrew; Guo, Yingbo C; Haghani, Ali; Hoppe-Jones, Christiane; Schluesener, Michael P; Snyder, Shane A; Ternes, Thomas; Wood, Curtis J

    2014-01-07

    An evaluation of existing analytical methods used to measure contaminants of emerging concern (CECs) was performed through an interlaboratory comparison involving 25 research and commercial laboratories. In total, 52 methods were used in the single-blind study to determine method accuracy and comparability for 22 target compounds, including pharmaceuticals, personal care products, and steroid hormones, all at ng/L levels in surface and drinking water. Method biases varied widely among compounds; caffeine, NP, OP, and triclosan had false positive rates >15%. In addition, some methods reported false positives for 17β-estradiol and 17α-ethynylestradiol in unspiked drinking water and deionized water, respectively, at levels higher than published predicted no-effect concentrations for these compounds in the environment. False negative rates were also generally low; errors were attributed to contamination, misinterpretation of background interferences, and/or inappropriate setting of detection/quantification levels for analysis at low ng/L levels. The results of both comparisons were collectively assessed to identify parameters that resulted in the best overall method performance. Liquid chromatography-tandem mass spectrometry coupled with the calibration technique of isotope dilution was able to accurately quantify most compounds, with an average bias of <10% for both matrixes. These findings suggest that this method of analysis is suitable at environmentally relevant levels for most of the compounds studied. This work underscores the need for robust, standardized analytical methods for CECs to improve data quality, increase comparability between studies, and help reduce false positive and false negative rates.

  7. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
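
The core IBSC operation — estimate the scatter component by convolving the attenuation-corrected image with a scatter function, scale it by a scatter fraction, and subtract — can be sketched as follows. The Gaussian kernel and the constant scatter fraction are simplified stand-ins for the measured scatter function and the image-based scatter fraction function of the actual method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_like(image_ac, sigma, scatter_fraction):
    """Sketch of the IBSC idea: estimate the scatter component by
    convolving the attenuation-corrected image with a scatter function
    (here a Gaussian stand-in), scale it by a scatter fraction, and
    subtract it from the image."""
    scatter = scatter_fraction * gaussian_filter(image_ac, sigma)
    return image_ac - scatter

img = np.zeros((64, 64))
img[24:40, 24:40] = 100.0                       # toy uniform "brain" region
corrected = ibsc_like(img, sigma=6.0, scatter_fraction=0.3)
print(img.sum(), round(corrected.sum(), 1))     # counts reduced by ~30%
```

Because the scatter estimate is broader than the true activity, the subtraction raises image contrast, which is the quantity compared against the TEW method in the study.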

  8. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  9. WTA estimates using the method of paired comparison: tests of robustness

    Science.gov (United States)

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  10. A comparison of non-invasive versus invasive methods of ...

    African Journals Online (AJOL)

    Puneet Khanna

    for Hb estimation from the laboratory [total haemoglobin mass (tHb)] and arterial blood gas (ABG) machine (aHb), using ... making decisions for blood transfusions based on these results.

  11. Structural pattern recognition methods based on string comparison for fusion databases

    International Nuclear Information System (INIS)

    Dormido-Canto, S.; Farias, G.; Dormido, R.; Vega, J.; Sanchez, J.; Duro, N.; Vargas, H.; Ratta, G.; Pereira, A.; Portas, A.

    2008-01-01

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed

  12. Structural pattern recognition methods based on string comparison for fusion databases

    Energy Technology Data Exchange (ETDEWEB)

    Dormido-Canto, S. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain)], E-mail: sebas@dia.uned.es; Farias, G.; Dormido, R. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain); Sanchez, J.; Duro, N.; Vargas, H. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Ratta, G.; Pereira, A.; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain)

    2008-04-15

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed.

  13. What Happened to Remote Usability Testing? An Empirical Study of Three Methods

    DEFF Research Database (Denmark)

    Stage, Jan; Andreasen, M. S.; Nielsen, H. V.

    2007-01-01

    The idea of conducting usability tests remotely emerged ten years ago. Since then, it has been studied empirically, and some software organizations employ remote methods. Yet there are still few comparisons involving more than one remote method. This paper presents results from a systematic empirical comparison of three methods for remote usability testing and a conventional laboratory-based think-aloud method. The three remote methods are a remote synchronous condition, where testing is conducted in real time but the test monitor is separated spatially from the test subjects, and two remote...

  14. Comparison of DNA Quantification Methods for Next Generation Sequencing.

    Science.gov (United States)

    Robin, Jérôme D; Ludlow, Andrew T; LaRanger, Ryan; Wright, Woodring E; Shay, Jerry W

    2016-04-06

    Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C; 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library's heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR; ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. DdPCR-Tail is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality.

  15. Comparison of three noninvasive methods for hemoglobin screening of blood donors.

    Science.gov (United States)

    Ardin, Sergey; Störmer, Melanie; Radojska, Stela; Oustianskaia, Larissa; Hahn, Moritz; Gathof, Birgit S

    2015-02-01

    To prevent phlebotomy of anemic individuals and to ensure hemoglobin (Hb) content of the blood units, Hb screening of blood donors before donation is essential. Hb values are mostly evaluated by measurement of capillary blood obtained from fingerstick. Rapid noninvasive methods have recently become available and may be preferred by donors and staff. The aim of this study was to evaluate for the first time all different noninvasive methods for Hb screening. Blood donors were screened for Hb levels in three different trials using three different noninvasive methods (Haemospect [MBR Optical Systems GmbH & Co. KG], NBM 200 [LMB Technology GmbH], Pronto-7 [Masimo Europe Ltd]) in comparison to the established fingerstick method (CompoLab Hb [Fresenius Kabi GmbH]) and to levels obtained from venous samples on a cell counter (Sysmex [Sysmex Europe GmbH]) as reference. The usability of the noninvasive methods was assessed with an especially developed survey. Technical failures occurred by using the Pronto-7 due to nail polish, skin color, or ambient light. The NBM 200 also showed a high sensitivity to ambient light and noticeably lower Hb levels for women than obtained from the Sysmex. The statistical analysis showed the following bias and standard deviation of differences of all methods in comparison to the venous results: Haemospect, -0.22 ± 1.24; NBM 200, -0.12 ± 1.14; Pronto-7, -0.50 ± 0.99; and CompoLab Hb, -0.53 ± 0.81. Noninvasive Hb tests represent an attractive alternative by eliminating pain and reducing risks of blood contamination. The main problem for generating reliable results seems to be preanalytical variability in sampling. Despite the sensitivity to environmental stress, all methods are suitable for Hb measurement. © 2014 AABB.

  16. Two Methods for Turning and Positioning and the Effect on Pressure Ulcer Development: A Comparison Cohort Study.

    Science.gov (United States)

    Powers, Jan

    2016-01-01

    We evaluated 2 methods for patient positioning on the development of pressure ulcers; specifically, standard of care (SOC) using pillows versus a patient positioning system (PPS). The study also compared turning effectiveness as well as nursing resources related to patient positioning and nursing injuries. A nonrandomized comparison design was used for the study. Sixty patients from a trauma/neurointensive care unit were included in the study. Patients were assigned to 1 of 2 teams per standard bed placement practices at the institution. Patients were identified for enrollment in the study if they were immobile and mechanically ventilated with anticipation of 3 days or more on mechanical ventilation. Patients were excluded if they had a preexisting pressure ulcer. Patients were evaluated daily for the presence of pressure ulcers. Data were collected on the number of personnel required to turn patients. Once completed, the angle of the turn was measured. The occupational health database was reviewed to determine nurse injuries. The final sample size was 59 (SOC = 29; PPS = 30); there were no statistical differences between groups for age (P = .10), body mass index (P = .65), gender (P = .43), Braden Scale score (P = .46), or mobility score (P = .10). There was a statistically significant difference in the number of hospital-acquired pressure ulcers between turning methods (6 in the SOC group vs 1 in the PPS group; P = .042). The number of nurses needed for the SOC method was significantly higher than the PPS (P ≤ .001). The average turn angle achieved using the PPS was 31.03°, while the average turn angle achieved using SOC was 22.39°. The difference in turn angle from initial turn to 1 hour after turning in the SOC group was statistically significant. These findings suggest that the PPS is more effective than standard pillow positioning for turning patients to prevent development of pressure ulcers.

  17. Comparison of vibrational conductivity and radiative energy transfer methods

    Science.gov (United States)

    Le Bot, A.

    2005-05-01

    This paper is concerned with the comparison of two methods well suited for the prediction of the wideband response of built-up structures subjected to high-frequency vibrational excitation. The first method is sometimes called the vibrational conductivity method and the second one is rather known as the radiosity method in the field of acoustics, or the radiative energy transfer method. Both are based on quite similar physical assumptions i.e. uncorrelated sources, mean response and high-frequency excitation. Both are based on analogies with some equations encountered in the field of heat transfer. However these models do not lead to similar results. This paper compares the two methods. Some numerical simulations on a pair of plates joined along one edge are provided to illustrate the discussion.

  18. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and the choice of descriptor is vital. The performance of moments as powerful descriptors has not previously been discussed in terms of logo recognition, so it is unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we establish the relations between logos under different transforms and moments, i.e. which moments are suited to logos with different transforms. The open datasets are employed from the University of Maryland. The comparisons based on moments are carried out for logos with noise, rotation, scaling, and combined rotation and scaling.
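
Moment descriptors of the kind compared here are built from normalized central image moments. The sketch below computes the first Hu invariant, φ1 = η20 + η02, for a toy square "logo" at two scales, showing its approximate (up to discretization) scale invariance; the images are synthetic and the invariant choice is illustrative:

```python
import numpy as np

def hu_phi1(img):
    """First Hu moment invariant phi1 = eta20 + eta02, computed from
    central moments normalized by m00^2 (translation/scale invariant)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - xc) ** 2 * img).sum()
    mu02 = ((y - yc) ** 2 * img).sum()
    return (mu20 + mu02) / m00 ** 2

small = np.zeros((32, 32)); small[8:16, 8:16] = 1.0     # 8x8 square
big = np.zeros((64, 64)); big[16:32, 16:32] = 1.0       # same shape, 2x scale
phi1_small, phi1_big = hu_phi1(small), hu_phi1(big)
print(round(phi1_small, 4), round(phi1_big, 4))  # near-identical values
```

Higher-order Hu invariants add rotation invariance, which is exactly the property being compared across transforms in the paper.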

  19. Comparison of the xenon-133 washout method with the microsphere method in dog brain perfusion studies

    International Nuclear Information System (INIS)

    Heikkitae, J.; Kettunen, R.; Ahonen, A.

    1982-01-01

    The validity of the Xenon-washout method for estimating regional cerebral blood flow was tested against a radioactive microsphere method in anaesthetized dogs. The two-compartmental model seemed not to be well suited for cerebral perfusion studies by the Xe-washout method, although bi-exponential analysis of the washout curves gave perfusion values that correlated with the microsphere method but depended on the calculation method.

  20. Comparison of three flaw-location methods for automated ultrasonic testing

    International Nuclear Information System (INIS)

    Seiger, H.

    1982-01-01

    Two well-known methods for locating flaws by measurement of the transit time of ultrasonic pulses are examined theoretically. It is shown that neither is sufficiently reliable for use in automated ultrasonic testing. A third method, which takes into account the shape of the sound field from the probe and the uncertainty in measurement of probe-flaw distance and probe position, is introduced. An experimental comparison of the three methods indicates that use of the advanced method results in more accurate location of flaws. (author)
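
Transit-time flaw location rests on simple pulse-echo geometry: half the round-trip transit time gives the one-way sound path, which is resolved into surface standoff and depth via the probe's beam angle. A sketch with illustrative values (the shear-wave velocity and probe angle are typical textbook numbers, not from the paper, and the sketch ignores the beam-shape and measurement-uncertainty terms of the third method):

```python
import math

def flaw_position(transit_time_us, velocity_mm_per_us, beam_angle_deg):
    """Pulse-echo flaw location with an angle-beam probe: half the
    round-trip transit time gives the one-way sound path, which is
    resolved into surface standoff and depth via the beam angle."""
    path = velocity_mm_per_us * transit_time_us / 2.0
    a = math.radians(beam_angle_deg)
    return path * math.sin(a), path * math.cos(a)

# Shear waves in steel (~3.2 mm/us), 45 degree probe, 20 us round trip:
standoff, depth = flaw_position(20.0, 3.2, 45.0)
print(f"standoff = {standoff:.1f} mm, depth = {depth:.1f} mm")
```

The advanced method in the abstract improves on this by weighting the answer with the probe's sound-field shape and the uncertainties in the measured distance and probe position.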

  1. Comparison of the direct enzyme assay method with the membrane ...

    African Journals Online (AJOL)

    Comparison of the direct enzyme assay method with the membrane filtration technique in the quantification and monitoring of microbial indicator organisms – seasonal variations in the activities of coliforms and E. coli, temperature and pH.

  2. Comparison of radioimmunology and serology methods for LH determination

    International Nuclear Information System (INIS)

    Szymanski, W.; Jakowicki, J.

    1976-01-01

    A comparison is presented of LH determinations by immunoassay and radioimmunoassay using the 125 I-labelled HCG double antibody system. The results obtained by both methods are well comparable and in normal and elevated LH levels the serological determinations may be used for estimating concentration levels found by RIA. (L.O.)

  3. Comparison of Video Steganography Methods for Watermark Embedding

    Directory of Open Access Journals (Sweden)

    Griberman David

    2016-05-01

    Full Text Available The paper focuses on the comparison of video steganography methods for the purpose of digital watermarking in the context of copyright protection. Four embedding methods that use Discrete Cosine and Discrete Wavelet Transforms have been researched and compared based on their embedding efficiency and fidelity. A video steganography program has been developed in the Java programming language with all of the researched methods implemented for experiments. The experiments used 3 video containers with different amounts of movement. The impact of the movement has been addressed in the paper as well as the ways of potential improvement of embedding efficiency using adaptive embedding based on the movement amount. Results of the research have been verified using a survey with 17 participants.
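
DCT-based watermark embedding of the kind compared here can be sketched with quantization index modulation (QIM) of a single mid-band coefficient — one common approach, not necessarily the exact methods of the paper. The coefficient position and quantization step are hypothetical choices:

```python
import numpy as np
from scipy.fft import dctn, idctn

DELTA = 16.0      # quantization step: larger = more robust, less fidelity
COEFF = (2, 3)    # a mid-band DCT coefficient (hypothetical choice)

def embed_bit(block, bit):
    """Embed one watermark bit in an 8x8 block by quantization index
    modulation of a single mid-band DCT coefficient."""
    c = dctn(block, norm="ortho")
    base = DELTA * np.round(c[COEFF] / DELTA)
    c[COEFF] = base + (DELTA / 4 if bit else -DELTA / 4)
    return idctn(c, norm="ortho")

def extract_bit(block):
    c = dctn(block, norm="ortho")
    frac = c[COEFF] - DELTA * np.round(c[COEFF] / DELTA)
    return int(frac > 0)

rng = np.random.default_rng(0)
block = rng.uniform(0.0, 255.0, (8, 8))    # stand-in for a luma block
recovered = [extract_bit(embed_bit(block, b)) for b in (0, 1)]
print(recovered)  # [0, 1]
```

Motion between frames perturbs the coefficients, so in an adaptive scheme (as discussed in the paper) the embedding strength or block selection would depend on the amount of movement.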

  4. A comparison of three time-domain anomaly detection methods

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance -the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).

  5. A comparison of three time-domain anomaly detection methods

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance - the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)
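The sequential probability ratio test singled out above can be sketched in a few lines. The following is a minimal illustration (not the authors' implementation) of a one-sided SPRT for a step increase of the residual standard deviation, assuming zero-mean Gaussian residuals; the alarm threshold uses Wald's approximation, and the statistic is reset at zero rather than formally accepting the no-anomaly hypothesis:

```python
import math

def sprt_std_change(residuals, sigma0, sigma1, alpha=0.01, beta=0.01):
    """One-sided SPRT for an increase of the residual standard deviation
    from sigma0 (normal) to sigma1 (anomalous), assuming zero-mean
    Gaussian residuals.  Returns the index of the first alarm, or None."""
    a = math.log((1.0 - beta) / alpha)   # alarm threshold (Wald's approximation)
    llr = 0.0
    for i, x in enumerate(residuals):
        # log-likelihood ratio increment of N(0, sigma1^2) vs N(0, sigma0^2)
        llr += math.log(sigma0 / sigma1) + x * x * (
            1.0 / (2 * sigma0 ** 2) - 1.0 / (2 * sigma1 ** 2))
        llr = max(llr, 0.0)              # reset at zero instead of accepting H0
        if llr > a:
            return i
    return None
```

With sigma0 = 1 and sigma1 = 2 each sample contributes -ln 2 + 0.375 x², so small residuals drive the statistic down (clipped at zero) and large residuals drive it toward the alarm threshold.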

  6. Comparison of Satellite Surveying to Traditional Surveying Methods for the Resources Industry

    Science.gov (United States)

    Osborne, B. P.; Osborne, V. J.; Kruger, M. L.

    Modern ground-based survey methods involve detailed surveys that provide three-space co-ordinates for surveyed points to a high level of accuracy. The instruments are operated by surveyors, who process the raw results to create survey location maps for the subject of the survey. Such surveys are conducted for a location or region and referenced to the earth's global co-ordinate system with global positioning system (GPS) positioning. Due to this referencing, the survey is only as accurate as the GPS reference system. Satellite survey remote sensing utilises satellite imagery that has been processed using commercial geographic information system software. Three-space co-ordinate maps are generated, with an accuracy determined by the datum position accuracy and the optical resolution of the satellite platform. This paper presents a case study which compares topographic surveying undertaken by traditional survey methods with satellite surveying for the same location. The purpose of this study is to assess the viability of satellite remote sensing for surveying in the resources industry. The case study involves a topographic survey of a dune field for a prospective mining project area in Pakistan. This site has been surveyed using modern surveying techniques and the results are compared to a satellite survey performed on the same area. Analysis of the results from the traditional survey and from the satellite survey involved a comparison of the derived spatial co-ordinates from each method. In addition, comparisons have been made of costs and turnaround time for both methods. The results of this application of remote sensing are of particular interest for surveys in areas with remote and extreme environments, weather extremes, political unrest or poor travel links, which are commonly associated with mining projects. Such areas frequently suffer from language barriers, poor onsite technical support and limited resources.

  7. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb obtained with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  8. Comparison of methods to identify crop productivity constraints in developing countries. A review

    NARCIS (Netherlands)

    Kraaijvanger, R.G.M.; Sonneveld, M.P.W.; Almekinders, C.J.M.; Veldkamp, T.

    2015-01-01

    Selecting a method for identifying actual crop productivity constraints is an important step in triggering innovation processes. Applied methods can be diverse, and although such methods have consequences for the design of intervention strategies, documented comparisons between various methods are rare.

  9. Comparison Re-invented: Adaptation of Universal Methods to African Studies (Conference Report)

    Directory of Open Access Journals (Sweden)

    Franzisca Zanker

    2013-01-01

    Full Text Available Drawing from a combination of specific, empirical research projects with different theoretical backgrounds, a workshop discussed one methodological aspect often somewhat overlooked in African Studies: comparison. Participants addressed several questions, along with presenting overviews of how different disciplines within African Studies approach comparison in their research and naming specific challenges within individual research projects. The questions examined included: Why is explicit comparative research so rare in African Studies? Is comparative research more difficult in the African context than in other regions? Does it benefit our research? Should scholars strive to generalise beyond individual cases? Do studies in our field require an explicit comparative design, or will implicit comparison suffice? Cross-discipline communication should help us to move forward in this methodological debate, though in the end the subject matter and specific research question will lead to the appropriate comparative approach, not the other way round.

  10. cp-R, an interface to the R programming language for clinical laboratory method comparisons.

    Science.gov (United States)

    Holmes, Daniel T

    2015-02-01

    Clinical scientists frequently need to compare two different bioanalytical methods as part of assay validation/monitoring. As a matter of necessity, regression methods for quantitative comparison in clinical chemistry, hematology and other clinical laboratory disciplines must allow for error in both the x and y variables. Traditionally the methods popularized by 1) Deming and 2) Passing and Bablok have been recommended. While commercial tools exist, no simple open-source tool is available. The purpose of this work was to develop an entirely open-source, GUI-driven program for bioanalytical method comparisons capable of performing these regression methods and able to produce highly customized graphical output. The GUI is written in Python and PyQt4, with R scripts performing the regression and graphical functions. The program can be run from source code or as a pre-compiled binary executable. The software performs three forms of regression and offers weighting where applicable. Confidence bands of the regression are calculated using bootstrapping for the Deming and Passing-Bablok methods. Users can customize regression plots according to the tools available in R and can produce output in any of jpg, png, tiff or bmp formats at any desired resolution, or in ps and pdf vector formats. Bland-Altman plots and some regression diagnostic plots are also generated. Correctness of the regression parameter estimates was confirmed against existing R packages. The program allows for rapid and highly customizable graphical output capable of conforming to the publication requirements of any clinical chemistry journal. Quick method comparisons can also be performed and the results cut and pasted into spreadsheet or word processing applications. We present a simple and intuitive open-source tool for quantitative method comparison in a clinical laboratory environment. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
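For illustration, Deming regression (one of the regression methods the record mentions) reduces to a closed-form slope once the ratio of the two measurement-error variances is fixed. The sketch below is a generic textbook version, not code from cp-R; `lam` is the assumed error-variance ratio, with lam = 1 giving orthogonal regression:

```python
import math

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept, allowing measurement error in
    both x and y.  `lam` is the assumed ratio of the error variances
    (lam = 1 gives orthogonal regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    # closed-form slope from the larger root of the Deming quadratic
    slope = (syy - lam * sxx + math.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx
```

On exactly collinear data (y = 2x + 1) the estimate recovers the line, which makes a convenient sanity check for any implementation.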

  11. A comparison of methods for evaluating structure during ship collisions

    International Nuclear Information System (INIS)

    Ammerman, D.J.; Daidola, J.C.

    1996-01-01

    A comparison is provided of the results of various methods for evaluating structure during a ship-to-ship collision. The baseline vessel utilized in the analyses is a 67.4 meter in length displacement hull struck by an identical vessel traveling at speeds ranging from 10 to 30 knots. The structural response of the struck vessel and the motion of both the struck and striking vessels are assessed by finite element analysis. These same results are then compared to predictions utilizing the "Tanker Structural Analysis for Minor Collisions" (TSAMC) method, the Minorsky method, and the Haywood collision process, as well as to full-scale tests. Consideration is given to the nature of structural deformation, absorbed energy, penetration, rigid body motion, and the virtual mass affecting the hydrodynamic response. Insights are provided with regard to the calibration of the finite element model, which was achievable through utilizing the more empirical analyses, and the extent to which the finite element analysis is able to simulate the entire collision event. 7 refs., 8 figs., 4 tabs

  12. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

    Full Text Available Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models in addition to the model predictive control strategy appear as a promising solution to energy management in microgrids. The controller has the task of performing the management of electricity purchase and sale to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed with different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.
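As a toy illustration of the receding-horizon idea behind model predictive control, the sketch below enumerates battery charge/discharge plans over a short forecast horizon, costs each plan against demand and solar forecasts, and applies only the first action of the cheapest feasible plan. All quantities (capacity, charge rate, price, the zero-revenue export) are hypothetical simplifications, not the controller formulated in the paper:

```python
from itertools import product

def mpc_step(soc, demand_fc, solar_fc, cap=5.0, rate=1.0, price=1.0):
    """One receding-horizon step: enumerate battery actions over the horizon
    (charge +rate, idle, discharge -rate), simulate the energy balance and
    return the first action of the cheapest plan together with its cost.
    Grid imports cost `price` per unit; surplus is exported at zero revenue
    in this simplified sketch."""
    horizon = len(demand_fc)
    best_cost, best_action = float("inf"), 0.0
    for plan in product((-rate, 0.0, rate), repeat=horizon):
        s, cost, feasible = soc, 0.0, True
        for a, d, p in zip(plan, demand_fc, solar_fc):
            s += a                        # battery state of charge
            if not 0.0 <= s <= cap:
                feasible = False          # plan violates battery limits
                break
            grid = d + a - p              # charging adds load, solar offsets it
            if grid > 0:
                cost += price * grid      # only imports are billed
        if feasible and cost < best_cost:
            best_cost, best_action = cost, plan[0]
    return best_action, best_cost
```

In a solar-surplus step followed by a deficit step, the cheapest plan is to charge first and discharge later, which is exactly the storage-management behaviour the abstract describes.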

  13. Comparison of seven optical clearing methods for mouse brain

    Science.gov (United States)

    Wan, Peng; Zhu, Jingtan; Yu, Tingting; Zhu, Dan

    2018-02-01

    Recently, a variety of tissue optical clearing techniques have been developed to reduce light scattering for deeper imaging and three-dimensional reconstruction of tissue structures. Combined with optical imaging techniques and diverse labeling methods, these clearing methods have significantly promoted the development of neuroscience. However, most of the protocols were proposed for a specific tissue type. Though some comparison results exist, the clearing methods covered are limited and the evaluation indices lack uniformity, which makes it difficult to select a best-fit protocol for clearing in practical applications. Hence, it is necessary to systematically assess and compare these clearing methods. In this work, we evaluated the performance of seven typical clearing methods, including 3DISCO, uDISCO, SeeDB, ScaleS, ClearT2, CUBIC and PACT, on mouse brain samples. First, we compared the clearing capability on both brain slices and whole brains by observing brain transparency. Further, we evaluated the fluorescence preservation and the increase of imaging depth. The results showed that 3DISCO, uDISCO and PACT possessed excellent clearing capability on mouse brains, ScaleS and SeeDB rendered moderate transparency, while ClearT2 was the worst. Among these methods, ScaleS was the best at fluorescence preservation, and PACT achieved the highest increase of imaging depth. This study is expected to provide an important reference for users in choosing the most suitable brain optical clearing method.

  14. Comparison of the analysis result between two laboratories using different methods

    International Nuclear Information System (INIS)

    Sri Murniasih; Agus Taftazani

    2017-01-01

    This work compares the analysis results for a volcanic ash sample between two laboratories using different analysis methods. The research aims to improve testing laboratory quality and to foster cooperation with testing laboratories from other countries. Samples were tested at the Center for Accelerator Science and Technology (CAST)-NAA laboratory using NAA, and at the University of Texas (UT), USA, using the ICP-MS and ENAA methods. Of the 12 target elements, the CAST-NAA laboratory was able to report analysis data for 11. The comparison shows that the results for the K, Mn, Ti and Fe elements from the two laboratories agree very closely, as is evident from the RSD values and correlation coefficients of the two laboratories' results. Examination of the differences showed that the results for the Al, Na, K, Fe, V, Mn, Ti, Cr and As elements from the two laboratories are not significantly different. Of the 11 elements reported, only Zn had significantly different values between the two laboratories. (author)

  15. Differential and difference equations a comparison of methods of solution

    CERN Document Server

    Maximon, Leonard C

    2016-01-01

    This book, intended for researchers and graduate students in physics, applied mathematics and engineering, presents a detailed comparison of the important methods of solution for linear differential and difference equations - variation of constants, reduction of order, Laplace transforms and generating functions - bringing out the similarities as well as the significant differences in the respective analyses. Equations of arbitrary order are studied, followed by a detailed analysis for equations of first and second order. Equations with polynomial coefficients are considered and explicit solutions for equations with linear coefficients are given, showing significant differences in the functional form of solutions of differential equations from those of difference equations. An alternative method of solution involving transformation of both the dependent and independent variables is given for both differential and difference equations. A comprehensive, detailed treatment of Green’s functions and the associat...
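The parallel the book draws can be seen already for first-order linear equations, where variation of constants yields structurally analogous solutions (these are the standard textbook formulas, not excerpts from the book):

```latex
% First-order linear differential equation and its discrete analogue,
% both solved by variation of constants:
y'(t) - a\,y(t) = f(t)
  \quad\Longrightarrow\quad
  y(t) = e^{a t}\, y(0) + \int_0^{t} e^{a(t-s)} f(s)\,ds ,
\qquad
y_{n+1} - a\,y_n = f_n
  \quad\Longrightarrow\quad
  y_n = a^{n} y_0 + \sum_{k=0}^{n-1} a^{\,n-1-k} f_k .
```

The integral kernel e^{a(t-s)} and the discrete kernel a^{n-1-k} play the same role, which is exactly the kind of similarity-with-differences the book sets out to display.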

  16. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado

    2014-01-01

    The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally, 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). ANOVA … comparisons the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers by CATA is comparable to that of Napping. The choice of methodology for consumer …

  17. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Full Text Available Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land use change, or climate shifts. A plethora of methods is available for describing the hydraulic geometry of alluvial rivers in regime. However, a comparison of these methods using the same set of data seems lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar's method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and 'ME and MEDR' give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses which do not contain bank stability criteria are inappropriate for use is shown to be false, as both MFN and 'ME and MEDR' lack bank stability criteria. Also, we find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and 'ME and MEDR'.

  18. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study.

    Science.gov (United States)

    Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-08-16

    To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources: Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria: systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Inconsistency was measured by the difference in the log odds ratio between the direct and indirect methods. The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). Statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
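The inconsistency measure used here, the difference between the direct log odds ratio and the indirect one formed via a common comparator, can be computed in a few lines. The sketch follows Bucher's adjusted indirect comparison; the input numbers used to exercise it are made up for illustration:

```python
import math

def bucher_inconsistency(lor_dir, se_dir, lor_ac, se_ac, lor_bc, se_bc):
    """Compare a direct A-vs-B log odds ratio with the indirect estimate
    formed from A-vs-C and B-vs-C trials (Bucher's adjusted indirect
    comparison).  Returns (difference, standard error, z score)."""
    lor_ind = lor_ac - lor_bc                       # indirect A-vs-B estimate
    diff = lor_dir - lor_ind                        # inconsistency
    se = math.sqrt(se_dir ** 2 + se_ac ** 2 + se_bc ** 2)
    return diff, se, diff / se
```

A |z| above about 1.96 would flag statistically significant inconsistency at the 5% level, the criterion counted in the 16/112 figure above.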

  19. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Science.gov (United States)

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...

  20. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Full Text Available Methods for geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test-fields with a well-known pattern, which are observed from different directions. The parameters of a distortion model are calculated using bundle-block-adjustment algorithms. This method works well for short focal lengths, but is substantially more problematic with large focal lengths, which would require very large test-fields and surrounding space. To overcome this problem, there is another common calibration method used in remote sensing. It employs measurements using a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of a well-known pattern. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis. The resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the gained distortion parameters as fixed input into the calibration algorithms and only adjusting the exterior orientation. The RMS (root mean square) of the remaining image coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods. Nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.
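The cross-comparison step, applying different parameter sets to the same points and measuring RMS image-coordinate residuals, can be illustrated with a toy radial distortion model. This is a simple two-term Brown-style model for illustration only, not the full distortion model used by Australis, and the parameter values in the check below are hypothetical:

```python
import math

def radial_distort(x, y, k1, k2):
    """Apply a simple two-term radial distortion (Brown model) to an image
    point given relative to the principal point."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def rms_residual(points, params_a, params_b):
    """RMS distance between the same points distorted with two different
    parameter sets -- a toy stand-in for cross-comparing calibrations."""
    total = 0.0
    for x, y in points:
        xa, ya = radial_distort(x, y, *params_a)
        xb, yb = radial_distort(x, y, *params_b)
        total += (xa - xb) ** 2 + (ya - yb) ** 2
    return math.sqrt(total / len(points))
```

Identical parameter sets give zero residual, and the residual grows with the parameter difference and with distance from the principal point, mirroring how the paper's RMS measure quantifies disagreement between calibrations.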

  1. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method over the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by the introduction of the geostatistical method to the current state-of-the-art in uranium reserve estimation method and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide
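The conventional method (b), the inverse of the distance squared, admits a compact sketch. The function below is a generic illustration of inverse-distance weighting with made-up sample coordinates in the check, not the study's implementation:

```python
def idw_estimate(samples, target, power=2.0):
    """Inverse-distance-weighted grade estimate at `target` from
    (x, y, value) samples -- the 'inverse of the distance squared'
    method when power = 2."""
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return v                     # target coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)    # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den
```

Method (c) in the record differs only in that the weights are made direction-dependent; geostatistical kriging instead derives the weights from a fitted variogram, which is the source of its theoretical advantage.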

  2. Comparison of non-parametric methods for ungrouping coarsely aggregated data

    DEFF Research Database (Denmark)

    Rizzi, Silvia; Thinggaard, Mikael; Engholm, Gerda

    2016-01-01

    … group at the highest ages. When histogram intervals are too coarse, information is lost and comparison between histograms with different boundaries is arduous. In these cases it is useful to estimate detailed distributions from grouped data. Methods: From an extensive literature search we identify five …

  3. Liquid metal reactor/Pressurized water reactor plant comparison study

    International Nuclear Information System (INIS)

    Halverson, T.G.

    1986-01-01

    The selection between alternative electric power generating technologies is mainly based on their overall economics. Capital costs account for over 60% of the total busbar cost of nuclear plants. Estimates reported in the literature have shown capital cost ratios of LMRs to PWRs ranging from less than 1 to as high as 1.8. To reduce this range of uncertainty, the study selected a method for cataloging plant hardware and then performed comparisons using engineering judgment as to the anticipated and reasonable cost differences. The paper summarizes the resulting one-on-one comparisons of components, systems, and buildings and identifies the LMR-PWR similarities and differences which influence costs. The study leads to the conclusion that the capital cost of the most up-to-date large LMR design would be very close to that of the latest PWRs

  4. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Full Text Available An experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the requirements for stability of the setup. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions, which are recorded as the recording matrix is moved along the optical axis. The obtained data are used consecutively for wavefront reconstruction using an iterative procedure, in the course of which numerical propagation of the wavefront between the planes is performed. Thus, phase information of the wavefront is stored in every plane, and the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, which use a reference wave in the recording scheme and differ from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is also shown that, as the object amplitude decreases, the holographic method for reconstructing the object complex amplitude performs best among the considered methods.
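The angular spectrum propagation used by the second iterative method can be sketched as follows, assuming a square sampling grid and a monochromatic field; evanescent components are simply suppressed in this minimal version, which is a generic textbook implementation rather than the authors' code:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the angular
    spectrum method.  `dx` is the sample pitch; evanescent spatial
    frequencies (fx^2 + fy^2 > 1/wavelength^2) are set to zero."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    h = np.exp(1j * kz * z) * (arg > 0)        # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * h)
```

Because the transfer function has unit modulus for propagating components, propagating forward by z and back by -z recovers the original field, a useful self-test when building iterative reconstruction loops like those described above.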

  5. Comparison between the KBS-3 method and the deep borehole for final disposal of spent nuclear fuel

    International Nuclear Information System (INIS)

    Grundfelt, Bertil

    2010-09-01

    In this report a comparison is made between disposal of spent nuclear fuel according to the KBS-3 method and disposal in very deep boreholes. The objective has been to make a broad comparison between the two methods and, by doing so, to pinpoint factors that distinguish them from each other. The ambition has been to make as fair a comparison as possible, despite the fact that the quality of the relevant data differs greatly between the methods.

  6. Comparison of biochemical and molecular methods for the identification of bacterial isolates associated with failed loggerhead sea turtle eggs.

    Science.gov (United States)

    Awong-Taylor, J; Craven, K S; Griffiths, L; Bass, C; Muscarella, M

    2008-05-01

    Comparison of biochemical vs molecular methods for identification of microbial populations associated with failed loggerhead turtle eggs. Two biochemical (API and Microgen) and one molecular method (16S rRNA analysis) were compared in the areas of cost, identification, corroboration of data with other methods, ease of use, resources and software. The molecular method was costly and identified only 66% of the isolates tested, compared with 74% for API. A 74% discrepancy in identifications occurred between API and 16S rRNA analysis. The two biochemical methods were comparable in cost, but Microgen was easier to use and yielded the lowest discrepancy among identifications (29%) when compared with both API 20 enteric (API 20E) and API 20 non-enteric (API 20NE) combined. A comparison of API 20E and API 20NE indicated an 83% discrepancy between the two methods. The Microgen identification system appears to be better suited than API or 16S rRNA analysis for the identification of environmental isolates associated with failed loggerhead eggs. Most identification methods are not intended for use with environmental isolates. A comparison of identification systems would provide better options for identifying environmental bacteria for ecological studies.

  7. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    Science.gov (United States)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method computes the dose without tissue density correction using the Pencil Beam Convolution (PBC) algorithm, whereas the other methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for the lung cancer patients. The Wilcoxon signed-rank test of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods.
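Friedman's test statistic used in the study has a simple closed form, and computing it from scratch makes the arithmetic explicit. The sketch below ranks each block (here, each treatment field) across the k methods, with average ranks for ties; the data in the check are illustrative values, not the study's doses:

```python
def friedman_statistic(blocks):
    """Friedman chi-square statistic for k related samples.  `blocks` is a
    list of per-block measurements, one value per method; ties within a
    block receive average ranks."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # extend over any run of tied values
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0        # average rank for the tie group
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi-square statistic with k - 1 degrees of freedom
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)
```

With k = 3 methods the statistic is referred to a chi-square distribution with 2 degrees of freedom, so values above about 5.99 are significant at the 5% level; a significant result is then typically followed by pairwise Wilcoxon signed-rank tests, as in the study.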

  8. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight and extremely low power, and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented along with measured data from devices.

  9. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to forecasting hydrological processes. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, of which 9 are ML methods. Twelve simulation experiments are performed, each using 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that no method is uniformly better or worse than the others. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts.
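
    The fitting/testing design described in this record can be sketched in a few lines. Below is a minimal numpy-only illustration using an AR(1) process (a special case of the ARMA family): simulate 310 observations, fit on the first 300, and evaluate a 10-step-ahead forecast on the last 10. All parameter values are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.7                                 # true AR(1) coefficient (assumed)
n = 310
x = np.zeros(n)
for t in range(1, n):                     # simulate x_t = phi * x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.normal()

fit, test = x[:300], x[300:]              # fitting set / testing set split

# Least-squares estimate of the AR(1) coefficient on the fitting set
phi_hat = np.dot(fit[1:], fit[:-1]) / np.dot(fit[:-1], fit[:-1])

# Recursive multi-step-ahead point forecast from the last fitted value
preds = []
last = fit[-1]
for _ in range(10):
    last = phi_hat * last
    preds.append(last)
preds = np.array(preds)

# One of the many possible accuracy metrics over the testing set
rmse = np.sqrt(np.mean((test - preds) ** 2))
```

A full replication would repeat this over thousands of simulated series and many fitted models, and score each with the whole battery of metrics.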

  10. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    Science.gov (United States)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  11. Comparison of direct and indirect methods of estimating health state utilities for resource allocation: review and empirical analysis.

    Science.gov (United States)

    Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard

    2009-07-22

    Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods they were 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2) and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.
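
    The sign test for paired comparisons mentioned in this record reduces to a binomial test on how often the direct utility exceeds the indirect one, with ties dropped. A hedged sketch with scipy; the utility values below are invented for illustration, not the review's data.

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical paired utilities for the same health states
direct   = np.array([0.85, 0.80, 0.78, 0.90, 0.70, 0.82, 0.76, 0.88])
indirect = np.array([0.60, 0.65, 0.70, 0.72, 0.55, 0.66, 0.80, 0.70])

diff = direct - indirect
n_pos = int(np.sum(diff > 0))        # pairs where the direct rating is higher
n_trials = int(np.sum(diff != 0))    # ties are dropped, as in a sign test

# Under H0 (no systematic difference), n_pos ~ Binomial(n_trials, 0.5)
result = binomtest(n_pos, n_trials, p=0.5, alternative="two-sided")
print(n_pos, n_trials, result.pvalue)   # → 7 8 0.0703125
```

With only 8 invented pairs the test is underpowered; the review pools 83 such instances.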

  12. Critical comparison between equation of motion-Green's function methods and configuration interaction methods: analysis of methods and applications

    International Nuclear Information System (INIS)

    Freed, K.F.; Herman, M.F.; Yeager, D.L.

    1980-01-01

    A description is provided of the common conceptual origins of many-body equations of motion and Green's function methods in Liouville operator formulations of the quantum mechanics of atomic and molecular electronic structure. Numerical evidence is provided to show the inadequacies of the traditional strictly perturbative approaches to these methods. Nonperturbative methods are introduced by analogy with techniques developed for handling large configuration interaction calculations and by evaluating individual matrix elements to higher accuracy. The important role of higher excitations is exhibited by the numerical calculations, and explicit comparisons are made between converged equations of motion and configuration interaction calculations for systems where a fundamental theorem requires the equality of the energy differences produced by these different approaches. (Auth.)

  13. Comparison of breeding methods for forage yield in red clover

    Directory of Open Access Journals (Sweden)

    Libor Jalůvka

    2009-01-01

    Full Text Available Three methods of red clover (Trifolium pratense L.) breeding for forage yield were compared in two harvest years at locations in Bredelokke (Denmark), Hladké Životice (Czech Republic) and Les Alleuds (France). Three types of 46 candivars, developed by (A) recurrent selection in subsequent generations (37 candivars, divided into early and late groups), (B) polycross progenies (4 candivars) and (C) geno-phenotypic selection (5 candivars), were compared. The trials were sown in 2005 and cut three times in 2006 and 2007; their evaluation is based primarily on total yield of dry matter. The candivars developed by polycross and geno-phenotypic selection gave significantly higher yields than candivars from the recurrent selection. However, the candivars developed by methods B and C did not differ significantly. The candivars developed by these progressive methods were suitable for the higher-yielding and drier environment in Hladké Životice (which had the highest yield level even though average annual precipitation was lower by 73 and 113 mm in comparison to the other locations, respectively); there the average yield was higher by 19 and 13% for B and C in comparison to method A. A highly significant interaction of the candivars with locations was found. It can be concluded that varieties specifically aimed at different locations should be bred by methods B and C; the parental entries should also be selected there.

  14. Environmental impact from different modes of transport. Method of comparison

    International Nuclear Information System (INIS)

    2002-03-01

    A prerequisite of long-term sustainable development is that activities of various kinds are adjusted to what humans and the natural world can tolerate. Transport is an activity that affects humans and the environment to a very great extent and in this project, several actors within the transport sector have together laid the foundation for the development of a comparative method to be able to compare the environmental impact at the different stages along the transport chain. The method analyses the effects of different transport concepts on the climate, noise levels, human health, acidification, land use and ozone depletion. Within the framework of the method, a calculation model has been created in Excel which acts as a basis for the comparisons. The user can choose to download the model from the Swedish EPA's on-line bookstore or order it on a floppy disk. Neither the method nor the model is as yet fully developed, but our hope is that they can still be used in their present form as a basis and inspire further efforts and research in the field. In the report, we describe most of these shortcomings, the problems associated with the work and the existing development potential. This publication should be seen as the first stage in the development of a method of comparison between different modes of transport in non-monetary terms where there remains a considerable need for further development and amplification

  15. Multiple comparisons in drug efficacy studies: scientific or marketing principles?

    Science.gov (United States)

    Leo, Jonathan

    2004-01-01

    When researchers design an experiment to compare a given medication to another medication, a behavioral therapy, or a placebo, the experiment often involves numerous comparisons. For instance, there may be several different evaluation methods, raters, and time points. Although scientifically justified, such comparisons can be abused in the interests of drug marketing. This article provides two recent examples of such questionable practices. The first involves the case of the arthritis drug celecoxib (Celebrex), where the study lasted 12 months but the authors only presented 6 months of data. The second involves the NIMH Multimodal Treatment Study (MTA), which evaluated the efficacy of stimulant medication for attention-deficit hyperactivity disorder and in which ratings made by several groups are reported in contradictory fashion. The MTA authors have not clarified the confusion, at least in print, suggesting that the actual findings of the study may have played little role in the authors' reported conclusions.

  16. VerSi. A method for the quantitative comparison of repository systems

    Energy Technology Data Exchange (ETDEWEB)

    Kaempfer, T.U.; Ruebel, A.; Resele, G. [AF-Consult Switzerland Ltd, Baden (Switzerland); Moenig, J. [GRS Braunschweig (Germany)

    2015-07-01

    Decision making and design processes for radioactive waste repositories are guided by safety goals that need to be achieved. In this context, the comparison of different disposal concepts can provide relevant support to better understand the performance of the repository systems. Such a task requires a method for a traceable comparison that is as objective as possible. We present a versatile method that allows for the comparison of different disposal concepts in potentially different host rocks. The condition for the method to work is that the repository systems are defined to a comparable level, including designed repository structures, disposal concepts, and engineered and geological barriers, which are all based on site-specific safety requirements. The method is primarily based on quantitative analyses and probabilistic model calculations regarding the long-term safety of the repository systems under consideration. The crucial evaluation criteria for the comparison are statistical key figures of indicators that characterize the radiotoxicity flux out of the so-called containment-providing rock zone (einschlusswirksamer Gebirgsbereich). The key figures account for existing uncertainties with respect to the actual site properties, the safety relevant processes, and the potential future impact of external processes on the repository system, i.e., they include scenario-, process-, and parameter-uncertainties. The method (1) leads to an evaluation of the retention and containment capacity of the repository systems and its robustness with respect to existing uncertainties as well as to potential external influences; (2) specifies the procedures for the system analyses and the calculation of the statistical key figures as well as for the comparative interpretation of the key figures; and (3) also gives recommendations and sets benchmarks for the comparative assessment of the repository systems under consideration based on the key figures and additional qualitative

  17. Removal of phenol from water : a comparison of energization methods

    NARCIS (Netherlands)

    Grabowski, L.R.; Veldhuizen, van E.M.; Rutgers, W.R.

    2005-01-01

    Direct electrical energization methods for removal of persistent substances from water are under investigation in the framework of the ytriD-project. The emphasis of the first stage of the project is the energy efficiency. A comparison is made between a batch reactor with a thin layer of water and

  18. Real-world comparison of two molecular methods for detection of respiratory viruses

    Directory of Open Access Journals (Sweden)

    Miller E Kathryn

    2011-06-01

    Full Text Available Abstract Background Molecular polymerase chain reaction (PCR) based assays are increasingly used to diagnose viral respiratory infections and conduct epidemiology studies. Molecular assays have generally been evaluated by comparing them to conventional direct fluorescent antibody (DFA) or viral culture techniques, with few published direct comparisons between molecular methods or between institutions. We sought to perform a real-world comparison of two molecular respiratory viral diagnostic methods between two experienced respiratory virus research laboratories. Methods We tested nasal and throat swab specimens obtained from 225 infants with respiratory illness for 11 common respiratory viruses using both a multiplex assay (Respiratory MultiCode-PLx Assay [RMA]) and individual real-time RT-PCR (RT-rtPCR). Results Both assays detected viruses in more than 70% of specimens, but there was discordance. The RMA assay detected significantly more human metapneumovirus (HMPV) and respiratory syncytial virus (RSV), while RT-rtPCR detected significantly more influenza A. We speculated that primer differences accounted for these discrepancies and redesigned the primers and probes for influenza A in the RMA assay, and for HMPV and RSV in the RT-rtPCR assay. The tests were then repeated and again compared. The new primers led to improved detection of HMPV and RSV by the RT-rtPCR assay, but the RMA assay remained similar in terms of influenza detection. Conclusions Given the absence of a gold standard, clinical and research laboratories should regularly correlate the results of molecular assays with other PCR-based assays, other laboratories, and with standard virologic methods to ensure consistency and accuracy.

  19. Comparison of three analytical methods for the determination of trace elements in whole blood

    International Nuclear Information System (INIS)

    Ward, N.I.; Stephens, R.; Ryan, D.E.

    1979-01-01

    Three different analytical techniques were compared in a study of the role of trace elements in multiple sclerosis. Data for eight elements (Cd, Co, Cr, Cu, Mg, Mn, Pb, Zn) from neutron activation, flame atomic absorption and electrothermal atomic absorption methods were compared and evaluated statistically. No difference (probability less than 0.001) was observed in the elemental values obtained. Comparison of data between suitably different analytical methods gives increased confidence in the results obtained and is of particular value when standard reference materials are not available. (Auth.)

  20. National comparison on volume sample activity measurement methods

    International Nuclear Information System (INIS)

    Sahagia, M.; Grigorescu, E.L.; Popescu, C.; Razdolescu, C.

    1992-01-01

    A national comparison on volume sample activity measurements methods may be regarded as a step toward accomplishing the traceability of the environmental and food chain activity measurements to national standards. For this purpose, the Radionuclide Metrology Laboratory has distributed 137 Cs and 134 Cs water-equivalent solid standard sources to 24 laboratories having responsibilities in this matter. Every laboratory has to measure the activity of the received source(s) by using its own standards, equipment and methods and report the obtained results to the organizer. The 'measured activities' will be compared with the 'true activities'. A final report will be issued, which plans to evaluate the national level of precision of such measurements and give some suggestions for improvement. (Author)

  1. Comparison of potential method in analytic hierarchy process for multi-attribute of catering service companies

    Science.gov (United States)

    Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah

    2017-08-01

    Analytic Hierarchy Process (AHP) is a method used in structuring, measuring and synthesizing criteria, in particular ranking of multiple criteria in decision making problems. The Potential Method, on the other hand, is a ranking procedure which utilizes a preference graph ς(V, A). Two nodes are adjacent if they are compared in a pairwise comparison, with the assigned arc oriented towards the more preferred node. In this paper, the Potential Method is used to solve a problem of catering service selection, and its results are compared with those of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
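
    For context, here is how a ranking is derived from an AHP pairwise-comparison matrix using the standard principal-eigenvector method. This sketches generic AHP, not the Potential Method or Extent Analysis themselves, and the comparison matrix is invented for illustration.

```python
import numpy as np

# a[i, j] = how strongly criterion i is preferred over criterion j
# (reciprocal matrix on Saaty's 1-9 scale; values are assumptions)
a = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = normalised principal eigenvector of the matrix
vals, vecs = np.linalg.eig(a)
principal = np.real(vecs[:, np.argmax(np.real(vals))])
weights = principal / principal.sum()

ranking = np.argsort(-weights)            # indices, best criterion first
print(ranking)                            # → [0 1 2]
```

The Potential Method instead works on the preference graph, solving a least-squares problem on arc preferences, but for consistent inputs both procedures should agree on the ranking, as the record reports.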

  2. A comparison of surveillance methods for small incidence rates

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Woodall, William H.; Reynolds, Marion R.

    2008-05-15

    A number of methods have been proposed to detect an increasing shift in the incidence rate of a rare health event, such as a congenital malformation. Among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the performance of the Sets method and its modifications to the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criterion. We used the steady-state average run length to measure chart performance instead of the average run length, which was used in nearly all previous comparisons involving the Sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state average run length performance than the Sets method and its modifications for the extensive number of cases considered.
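
    The Bernoulli CUSUM chart discussed in this record accumulates a log-likelihood-ratio score over the Bernoulli trials and signals when the statistic crosses a threshold. A minimal sketch of that standard form; the in-control rate p0, out-of-control rate p1, and threshold h are illustrative assumptions, not the paper's design values.

```python
import math

def bernoulli_cusum(xs, p0=0.001, p1=0.002, h=4.0):
    """Return the first index at which the chart signals, or None."""
    up = math.log(p1 / p0)                # increment when the event occurs
    down = math.log((1 - p1) / (1 - p0))  # (small negative) increment otherwise
    s = 0.0
    for i, x in enumerate(xs):
        s = max(0.0, s + (up if x else down))
        if s >= h:
            return i
    return None

# A long event-free stretch followed by a burst of events triggers a signal.
stream = [0] * 500 + [1, 0, 1, 1, 0, 1, 1, 1]
print(bernoulli_cusum(stream))            # → 507
```

In practice h would be chosen to hit a target in-control average run length, e.g. via simulation or Markov-chain approximation.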

  3. Comparison Among Methods of Retinopathy Assessment (CAMRA) Study: Smartphone, Nonmydriatic, and Mydriatic Photography.

    Science.gov (United States)

    Ryan, Martha E; Rajalakshmi, Ramachandran; Prathiba, Vijayaraghavan; Anjana, Ranjit Mohan; Ranjani, Harish; Narayan, K M Venkat; Olsen, Timothy W; Mohan, Viswanathan; Ward, Laura A; Lynn, Michael J; Hendrick, Andrew M

    2015-10-01

    We compared smartphone fundus photography, nonmydriatic fundus photography, and 7-field mydriatic fundus photography for their abilities to detect and grade diabetic retinopathy (DR). This was a prospective, comparative study of 3 photography modalities. Diabetic patients (n = 300) were recruited at the ophthalmology clinic of a tertiary diabetes care center in Chennai, India. Patients underwent photography by all 3 modalities, and photographs were evaluated by 2 retina specialists. The sensitivity and specificity in the detection of DR for both smartphone and nonmydriatic photography were determined by comparison with the standard method, 7-field mydriatic fundus photography. The sensitivity and specificity of smartphone fundus photography, compared with 7-field mydriatic fundus photography, for the detection of any DR were 50% (95% confidence interval [CI], 43-56) and 94% (95% CI, 92-97), respectively, and of nonmydriatic fundus photography were 81% (95% CI, 75-86) and 94% (95% CI, 92-96%), respectively. The sensitivity and specificity of smartphone fundus photography for the detection of vision-threatening DR were 59% (95% CI, 46-72) and 100% (95% CI, 99-100), respectively, and of nonmydriatic fundus photography were 54% (95% CI, 40-67) and 99% (95% CI, 98-100), respectively. Smartphone and nonmydriatic fundus photography are each able to detect DR and sight-threatening disease. However, the nonmydriatic camera is more sensitive at detecting DR than the smartphone. At this time, the benefits of the smartphone (connectivity, portability, and reduced cost) are not offset by the lack of sufficient sensitivity for detection of DR in most clinical circumstances. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
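
    The sensitivity and specificity figures in this record come from 2x2 comparisons against the reference standard (7-field mydriatic photography). A small sketch of that computation; the counts are invented to roughly reproduce the reported nonmydriatic any-DR figures, not the study's actual tabulated data.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table vs a reference standard."""
    sensitivity = tp / (tp + fn)   # detected / all truly positive
    specificity = tn / (tn + fp)   # ruled out / all truly negative
    return sensitivity, specificity

# Hypothetical counts: screening modality vs 7-field mydriatic photography
sens, spec = sens_spec(tp=81, fn=19, tn=94, fp=6)
print(round(sens, 2), round(spec, 2))   # → 0.81 0.94
```

The study's confidence intervals would additionally come from a binomial interval around each proportion.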

  4. A comparison of in vivo and in vitro methods for determining availability of iron from meals

    International Nuclear Information System (INIS)

    Schricker, B.R.; Miller, D.D.; Rasmussen, R.R.; Van Campen, D.

    1981-01-01

    A comparison is made between in vitro and human and rat in vivo methods for estimating food iron availability. Complex meals formulated to replicate meals used by Cook and Monsen (Am J Clin Nutr 1976;29:859) in human iron availability trials were used in the comparison. The meals were prepared by substituting pork, fish, cheese, egg, liver, or chicken for beef in two basic test meals and were evaluated for iron availability using in vitro and rat in vivo methods. When the criterion for comparison was the ability to show statistically significant differences between iron availability in the various meals, there was substantial agreement between the in vitro and human in vivo methods. There was less agreement between the human in vivo and the rat in vivo methods, and between the in vitro and the rat in vivo methods. Correlation analysis indicated significant agreement between the in vitro and human in vivo methods. Correlations between the rat in vivo and human in vivo methods were also significant, but correlations between the in vitro and rat in vivo methods were less significant and, in some cases, not significant. The comparison supports the contention that the in vitro method allows a rapid, inexpensive, and accurate estimation of nonheme iron availability in complex meals

  5. A method for statistical comparison of data sets and its uses in analysis of nuclear physics data

    International Nuclear Information System (INIS)

    Bityukov, S.I.; Smirnova, V.V.; Krasnikov, N.V.; Maksimushkina, A.V.; Nikitenko, A.N.

    2014-01-01

    The authors propose a method for the statistical comparison of two data sets. The method is based on the statistical comparison of histograms. As an estimator of the quality of the decision made, it is proposed to use a value that can be interpreted as the probability that the decision (that the data sets differ) is correct.
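
    The record is terse about the estimator itself. As a generic illustration of comparing two data sets via their histograms, here is a chi-square homogeneity test on two binned samples; this is a standard substitute, not the authors' specific method, and the bin counts are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

h1 = np.array([12, 30, 45, 30, 13])    # histogram of data set 1 (5 bins)
h2 = np.array([10, 28, 50, 28, 14])    # histogram of data set 2 (same bins)

# Test the null hypothesis that both histograms come from the same
# underlying distribution.
chi2, p, dof, expected = chi2_contingency(np.vstack([h1, h2]))
print(p > 0.05)                        # similar histograms: no rejection
```

A decision-quality estimator like the one described could then be built from the distribution of such a statistic under repeated sampling.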

  6. The choice of statistical methods for comparisons of dosimetric data in radiotherapy

    International Nuclear Information System (INIS)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-01-01

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman’s test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman’s rank and Kendall’s rank tests. Friedman’s test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses as compared to PBC, by on average (−5 ± 4.4 SD) for MB and (−4.7 ± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method.
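
    The statistical workflow this record describes (normality and variance checks, then non-parametric paired tests and rank correlations) maps directly onto scipy. A sketch under stated assumptions: the three dose arrays below are simulated stand-ins for the per-field monitor-unit doses, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pbc = rng.normal(100.0, 10.0, 62)            # reference doses, 62 fields
mb = pbc * 0.95 + rng.normal(0, 1.0, 62)     # density-corrected, ~5% lower
etar = pbc * 0.953 + rng.normal(0, 1.0, 62)  # density-corrected, ~4.7% lower

# Normality of the data and homogeneity of variance between groups
_, p_norm = stats.shapiro(pbc)
_, p_var = stats.levene(pbc, mb, etar)

# Non-parametric comparison of the three paired dose series
_, p_friedman = stats.friedmanchisquare(pbc, mb, etar)

# Post-hoc paired comparison: reference vs one corrected method
_, p_wilcoxon = stats.wilcoxon(pbc, mb)

# Rank correlation between methods
rho, _ = stats.spearmanr(pbc, mb)
tau, _ = stats.kendalltau(pbc, mb)
```

With a systematic ~5% dose reduction across 62 paired fields, both the Friedman and Wilcoxon p-values come out far below 0.001 and the rank correlations are strongly positive, mirroring the record's conclusions.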

  7. On solutions of stochastic oscillatory quadratic nonlinear equations using different techniques, a comparison study

    International Nuclear Information System (INIS)

    El-Tawil, M A; Al-Jihany, A S

    2008-01-01

    In this paper, nonlinear oscillators under quadratic nonlinearity with stochastic inputs are considered. Different methods are used to obtain first-order approximations, namely, the WHEP technique, the perturbation method, the Picard approximations, the Adomian decomposition and the homotopy perturbation method (HPM). Some statistical moments are computed for the different methods using Mathematica 5. Comparisons are illustrated through figures for different case studies

  8. Comparison of methods and instruments for 222Rn/220Rn progeny measurement

    International Nuclear Information System (INIS)

    Liu Yanyang; Shang Bing; Wu Yunyun; Zhou Qingzhi

    2012-01-01

    In this paper, comparisons were made among three measurement methods (grab measurement, continuous measurement and integrating measurement) and among measurements by different instruments in a radon/thoron mixed chamber. Taking the optimized five-segment method as the comparison criterion, for the equilibrium-equivalent concentration of 222Rn, the measured results of the BWLM and 24 h integrating detectors are 31% and 29% higher than the criterion, whereas the results of the WLx are 20% lower; for 220Rn progeny, the results of the Fiji-142, Kf-602D, BWLM and 24 h integrating detector are 86%, 18%, 28% and 36% higher than the criterion, respectively, except that of the WLx, which is 5% lower. For the differences shown, further research is needed. (authors)

  9. Methods to assess intended effects of drug treatment in observational studies are reviewed

    NARCIS (Netherlands)

    Klungel, Olaf H; Martens, Edwin P; Psaty, Bruce M; Grobbee, Diederik E; Sullivan, Sean D; Stricker, Bruno H Ch; Leufkens, Hubert G M; de Boer, A

    2004-01-01

    BACKGROUND AND OBJECTIVE: To review methods that seek to adjust for confounding in observational studies when assessing intended drug effects. METHODS: We reviewed the statistical, economical and medical literature on the development, comparison and use of methods adjusting for confounding. RESULTS:

  10. Comparison of three retail data communication and transmission methods

    Directory of Open Access Journals (Sweden)

    MA Yue

    2016-04-01

    Full Text Available With the rapid development of retail trade, the type and complexity of data keep increasing, and individual data file sizes differ greatly from one another. How to realize accurate, real-time and efficient data transmission at a fixed cost is an important problem. Regarding the problem of effective transmission of business data files, this article analyses and compares three existing data transmission methods and, considering requirements such as the functions of an enterprise data communication system, concludes which method can best meet daily business development requirements while offering good extensibility.

  11. Comparison of continuum and atomistic methods for the analysis of InAs/GaAs quantum dots

    DEFF Research Database (Denmark)

    Barettin, D.; Pecchia, A.; Penazzi, G.

    2011-01-01

    We present a comparison of continuum k · p and atomistic empirical Tight Binding methods for the analysis of the optoelectronic properties of InAs/GaAs quantum dots.

  12. A comparison of visual and quantitative methods to identify interstitial lung abnormalities

    OpenAIRE

    Kliment, Corrine R.; Araki, Tetsuro; Doyle, Tracy J.; Gao, Wei; Dupuis, Josée; Latourelle, Jeanne C.; Zazueta, Oscar E.; Fernandez, Isis E.; Nishino, Mizuki; Okajima, Yuka; Ross, James C.; Estépar, Raúl San José; Diaz, Alejandro A.; Lederer, David J.; Schwartz, David A.

    2015-01-01

    Background: Evidence suggests that individuals with interstitial lung abnormalities (ILA) on a chest computed tomogram (CT) may have an increased risk to develop a clinically significant interstitial lung disease (ILD). Although methods used to identify individuals with ILA on chest CT have included both automated quantitative and qualitative visual inspection methods, there has been no direct comparison between these two methods. To investigate this relationship, we created lung density met...

  13. Comparison of three methods of restoration of cosmic radio source profiles

    International Nuclear Information System (INIS)

    Malov, I.F.; Frolov, V.A.

    1986-01-01

    The effectiveness of three methods for the restoration of the radio brightness distribution over a source, namely the main solution, fitting, and the minimal-phase method (MPM), was compared on the basis of data on the modulus and phase of the luminosity function (LF) of 15 cosmic radio sources. It is concluded that the MPM can successfully compete with the other known methods. Its obvious advantages in comparison with the fitting method are that it gives an unambiguous and direct restoration; its main advantage as compared with the main solution is the feasibility of restoration in the absence of data on the LF phase, which reduces restoration errors

  14. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2008-01-01

    In general there are two ways to calculate effective doses. The first is by use of deterministic methods such as the point kernel method, which is implemented in Visiplan or Microshield. These kinds of calculations are very fast, but they are not very suitable, in terms of result precision, for a complex geometry with shielding composed of more than one material. Nevertheless, these programs are sufficient for ALARA optimisation calculations. On the other side there are Monte Carlo methods, which are quite precise in comparison with reality, but the calculation time is usually very long. Deterministic-method programs have one disadvantage: usually there is an option to choose the buildup factor (BUF) for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas have been proposed for multilayer BUF approximation. The aim of this paper was to examine these different formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. For buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it shows lower deviations in comparison with Taylor fitting. (authors)

  15. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2009-01-01

    In general there are two ways to calculate effective doses. The first is by use of deterministic methods such as the point kernel method, which is implemented in Visiplan or Microshield. These kinds of calculations are very fast, but they are not very suitable, in terms of result precision, for a complex geometry with shielding composed of more than one material. Nevertheless, these programs are sufficient for ALARA optimisation calculations. On the other side there are Monte Carlo methods, which are quite precise in comparison with reality, but the calculation time is usually very long. Deterministic-method programs have one disadvantage: usually there is an option to choose the buildup factor (BUF) for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas have been proposed for multilayer BUF approximation. The aim of this paper was to examine these different formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. For buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it shows lower deviations in comparison with Taylor fitting. (authors)

  16. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
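    Of the four sampling methods compared, Lexicase Selection has the most compact description: a parent is chosen by streaming the fitness cases in random order and discarding any candidate that is not elite on the current case. A minimal sketch, with a hypothetical population and error matrix (lower error is better):

```python
import random

def lexicase_select(population, errors, rng):
    """Pick one parent by lexicase selection.

    errors[i][j] is the error of individual i on fitness case j.
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)                  # new random case order per selection event
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

# 'a' is elite on case 0, 'b' on case 1; 'c' is elite on neither
pop = ["a", "b", "c"]
errs = [[0, 5], [5, 0], [5, 5]]
winner = lexicase_select(pop, errs, random.Random(42))
```

Whichever case comes first in the shuffled order decides between 'a' and 'b'; 'c' can never win, which is the case-by-case filtering pressure the method exploits.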

  17. Procedures, analysis, and comparison of groundwater velocity measurement methods for unconfined aquifers

    International Nuclear Information System (INIS)

    Kearl, P.M.; Dexter, J.J.; Price, J.E.

    1988-09-01

    Six methods for determining the average linear velocity of groundwater were tested at two separate field sites. The methods tested include bail tests, pumping tests, wave propagation, tracer tests, Geoflo Meter®, and borehole dilution. This report presents procedures for performing field tests and compares the results of each method on the basis of application, cost, and accuracy. Comparisons of methods to determine the groundwater velocity at two field sites show certain methods yield similar results while other methods measure significantly different values. The literature clearly supports the reliability of pumping tests for determining hydraulic conductivity. Results of this investigation support this finding. Pumping tests, however, are limited because they measure an average hydraulic conductivity which is only representative of the aquifer within the radius of influence. Bail tests are easy and inexpensive to perform. If the tests are conducted on the majority of wells at a hazardous waste site, then the heterogeneity of the site aquifer can be assessed. However, comparisons of bail-test results with pumping-test and tracer-test results indicate that the accuracy of the method is questionable. Consequently, the principal recommendation of this investigation, based on cost and reliability of the groundwater velocity measurement methods, is that bail tests should be performed on all or a majority of monitoring wells at a site to determine the ''relative'' hydraulic conductivities.

  18. A review of methods of analysis in contouring studies for radiation oncology

    International Nuclear Information System (INIS)

    Jameson, Michael G.; Holloway, Lois C.; Metcalfe, Peter E.; Vial, Philip J.; Vinod, Shalini K.

    2010-01-01

    Full text: Inter-observer variability in anatomical contouring is the biggest contributor to uncertainty in radiation treatment planning. Contouring studies are frequently performed to investigate the differences between multiple contours on common datasets. There is, however, no widely accepted method for contour comparisons. The purpose of this study is to review the literature on contouring studies in the context of radiation oncology, with particular consideration of the contouring comparison methods they employ. A literature search, not limited by date, was conducted using Medline and Google Scholar with key words: contour, variation, delineation, inter/intra observer, uncertainty and trial dummy-run. This review includes a description of the contouring processes and contour comparison metrics used. The use of different processes and metrics according to tumour site and other factors was also investigated, with limitations described. A total of 69 relevant studies were identified. The most common tumour sites were prostate (26), lung (10), head and neck cancers (8) and breast (7). The most common metric of comparison was volume, used 59 times, followed by dimension and shape, used 36 times, and centre of volume, used 19 times. Of all 69 publications, 67 used a combination of metrics and two used only one metric for comparison. No clear relationships between tumour site, or any other factors that may influence the contouring process, and the metrics used to compare contours were observed from the literature. Further studies are needed to assess the advantages and disadvantages of each metric in various situations.

  19. Assessment of interchangeability rate between 2 methods of measurements: An example with a cardiac output comparison study.

    Science.gov (United States)

    Lorne, Emmanuel; Diouf, Momar; de Wilde, Robert B P; Fischer, Marc-Olivier

    2018-02-01

    The Bland-Altman (BA) and percentage error (PE) methods have been previously described to assess the agreement between 2 methods of medical or laboratory measurements. This type of approach raises several problems: the BA methodology constitutes a subjective approach to interchangeability, whereas the PE approach does not take into account the distribution of values over a range. We describe a new methodology that defines an interchangeability rate between 2 methods of measurement and cutoff values that determine the range of interchangeable values. We used simulated data and a previously published data set to demonstrate the concept of the method. The interchangeability rate of 5 different cardiac output (CO) pulse contour techniques (Wesseling method, LiDCO, PiCCO, Hemac method, and Modelflow) was calculated, in comparison with the reference pulmonary artery thermodilution CO, using our new method. In our example, Modelflow, with a good interchangeability rate of 93% and a cutoff value of 4.8 L/min, was found to be interchangeable with the thermodilution method for >95% of measurements. Modelflow had a higher interchangeability rate compared to Hemac (93% vs 86%; P = .022) or other monitors (Wesseling cZ = 76%, LiDCO = 73%, and PiCCO = 62%; P < .0001). Simulated data and reanalysis of a data set comparing 5 CO monitors against thermodilution CO showed that, depending on the repeatability of the reference method, the interchangeability rate combined with a cutoff value could be used to define the range of values over which interchangeability remains acceptable.
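    The two quantities this new approach builds on can be computed directly from paired readings. Below is a minimal sketch of Bland-Altman bias/limits of agreement and the percentage error; the paired cardiac output values are invented for illustration and are not the study's data:

```python
import numpy as np

def bland_altman(ref, test):
    """Bland-Altman bias, 95% limits of agreement, and percentage error."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    diff = test - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    # Percentage error: 1.96*SD of the differences over the mean reference CO
    pe = 100 * 1.96 * sd / ref.mean()
    return bias, loa, pe

# Hypothetical paired cardiac output readings (L/min)
td = [4.0, 5.2, 6.1, 4.8, 5.5]   # thermodilution (reference)
pc = [4.3, 5.0, 6.4, 4.6, 5.9]   # pulse contour (test)
bias, loa, pe = bland_altman(td, pc)
```

With these made-up numbers the bias is 0.12 L/min and the percentage error is about 11%, well under the commonly quoted 30% acceptance threshold.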

  20. Comparison of two dietary assessment methods by food consumption: results of the German National Nutrition Survey II.

    Science.gov (United States)

    Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid

    2015-04-01

    To further characterise the performance of the diet history method and the 24-h recall method, both in an updated version, a comparison was conducted. The National Nutrition Survey II, representative for Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80. Besides calculating mean differences, statistical agreement measures encompass Spearman and intraclass correlation coefficients, ranking of participants in quartiles, and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was assessed as higher with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small one, while there was basically no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and weakest for vegetables. The need of the diet history method to remember consumption of the past 4 weeks may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recall method applied in the presented study offers advantages in approximating food consumption as compared to the diet history method.

  1. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether the application of methods aimed at adjusting for the multiple testing problem is needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the numbers of signals generated from the conventional disproportionality measures and after the application of the rFDR adjustment method were compared. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant by the conventional analysis. Among them, 1 resulted a false positive signal according to the rFDR method. Conclusions. The results of this study seem to suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as suggested by theoretical considerations.
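    The rFDR estimator itself is specific to the paper, but the flavour of FDR-based signal screening can be illustrated with the standard Benjamini-Hochberg step-up procedure. The p-values below are invented drug-event disproportionality p-values, not SAFEGUARD data:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return sorted indices of hypotheses rejected at FDR level q (BH step-up)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank i (1-based) with p_(i) <= i/m * q
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * q:
            k = rank
    return sorted(order[:k])

# Hypothetical p-values for 8 drug-event pairs
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
flagged = benjamini_hochberg(p, q=0.05)
```

Only the first two pairs survive the adjustment here, mirroring the paper's point that raw disproportionality screens flag many associations that FDR control would discard.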

  2. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2010-01-01

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods

  3. A comparative study of different methods for calculating electronic transition rates

    Science.gov (United States)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.

  4. Comparison of several methods of sires evaluation for total milk ...

    African Journals Online (AJOL)

    2015-01-24

    Comparison of several methods of sires evaluation for total milk yield in a herd of Holstein cows in Yemen. F.R. Al-Samarai1,*, Y.K. Abdulrahman1, F.A. Mohammed2, F.H. Al-Zaidi2 and N.N. Al-Anbari3. 1Department of Veterinary Public Health/College of Veterinary Medicine, University of Baghdad, Iraq.

  5. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed

  6. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  7. Comparison between two methods of methanol production from carbon dioxide

    International Nuclear Information System (INIS)

    Anicic, B.; Trop, P.; Goricanec, D.

    2014-01-01

    Over recent years there has been a significant increase in the amount of technology contributing to lower emissions of carbon dioxide. The aim of this paper is to provide a comparison between two technologies for methanol production, both of which use carbon dioxide and hydrogen as the initial raw materials. The first methanol production technology is the direct synthesis of methanol from CO2; the second has two steps: during the first step CO2 is converted into CO via the RWGS (reverse water gas shift) reaction, and methanol is produced during the second step. The comparison between these two methods was made on economic and energy-efficiency bases. The price of electricity had the greatest impact from the economic point of view, as hydrogen is produced via the electrolysis of water. Furthermore, both the cost of CO2 capture and the amount of carbon taxes were taken into consideration. The energy-efficiency comparison is based on cold gas efficiency, while economic feasibility is compared using net present value. Even though the two processes are similar, it was shown that direct methanol synthesis has higher energy and economic efficiency. - Highlights: • We compared two methods for methanol production. • Process schemes for both direct synthesis and two-step synthesis are described. • Direct synthesis has higher economic and energy efficiency
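    Net present value, the economic yardstick used above, discounts each year's cash flow back to the investment date. A minimal sketch with invented cash flows for the two routes (the figures are placeholders, not the paper's data):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t=0 (the investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical plants: initial investment, then equal yearly net revenue (M EUR)
direct   = [-100] + [18] * 10   # direct CO2-to-methanol synthesis
two_step = [-120] + [20] * 10   # RWGS step followed by methanol synthesis

npv_direct = npv(0.08, direct)
npv_two_step = npv(0.08, two_step)
```

Under these assumed numbers both routes are profitable at an 8% discount rate, with the direct route ahead, which is the qualitative outcome the abstract reports.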

  8. Comparison of 3 in vivo methods for assessment of alcohol-based hand rubs.

    Science.gov (United States)

    Edmonds-Wilson, Sarah; Campbell, Esther; Fox, Kyle; Macinga, David

    2015-05-01

    Alcohol-based hand rubs (ABHRs) are the primary method of hand hygiene in health-care settings. ICPs increasingly are assessing ABHR product efficacy data as improved products and test methods are developed. As a result, ICPs need better tools and recommendations for how to assess and compare ABHRs. Two ABHRs (70% ethanol) were tested according to 3 in vivo methods approved by ASTM International: E1174, E2755, and E2784. Log10 reductions were measured after a single test product use and after 10 consecutive uses at an application volume of 2 mL. The test method used had a significant influence on ABHR efficacy; however, in this study the test product (gel or foam) did not significantly influence efficacy. In addition, for all test methods, log10 reductions obtained after a single application were not predictive of results after 10 applications. Choice of test method can significantly influence efficacy results. Therefore, when assessing antimicrobial efficacy data of hand hygiene products, ICPs should pay close attention to the test method used, and ensure that product comparisons are made head to head in the same study using the same test methodology. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
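    The efficacy endpoint in all three ASTM methods is a log10 reduction from a baseline count, which is just a difference of logarithms. A minimal sketch with illustrative CFU counts (not the study's data):

```python
import math

def log10_reduction(baseline_cfu, post_cfu):
    """Log10 reduction in recovered organisms after product use."""
    return math.log10(baseline_cfu) - math.log10(post_cfu)

# Hypothetical plate counts before and after a single application
reduction = log10_reduction(1.0e7, 2.5e4)
```

A drop from 10^7 to 2.5 × 10^4 CFU corresponds to a reduction of about 2.6 log10, the scale on which products and applications are compared in the study.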

  9. Sampling for ants in different-aged spruce forests: A comparison of methods

    Czech Academy of Sciences Publication Activity Database

    Véle, A.; Holuša, J.; Frouz, Jan

    2009-01-01

    Roč. 45, č. 4 (2009), s. 301-305 ISSN 1164-5563 Institutional research plan: CEZ:AV0Z60660521 Keywords : ants * baits * methods comparison Subject RIV: EH - Ecology, Behaviour Impact factor: 1.247, year: 2009

  10. Attenuation correction for hybrid MR/PET scanners: a comparison study

    Energy Technology Data Exchange (ETDEWEB)

    Rota Kops, Elena [Forschungszentrum Jülich GmbH, Jülich (Germany); Ribeiro, Andre Santos [Imperial College London, London (United Kingdom); Caldeira, Liliana [Forschungszentrum Jülich GmbH, Jülich (Germany); Hautzel, Hubertus [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lukas, Mathias [Technische Universitaet Muenchen, Munich (Germany); Antoch, Gerald [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lerche, Christoph; Shah, Jon [Forschungszentrum Jülich GmbH, Jülich (Germany)

    2015-05-18

    Attenuation correction of PET data acquired in hybrid MR/PET scanners is still a challenge. Different methods have been adopted by several groups to obtain reliable attenuation maps (mu-maps). In this study we compare three methods: MGH, UCL, Neural-Network. The MGH method is based on an MR/CT template obtained with the SPM8 software. The UCL method uses a database of MR/CT pairs. Both generate mu-maps from MP-RAGE images. The feed-forward neural-network from Juelich (NN-Juelich) requires two UTE images; it generates segmented mu-maps. Data from eight subjects (S1-S8) measured in the Siemens 3T MR-BrainPET scanner were used. Corresponding CT images were acquired. The resulting mu-maps were compared against the CT-based mu-maps for each subject and method. Overlapped voxels and Dice similarity coefficients, D, for bone, soft-tissue and air regions, and relative difference images were calculated. The true positive (TP) recognized voxels for the whole head were 79.9% (NN-Juelich, S7) to 92.1% (UCL method, S1). D values of the bone were D=0.65 (NN-Juelich, S1) to D=0.87 (UCL method, S1). For S8 the MGH method failed (TP=76.4%; D=0.46 for bone). D values shared a common tendency in all subjects and methods to recognize soft-tissue as bone. The relative difference images showed a variation of -10.9% to +10.1%; for S8 and the MGH method the values were -24.5% and +14.2%. A preliminary comparison of three methods for generation of mu-maps for MR/PET scanners is presented. The continuous methods (MGH, UCL) seem to generate reliable mu-maps, whilst the binary method seems to need further improvement. Future work will include more subjects, the reconstruction of corresponding PET data and their comparison.
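    The Dice similarity coefficient D reported above for bone, soft-tissue and air regions is twice the overlap of two segmentations divided by their total size. A minimal sketch on toy binary masks (the masks are illustrative, not the study's mu-maps):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient D between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 "bone" masks from two segmentation methods (1 = bone voxel)
mask_ct = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
mask_mr = [[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
d = dice_coefficient(mask_ct, mask_mr)
```

Here 3 voxels overlap out of 4 + 3 labelled, giving D = 6/7 ≈ 0.86, i.e. in the range the study reports for bone.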

  11. Attenuation correction for hybrid MR/PET scanners: a comparison study

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Ribeiro, Andre Santos; Caldeira, Liliana; Hautzel, Hubertus; Lukas, Mathias; Antoch, Gerald; Lerche, Christoph; Shah, Jon

    2015-01-01

    Attenuation correction of PET data acquired in hybrid MR/PET scanners is still a challenge. Different methods have been adopted by several groups to obtain reliable attenuation maps (mu-maps). In this study we compare three methods: MGH, UCL, Neural-Network. The MGH method is based on an MR/CT template obtained with the SPM8 software. The UCL method uses a database of MR/CT pairs. Both generate mu-maps from MP-RAGE images. The feed-forward neural-network from Juelich (NN-Juelich) requires two UTE images; it generates segmented mu-maps. Data from eight subjects (S1-S8) measured in the Siemens 3T MR-BrainPET scanner were used. Corresponding CT images were acquired. The resulting mu-maps were compared against the CT-based mu-maps for each subject and method. Overlapped voxels and Dice similarity coefficients, D, for bone, soft-tissue and air regions, and relative difference images were calculated. The true positive (TP) recognized voxels for the whole head were 79.9% (NN-Juelich, S7) to 92.1% (UCL method, S1). D values of the bone were D=0.65 (NN-Juelich, S1) to D=0.87 (UCL method, S1). For S8 the MGH method failed (TP=76.4%; D=0.46 for bone). D values shared a common tendency in all subjects and methods to recognize soft-tissue as bone. The relative difference images showed a variation of -10.9% to +10.1%; for S8 and the MGH method the values were -24.5% and +14.2%. A preliminary comparison of three methods for generation of mu-maps for MR/PET scanners is presented. The continuous methods (MGH, UCL) seem to generate reliable mu-maps, whilst the binary method seems to need further improvement. Future work will include more subjects, the reconstruction of corresponding PET data and their comparison.

  12. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    Science.gov (United States)

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  13. Reduction in the Number of Comparisons Required to Create Matrix of Expert Judgment in the Comet Method

    Directory of Open Access Journals (Sweden)

    Sałabun Wojciech

    2014-09-01

    Full Text Available Multi-criteria decision-making (MCDM) methods are associated with the ranking of alternatives based on expert judgments made using a number of criteria. In the MCDM field, the distance-based approach is one popular method for obtaining a final ranking. One of the newest MCDM methods, which uses the distance-based approach, is the Characteristic Objects Method (COMET). In this method, the preferences of each alternative are obtained on the basis of the distance from the nearest characteristic objects and their values. For this purpose, the domain and fuzzy number sets for all the considered criteria are determined. The characteristic objects are obtained as the combinations of the crisp values of all the fuzzy numbers. The preference values of all the characteristic objects are determined based on the tournament method and the principle of indifference. Finally, the fuzzy model is constructed and is used to calculate preference values of the alternatives. In this way, a multi-criteria model is created and it is free of the rank reversal phenomenon. In this approach, a matrix of expert judgment is necessary; for this purpose, an expert has to compare all the characteristic objects with each other. The number of necessary comparisons grows quadratically with the number of objects. This study proposes an improvement of the COMET method that uses the transitivity of pairwise comparisons. Three numerical examples are used to illustrate the efficiency of the proposed improvement with respect to results from the original approach. The proposed improvement significantly reduces the number of comparisons necessary to create the matrix of expert judgment.
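    The saving from transitivity can be made concrete: a full pairwise matrix over n objects needs n(n-1)/2 expert judgments, whereas if judgments are transitive, a ranking can be built by binary insertion with far fewer comparisons. This is a generic sketch of that idea, not the paper's specific algorithm; the comparator plays the role of the expert:

```python
def rank_with_transitivity(objects, prefer):
    """Rank objects by binary insertion, counting comparator (expert) calls.

    prefer(a, b) returns True if a is judged preferable to b; transitivity
    is assumed, which is what spares us the full pairwise matrix.
    """
    calls = 0
    def judged(a, b):
        nonlocal calls
        calls += 1
        return prefer(a, b)
    ranked = []                       # most-preferred first
    for obj in objects:
        lo, hi = 0, len(ranked)
        while lo < hi:                # binary search for the insertion slot
            mid = (lo + hi) // 2
            if judged(obj, ranked[mid]):
                hi = mid
            else:
                lo = mid + 1
        ranked.insert(lo, obj)
    return ranked, calls

objs = list(range(20))                          # 20 hypothetical objects
ranked, used = rank_with_transitivity(objs, lambda a, b: a > b)
full = len(objs) * (len(objs) - 1) // 2         # 190 judgments for the full matrix
```

For 20 objects the insertion ranking needs a few dozen judgments instead of 190, which is the kind of reduction the improvement aims at.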

  14. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimension. The research is carried out on objects from the MSTAR radar image base. The paper presents the results of the conducted studies.
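    Orientation normalization by image moments, one of the two preprocessing options mentioned, recovers the object's dominant angle from second-order central moments; rotating by its negative aligns the object. A minimal sketch of the angle estimate on an illustrative binary image:

```python
import numpy as np

def orientation_angle(img):
    """Dominant orientation (radians) of a binary image via central moments."""
    ys, xs = np.nonzero(img)                  # coordinates of foreground pixels
    xbar, ybar = xs.mean(), ys.mean()
    mu20 = ((xs - xbar) ** 2).mean()          # second-order central moments
    mu02 = ((ys - ybar) ** 2).mean()
    mu11 = ((xs - xbar) * (ys - ybar)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# A diagonal line of pixels has orientation pi/4
angle = orientation_angle(np.eye(10))
```

A rotation of the image by -angle would then bring every object to a canonical orientation before feature extraction and classification.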

  15. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB
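    The NN-QPPQ idea rests on a simple transform: each donor-gage flow is converted to its exceedance probability and then read back off the target site's flow duration curve. A minimal sketch with a made-up 5-day record and a target FDC tabulated at matching probabilities (assumes no tied flows):

```python
import numpy as np

def qppq(donor_flows, target_fdc_quantiles, probs):
    """Map donor daily flows to a target site through flow duration curves.

    probs: exceedance probabilities at which the target FDC is tabulated
    (ascending), with target_fdc_quantiles the corresponding flows (descending).
    """
    donor = np.asarray(donor_flows, float)
    # Empirical exceedance probability of each donor flow (Weibull position);
    # the double argsort assigns rank 1 to the largest flow (no ties assumed)
    ranks = donor.argsort()[::-1].argsort() + 1
    p = ranks / (donor.size + 1)
    # Read the target flow at each probability off the target FDC
    return np.interp(p, probs, target_fdc_quantiles)

donor = [10, 50, 30, 20, 40]                      # made-up donor record
probs = np.array([1, 2, 3, 4, 5]) / 6             # exceedance probabilities
target_q = np.array([100.0, 80.0, 60.0, 40.0, 20.0])
est = qppq(donor, target_q, probs)
```

The donor record's day-to-day sequencing is preserved while the flow magnitudes are rescaled to the target site's regime, which is what makes the method attractive for ungaged basins.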

  16. Comparison of Chemical Extraction Methods for Determination of Soil Potassium in Different Soil Types

    Science.gov (United States)

    Zebec, V.; Rastija, D.; Lončarić, Z.; Bensa, A.; Popović, B.; Ivezić, V.

    2017-12-01

Determining the potassium supply of soil plays an important role in intensive crop production, since it is the basis for balancing nutrients and issuing fertilizer recommendations for achieving high and stable yields within economic feasibility. The aim of this study was to compare different methods of extracting soil potassium from the arable horizon of different soil types against the ammonium lactate method (KAL), which is frequently used as an analytical method for determining the accessibility of nutrients and is commonly used for issuing fertilizer recommendations in many European countries. In addition to the ammonium lactate method (KAL, pH 3.75), potassium was extracted with ammonium acetate (KAA, pH 7), ammonium acetate ethylenediaminetetraacetic acid (KAAEDTA, pH 4.6), Bray (KBRAY, pH 2.6) and barium chloride (KBaCl2, pH 8.1). The analyzed soils were extremely heterogeneous, with a wide range of determined values. Soil pH (pHH2O) ranged from 4.77 to 8.75, organic matter content from 1.87 to 4.94% and clay content from 8.03 to 37.07%. Relative to the KAL standard method, the KBaCl2 method extracts on average 12.9% more soil potassium, while KAA, KAAEDTA and KBRAY extract on average 5.3%, 10.3% and 27.5% less potassium, respectively. Comparisons among the analyzed potassium extraction methods were of high precision; the most reliable comparison with the KAL method was KAAEDTA, followed by KAA, KBaCl2 and KBRAY. The highly significant statistical correlations between the different extraction methods for determining soil potassium indicate that any of the methods can be used to accurately predict the concentration of potassium in the soil, and that the research carried out can be used to create prediction models for potassium concentration based on the different extraction methods.
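A prediction model of the kind the authors describe, converting a concentration measured by one extractant into the expected value for another, is essentially a simple linear regression. A minimal sketch; the paired measurements are invented for illustration, not the study's data:

```python
def fit_line(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# hypothetical paired measurements (mg K per 100 g soil)
k_aaedta = [10.0, 18.0, 25.0, 33.0, 41.0]   # invented KAAEDTA values
k_al     = [11.0, 20.0, 28.0, 37.0, 45.0]   # invented KAL values
a, b = fit_line(k_aaedta, k_al)
k_al_predicted = [a + b * x for x in k_aaedta]
```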

  17. Comparison of the surface friction model with the time-dependent Hartree-Fock method

    International Nuclear Information System (INIS)

    Froebrich, P.

    1984-01-01

A comparison is made between the classical phenomenological surface friction model and a time-dependent Hartree-Fock study by Dhar for the system 208Pb + 74Ge at E_lab(Pb) = 1600 MeV. The general trends for energy loss, mean values for charge and mass, interaction times and energy-angle correlations turn out to be fairly similar in both methods. However, contrary to Dhar, the events close to capture are interpreted as normal deep-inelastic processes, i.e., not as fast fission.

  18. Comparison of methods used to estimate coral cover in the Hawaiian Islands

    Directory of Open Access Journals (Sweden)

    Paul L. Jokiel

    2015-05-01

Nine coral survey methods were compared at ten sites in various reef habitats with different levels of coral cover in Kāne‘ohe Bay, O’ahu, Hawaiʻi. Mean estimated coverage at the different sites ranged from less than 10% cover to greater than 90% cover. The methods evaluated include line transects, various visual and photographic belt transects, video transects and visual estimates. At each site 25 m transect lines were laid out and secured. Observers skilled in each method measured coral cover at each site. The time required to run each transect, time required to process data and time to record the results were documented. Cost of hardware and software for each method was also tabulated. Results of this investigation indicate that all of the methods used provide a good first estimate of coral cover on a reef. However, there were differences between the methods in detecting the number of coral species. For example, the classic “quadrat” method allows close examination of small and cryptic coral species that are not detected by other methods such as the “towboard” surveys. The time, effort and cost involved with each method varied widely, and the suitability of each method for answering particular research questions in various environments was evaluated. Results of this study support the finding of three other comparison method studies conducted at various geographic locations throughout the world. Thus, coral cover measured by different methods can be legitimately combined or compared in many situations. The success of a recent modeling effort based on coral cover data consisting of observations taken in Hawai‘i using the different methods supports this conclusion.

  19. Comparison of methods used to estimate coral cover in the Hawaiian Islands.

    Science.gov (United States)

    Jokiel, Paul L; Rodgers, Kuʻulei S; Brown, Eric K; Kenyon, Jean C; Aeby, Greta; Smith, William R; Farrell, Fred

    2015-01-01

    Nine coral survey methods were compared at ten sites in various reef habitats with different levels of coral cover in Kāne'ohe Bay, O'ahu, Hawai'i. Mean estimated coverage at the different sites ranged from less than 10% cover to greater than 90% cover. The methods evaluated include line transects, various visual and photographic belt transects, video transects and visual estimates. At each site 25 m transect lines were laid out and secured. Observers skilled in each method measured coral cover at each site. The time required to run each transect, time required to process data and time to record the results were documented. Cost of hardware and software for each method was also tabulated. Results of this investigation indicate that all of the methods used provide a good first estimate of coral cover on a reef. However, there were differences between the methods in detecting the number of coral species. For example, the classic "quadrat" method allows close examination of small and cryptic coral species that are not detected by other methods such as the "towboard" surveys. The time, effort and cost involved with each method varied widely, and the suitability of each method for answering particular research questions in various environments was evaluated. Results of this study support the finding of three other comparison method studies conducted at various geographic locations throughout the world. Thus, coral cover measured by different methods can be legitimately combined or compared in many situations. The success of a recent modeling effort based on coral cover data consisting of observations taken in Hawai'i using the different methods supports this conclusion.

  20. An investigation of methods for free-field comparison calibration of measurement microphones

    DEFF Research Database (Denmark)

    Barrera-Figueroa, Salvador; Moreno Pescador, Guillermo; Jacobsen, Finn

    2010-01-01

Free-field comparison calibration of measurement microphones requires that a calibrated reference microphone and a test microphone are exposed to the same sound pressure in a free field. The output voltages of the microphones can be measured either sequentially or simultaneously. The sequential method requires the sound field to have good temporal stability. The simultaneous method requires instead that the sound pressure is the same in the positions where the microphones are placed. In this paper the results of the application of the two methods are compared. A third combined method...

  1. A Comparison between the Effect of Cooperative Learning Teaching Method and Lecture Teaching Method on Students' Learning and Satisfaction Level

    Science.gov (United States)

    Mohammadjani, Farzad; Tonkaboni, Forouzan

    2015-01-01

    The aim of the present research is to investigate a comparison between the effect of cooperative learning teaching method and lecture teaching method on students' learning and satisfaction level. The research population consisted of all the fourth grade elementary school students of educational district 4 in Shiraz. The statistical population…

  2. Estimation of the specific activity of radioiodinated gonadotrophins: comparison of three methods

    Energy Technology Data Exchange (ETDEWEB)

    Englebienne, P [Centre for Research and Diagnosis in Endocrinology, Kain (Belgium); Slegers, G [Akademisch Ziekenhuis, Ghent (Belgium). Lab. voor Analytische Chemie

    1983-01-14

The authors compared 3 methods for estimating the specific activity of radioiodinated gonadotrophins. Two of the methods (column recovery and isotopic dilution) gave similar results, while the third (autodisplacement) gave significantly higher estimates. In the autodisplacement method, B/T ratios, obtained when either labelled hormone alone, or labelled and unlabelled hormone together, are added to the antibody, were compared as estimates of the mass of hormone iodinated. It is likely that immunologically unreactive impurities present in the labelled hormone solution invalidate such a comparison.

  3. A Simulation-Based Study on the Comparison of Statistical and Time Series Forecasting Methods for Early Detection of Infectious Disease Outbreaks.

    Science.gov (United States)

    Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho

    2018-05-11

Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based not only on 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, but also on real-world daily and weekly data related to diarrheal infection. The algorithms were evaluated using different metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Overall, the comparison showed better performance for the EARS C3 method than for the other algorithms regardless of the characteristics of the underlying time series data, although Holt-Winters performed better when the baseline frequency and the dispersion parameter were less than 1.5 and 2, respectively.
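Of the algorithms compared, CUSUM is the simplest to sketch: it accumulates standardized deviations above the baseline mean and raises an alert when the sum crosses a decision interval. The parameter values below (k = 0.5, h = 4) are conventional defaults, not the study's settings:

```python
def cusum_alerts(counts, baseline_mean, baseline_sd, k=0.5, h=4.0):
    """One-sided CUSUM: accumulate standardized excesses over the
    baseline and flag days where the cumulative sum exceeds the
    decision interval h (k is the usual reference value)."""
    s = 0.0
    alerts = []
    for x in counts:
        s = max(0.0, s + (x - baseline_mean) / baseline_sd - k)
        alerts.append(s > h)
    return alerts

# flat baseline of ~10 cases/day, then a sustained jump to 20
flags = cusum_alerts([10] * 5 + [20] * 3, baseline_mean=10.0, baseline_sd=2.0)
```

With these defaults the jump is flagged on its first day, since a single standardized excess of 4.5 already crosses h.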

  4. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    Science.gov (United States)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
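The standard Bayesian machinery referred to above rests on the Metropolis-Hastings algorithm. A minimal random-walk MH sampler, shown here on a toy one-dimensional log-posterior rather than the WASMOD likelihoods, looks like this:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=1.0, seed=42):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:   # accept/reject step
            x, lp = xp, lpp
        chain.append(x)
    return chain

# toy target: a standard-normal "posterior" (log density up to a constant)
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 5000)
```

After discarding a burn-in, the chain's sample mean and variance should approach 0 and 1 for this target; in a real calibration `log_post` would combine the hydrological model's likelihood with the parameter priors.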

  5. Comparison of two split-window methods for retrieving land surface temperature from MODIS data

    Science.gov (United States)

    Zhao, Shaohua; Qin, Qiming; Yang, Yonghui; Xiong, Yujiu; Qiu, Guoyu

    2009-08-01

Land surface temperature (LST) is a key parameter in environmental and earth science studies, especially for monitoring drought. The objective of this work is a comparison of two split-window methods, the Mao method and the Sobrino method, for retrieving LST using MODIS (Moderate-resolution Imaging Spectroradiometer) data in the North China Plain. The results show that the max, min and mean errors of the Mao method are 1.33 K, 1.54 K and 0.13 K lower than the standard LST product respectively, while those of the Sobrino method are 0.73 K, 1.46 K and 1.50 K higher than the standard respectively. Validation of the two methods against the LST product based on weather stations shows good agreement for the Sobrino method, with an RMSE of 1.17 K, whereas the RMSE of the Mao method is 1.85 K. Finally, the study introduces the Sobmao method, which is based on the Sobrino method but simplifies the estimation of atmospheric water vapour content using the Mao method. The Sobmao method has almost the same accuracy as the Sobrino method. With high accuracy and a simplified water vapour estimation, the Sobmao method is recommendable for LST inversion, with good application in the Ningxia region of northwest China (mean error of 0.33 K, RMSE of 0.91 K).
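Split-window methods of the kind compared here share the generic form LST = T31 + B*(T31 - T32) + C, exploiting the differential water-vapour absorption between MODIS bands 31 and 32. A sketch with placeholder coefficients; in both the Mao and Sobrino methods B and C are functions of band emissivities and atmospheric transmittance, not constants:

```python
def split_window_lst(t31, t32, b=2.0, c=0.0):
    """Generic split-window retrieval: LST = T31 + B*(T31 - T32) + C,
    with brightness temperatures in kelvin. B and C here are arbitrary
    placeholders, not the coefficients of either compared method."""
    return t31 + b * (t31 - t32) + c
```

The brightness-temperature difference (T31 - T32) grows with atmospheric water vapour, which is why the correction term is proportional to it.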

  6. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    Science.gov (United States)

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. The analysis of interrater reliability and validity was performed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the comparisons of the methods, and the presence of systematic bias was not observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.
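The Bland-Altman analysis used above is straightforward to reproduce: the bias is the mean of the paired differences and the 95% limits of agreement are bias +/- 1.96 SD. A minimal sketch (the paired values in the usage are invented, not the study's measurements):

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements
    from two methods or raters (lists of equal length)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# invented paired viability percentages from two raters
bias, lower, upper = bland_altman([50.0, 60.0, 70.0, 80.0],
                                  [48.0, 61.0, 69.0, 78.0])
```

A systematic bias would show up as a mean difference far from zero relative to the limits of agreement.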

  7. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    Science.gov (United States)

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.

  9. Comparison of calculational methods for liquid metal reactor shields

    International Nuclear Information System (INIS)

    Carter, L.L.; Moore, F.S.; Morford, R.J.; Mann, F.M.

    1985-09-01

A one-dimensional comparison is made between Monte Carlo (MCNP), discrete ordinates (ANISN), and diffusion theory (MlDX) calculations of neutron flux and radiation damage from the core of the Fast Flux Test Facility (FFTF) out to the reactor vessel. Diffusion theory was found to be reasonably accurate for the calculation of both total flux and radiation damage. However, for large distances from the core, the calculated flux at very high energies is low by an order of magnitude or more when diffusion theory is used. Particular emphasis was placed in this study on the generation of multitable cross sections for use in discrete ordinates codes that are self-shielded consistently with the self-shielding employed in the generation of cross sections for use with diffusion theory. The Monte Carlo calculation, with a pointwise representation of the cross sections, was used as the benchmark for determining the limitations of the other two calculational methods. 12 refs., 33 figs

  10. Comparison of the dose evaluation methods for criticality accident

    International Nuclear Information System (INIS)

    Shimizu, Yoshio; Oka, Tsutomu

    2004-01-01

Improving the dose evaluation method for criticality accidents is important for rationalizing the design of nuclear fuel cycle facilities. The source spectra of neutrons and gamma rays in a criticality accident depend on the condition of the source: its materials, moderation, density and so on. A comparison of dose evaluation methods for a criticality accident is made, and several methods combining criticality calculation with shielding calculation are proposed. Prompt neutron and gamma-ray doses from nuclear criticality of some uranium systems have been evaluated in the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by several calculation methods, each a combination of code and cross-section library, as follows: (a) SAS1X (ENDF/B-IV), (b) MCNP4C (ENDF/B-VI)-ANISN (DLC23E or JSD120), (c) MCNP4C-MCNP4C (ENDF/B-VI). Each consists of a criticality calculation and a shielding calculation. These calculation methods are compared in terms of the tissue absorbed dose and the spectra at 2 m from the source. (author)

  11. Comparison and transfer testing of multiplex ligation detection methods for GM plants

    Directory of Open Access Journals (Sweden)

    Ujhelyi Gabriella

    2012-01-01

Background: With the increasing number of GMOs on the global market, the maintenance of European GMO regulations is becoming more complex. For the analysis of a single food or feed sample it is necessary to assess the sample for the presence of many GMO targets simultaneously at a sensitive level. Several methods have been published regarding DNA-based multidetection. The multiplex ligation detection methods described all use the same basic approach: (i) hybridisation and ligation of specific probes, (ii) amplification of the ligated probes, and (iii) detection and identification of the amplified products. Although they all share this basis, the published ligation methods differ radically. The present study investigated with real-time PCR whether these different ligation methods have any influence on the performance of the probes. The sensitivity and specificity of the padlock probes (PLPs) with the best-performing ligation protocol were also tested, and the selected method was initially validated in a laboratory exchange study. Results: Of the ligation protocols tested in this study, the best results were obtained with the PPLMD I and PPLMD II protocols, and no consistent differences between these two protocols were observed. Both protocols are based on padlock probe ligation combined with microarray detection. Twenty PLPs were tested for specificity and the best probes were subjected to further evaluation. Up to 13 targets were detected specifically and simultaneously. During the interlaboratory exchange study similar results were achieved by the two participating institutes (NIB, Slovenia, and RIKILT, the Netherlands). Conclusions: From the comparison of ligation protocols it can be concluded that two protocols perform equally well on the basis of the selected set of PLPs. Using the most ideal parameters the multiplicity of one of the methods was tested and 13 targets were successfully and specifically detected.

  12. A comparison study of size-specific dose estimate calculation methods

    Energy Technology Data Exchange (ETDEWEB)

Parikh, Roshni A. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children's Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)

    2018-01-15

The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. To compare the accuracy of thickness vs. weight measurement of body size for the calculation of SSDE in pediatric body CT, we retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically most thin, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically most thick; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide
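Method A's size metric and the SSDE conversion itself are simple to compute: the effective diameter is the geometric mean of the anteroposterior and lateral dimensions, and SSDE scales CTDIvol by an exponentially size-dependent factor. The fit coefficients below are the AAPM Report 204 values for a 32-cm phantom as recalled here; treat them as an assumption to verify, not a reference:

```python
import math

def effective_diameter(ap_cm, lat_cm):
    """Effective diameter = sqrt(AP * LAT), in cm."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol, eff_diam_cm, a=3.704369, b=0.03671937):
    """SSDE = f * CTDIvol, with the conversion factor
    f = a * exp(-b * effective_diameter). The a, b values are quoted
    from memory for the 32-cm phantom fit; verify before any real use."""
    return ctdi_vol * a * math.exp(-b * eff_diam_cm)
```

The exponential form captures the key behavior: smaller patients receive a larger dose than CTDIvol alone suggests, so the conversion factor decreases monotonically with effective diameter.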

  13. An Enzymatic Clinical Chemistry Laboratory Experiment Incorporating an Introduction to Mathematical Method Comparison Techniques

    Science.gov (United States)

    Duxbury, Mark

    2004-01-01

    An enzymatic laboratory experiment based on the analysis of serum is described that is suitable for students of clinical chemistry. The experiment incorporates an introduction to mathematical method-comparison techniques in which three different clinical glucose analysis methods are compared using linear regression and Bland-Altman difference…

  14. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  15. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

A Wireless Integrated Information Network (WMN) consists of integrated information nodes that can gather data from their surroundings, such as images and voice. Transmitting this information requires substantial resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  16. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
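Once propensity scores have been estimated (for example by logistic regression), the matching step itself reduces to pairing each treated unit with the nearest-scoring control. A greedy 1:1 sketch without replacement, with invented unit IDs and scores:

```python
def nn_match(treated, control):
    """Greedy 1:1 nearest-neighbour matching without replacement.
    treated and control are lists of (unit_id, propensity_score)."""
    pool = dict(control)                 # id -> score, still unmatched
    pairs = []
    for uid, score in sorted(treated, key=lambda t: t[1]):
        if not pool:
            break
        cid = min(pool, key=lambda c: abs(pool[c] - score))
        pairs.append((uid, cid))
        del pool[cid]                    # match without replacement
    return pairs

# invented example: two treated units, three candidate controls
pairs = nn_match([("t1", 0.30), ("t2", 0.70)],
                 [("c1", 0.29), ("c2", 0.65), ("c3", 0.90)])
```

Production implementations add caliper limits and balance diagnostics, but the pairing logic is the core of the estimator the paper contrasts with regression-based alternatives.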

  17. An empirical comparison of several recent epistatic interaction detection methods.

    Science.gov (United States)

    Wang, Yue; Liu, Guimei; Feng, Mengling; Wong, Limsoon

    2011-11-01

Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods, TEAM, BOOST, SNPHarvester, SNPRuler and Screen and Clean (SC), are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effect and BOOST performs best on data without main effect. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester; SC does not control the type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64-bit Ubuntu system with an Intel Xeon 2.66 GHz CPU and 16 GB memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and its cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effect and 60% of datasets with main effect are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php. BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/. Contact: wangyue@nus.edu.sg.

  18. German precursor study: methods and results

    International Nuclear Information System (INIS)

    Hoertner, H.; Frey, W.; von Linden, J.; Reichart, G.

    1985-01-01

This study was prepared by the GRS under contract to the Federal Minister of the Interior. Its purpose is to show how the application of system-analytic tools, and especially of probabilistic methods, to the Licensee Event Reports (LERs) and other operating experience can support a deeper understanding of the safety-related importance of the events reported in reactor operation, the identification of possible weak points, and further conclusions to be drawn from the events. Additionally, the study aimed at a comparison of its results for the severe core damage frequency with those of the German Risk Study, as far as this is possible and useful. The German Precursor Study is a plant-specific study. The reference plant is the Biblis NPP with its very similar Units A and B, whereby the latter was also the reference plant for the German Risk Study.

  19. Diversity Indices as Measures of Functional Annotation Methods in Metagenomics Studies

    KAUST Repository

    Jankovic, Boris R.

    2016-01-26

    Applications of high-throughput techniques in metagenomics studies produce massive amounts of data. Fragments of genomic, transcriptomic and proteomic molecules are all found in metagenomics samples. Laborious and meticulous effort in sequencing and functional annotation are then required to, amongst other objectives, reconstruct a taxonomic map of the environment that metagenomics samples were taken from. In addition to computational challenges faced by metagenomics studies, the analysis is further complicated by the presence of contaminants in the samples, potentially resulting in skewed taxonomic analysis. The functional annotation in metagenomics can utilize all available omics data and therefore different methods that are associated with a particular type of data. For example, protein-coding DNA, non-coding RNA or ribosomal RNA data can be used in such an analysis. These methods would have their advantages and disadvantages and the question of comparison among them naturally arises. There are several criteria that can be used when performing such a comparison. Loosely speaking, methods can be evaluated in terms of computational complexity or in terms of the expected biological accuracy. We propose that the concept of diversity that is used in the ecosystems and species diversity studies can be successfully used in evaluating certain aspects of the methods employed in metagenomics studies. We show that when applying the concept of Hill’s diversity, the analysis of variations in the diversity order provides valuable clues into the robustness of methods used in the taxonomical analysis.
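The Hill diversity the abstract builds on has a simple closed form: the Hill number of order q is D_q = (Σ p_i^q)^(1/(1-q)), with the q → 1 limit equal to the exponential of Shannon entropy. A minimal sketch (the abundance vectors are hypothetical, not from the study):

```python
import math

def hill_diversity(abundances, q):
    """Hill number of order q: (sum_i p_i^q)^(1/(1-q));
    the q -> 1 limit is exp(Shannon entropy)."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if abs(q - 1.0) < 1e-9:
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))

even = [10, 10, 10, 10]    # uniform community: every Hill number is 4
skewed = [97, 1, 1, 1]     # one dominant taxon: diversity drops with q

print(hill_diversity(even, 0))    # 4.0 = species richness
print(hill_diversity(skewed, 2))  # ~1.06: dominated by one taxon
```

Varying q, as the abstract suggests, reweights rare versus dominant taxa, so the shape of D_q over q probes how robust a taxonomic profile is to low-abundance (possibly contaminant) reads.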

  20. Monitoring uterine activity during labor: a comparison of three methods

    Science.gov (United States)

    EULIANO, Tammy Y.; NGUYEN, Minh Tam; DARMANJIAN, Shalom; MCGORRAY, Susan P.; EULIANO, Neil; ONKALA, Allison; GREGG, Anthony R.

    2012-01-01

Objective Tocodynamometry (Toco, strain-gauge technology) provides contraction frequency and approximate duration of labor contractions, but suffers frequent signal dropout necessitating re-positioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information, but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all three methods of contraction detection simultaneously in laboring women. Study Design Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG and all three curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of one or more of the devices (12) or inadequate data collection duration (2). Results In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17 compared to 0.69 ± 0.27 for Toco (p < 0.0001). Unlike Toco, EHG was not significantly affected by obesity. Conclusion Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable non-invasive alternative regardless of body habitus. PMID:23122926

  1. Comparison of the spatial landmark scatter of various 3D digitalization methods.

    Science.gov (United States)

    Boldt, Florian; Weinzierl, Christian; Hertrich, Klaus; Hirschfelder, Ursula

    2009-05-01

The aim of this study was to compare four different three-dimensional digitalization methods on the basis of the complex anatomical surface of a cleft lip and palate plaster cast, and to ascertain their accuracy when positioning 3D landmarks. A cleft lip and palate plaster cast was digitalized with the SCAN3D photo-optical scanner, the OPTIX 400S laser-optical scanner, the Somatom Sensation 64 computed tomography system and the MicroScribe MLX 3-axis articulated-arm digitizer. First, four examiners appraised by individual visual inspection the surface detail reproduction of the three non-tactile digitalization methods in comparison to the reference plaster cast. The four examiners then localized the landmarks five times at intervals of 2 weeks. This involved simply copying, or spatially tracing, the landmarks from the reference plaster cast to each model digitally reproduced by each digitalization method. Statistical analysis of the landmark distribution specific to each method was performed based on the 3D coordinates of the positioned landmarks. Visual evaluation of surface detail conformity assigned the photo-optical digitalization method an average score of 1.5, the highest subjectively determined conformity (surpassing the computed tomographic and laser-optical methods). The tactile scanning method revealed the lowest degree of 3D landmark scatter, 0.12 mm, and at 1.01 mm the lowest maximum 3D landmark scatter; it was followed by the computed tomographic, photo-optical and laser-optical methods (in that order). This study demonstrates that the landmarks' precision and reproducibility are determined by the complexity of the reference-model surface as well as the digital surface quality and the individual ability of each evaluator to capture 3D spatial relationships. The differences in 3D-landmark scatter and in lowest maximum 3D-landmark scatter between the best and the worst methods were minor.
The measurement results in this study reveal that it

  2. A Comparison of underground opening support design methods in jointed rock mass

    International Nuclear Information System (INIS)

    Gharavi, M.; Shafiezadeh, N.

    2008-01-01

It is of great importance to consider the long-term stability of the rock mass around the openings of an underground structure during design, construction and operation. In this context, three methods, namely empirical, analytical and numerical, have been applied to design and analyze the stability of underground infrastructure at the Siah Bisheh Pumping Storage Hydro-Electric Power Project in Iran. The geological and geotechnical data utilized in this article were selected based on the preliminary studies of this project. In the initial stages of design, it was recommended that two rock mass classification methods, Q and Rock Mass Rating (RMR), be utilized for the support system of the underground cavern. Next, based on the structural instability, the support system was adjusted by the analytical method. The performance of the recommended support system was reviewed by comparing the ground response curve and the rock support interaction with the surrounding rock mass, using FEST03 software. Moreover, for further assessment of the realistic rock mass behavior and support system, numerical modeling was performed utilizing FEST03 software. Finally, both the analytical and numerical methods were compared and yielded satisfactory results complementing each other.

  3. Analysis and research on Maximum Power Point Tracking of Photovoltaic Array with Fuzzy Logic Control and Three-point Weight Comparison Method

    Institute of Scientific and Technical Information of China (English)

    LIN; Kuang-Jang; LIN; Chii-Ruey

    2010-01-01

The photovoltaic array has an optimal operating point at which it delivers maximum power. However, the optimal operating point can be shifted by the strength and angle of solar radiation and by changes of environment and load. Due to the constant changes in these conditions, it has become very difficult to locate the optimal operating point by following a mathematical model. Therefore, this study focuses on the application of Fuzzy Logic Control theory and the Three-point Weight Comparison Method in an effort to locate the optimal operating point of the solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power. Fuzzy Logic Control, on the other hand, can be used to solve problems that cannot be effectively dealt with by calculation rules, such as concepts, contemplation, deductive reasoning, and identification. This paper therefore applies these two methods in successive simulations. The simulation results show that the Three-point Weight Comparison Method is more effective in environments with more frequent changes of solar radiation, whereas Fuzzy Logic Control has better tracking efficiency in environments with violent changes of solar radiation.
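As a rough illustration of the three-point idea, the sketch below samples power at three neighboring operating voltages and steps toward the heavier side; the P-V curve, step size and convergence loop are hypothetical placeholders, not the authors' implementation:

```python
def three_point_step(pv_power, v, dv):
    """One iteration of a three-point weight comparison:
    sample power at v-dv, v and v+dv, then move toward the heavier side."""
    p_left, p_mid, p_right = pv_power(v - dv), pv_power(v), pv_power(v + dv)
    if p_right > p_mid >= p_left:
        return v + dv          # curve still rising: step right
    if p_left > p_mid >= p_right:
        return v - dv          # curve falling: step left
    return v                   # peak bracketed by the three points: hold

# Hypothetical concave P-V curve with its maximum at 17 V.
pv = lambda v: -(v - 17.0) ** 2 + 100.0

v = 12.0
for _ in range(60):
    v = three_point_step(pv, v, 0.5)
print(v)  # settles at 17.0, the maximum power point of this toy curve
```

On a static curve the iteration simply hill-climbs; the weighting of the three comparisons matters precisely in the changing-irradiance conditions the abstract discusses, which this toy example does not model.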

  4. Recent amendments of the KTA 2101.2 fire barrier resistance rating method for German NPP and comparison to the Eurocode t-equivalent method

    Energy Technology Data Exchange (ETDEWEB)

    Forell, Burkhard [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany)

    2015-12-15

    The German nuclear standard KTA2101 on ''Fire Protection in Nuclear Power Plants'', Part 2: ''Fire Protection of Structural Plant Components'' includes a simplified method for the fire resistance rating of fire barrier elements based on the t-equivalent approach. The method covers the specific features of compartments in nuclear power plant buildings in terms of the boundary conditions which have to be expected in the event of fire. The method has proven to be relatively simple and straightforward to apply. The paper gives an overview of amendments with respect to the rating method made within the regular review of the KTA 2101.2. A comparison to the method of the non-nuclear Eurocode 1 is also provided. The Eurocode method is closely connected to the German standard DIN 18230 on structural fire protection in industrial buildings. Special emphasis of the comparison is given to the ventilation factor, which has a large impact on the required fire resistance.

  5. Comparison Study of Three Common Technologies for Freezing-Thawing Measurement

    Directory of Open Access Journals (Sweden)

    Xinbao Yu

    2010-01-01

Full Text Available This paper describes a comparison study of three different technologies (i.e., thermocouple, electrical resistivity probe and Time Domain Reflectometry (TDR)) that are commonly used for frost measurement. Specifically, the paper develops an analysis procedure to estimate the freezing-thawing status based on the dielectric properties of freezing soil. Experiments were conducted in which temperature, electrical resistivity, and dielectric constant were simultaneously monitored during the freezing/thawing process. The comparison uncovered the advantages and limitations of these technologies for frost measurement. The experimental results indicated that the TDR-measured soil dielectric constant clearly indicates the different stages of the freezing/thawing process. An analysis method was developed to determine not only the onset of freezing or thawing, but also the extent of their development. This is a major advantage of TDR over the other technologies.

  6. Comparison of different methods to extract the required coefficient of friction for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon

    2012-01-01

The required coefficient of friction (RCOF) is an important predictor for slip incidents. Despite the wide use of the RCOF there is no standardised method for identifying the RCOF from ground reaction forces. This article presents a comparison of the outcomes from seven different methods, derived from those reported in the literature, for identifying the RCOF from the same data. While commonly used methods are based on a normal force threshold, percentage of stance phase or time from heel contact, a newly introduced hybrid method is based on a combination of normal force, time and direction of increase in coefficient of friction. Although no major differences were found with these methods in more than half the strikes, significant differences were found in a substantial portion of strikes. Potential problems with some of these methods were identified and discussed, and they appear to be overcome by the hybrid method. No standard method exists for determining the required coefficient of friction (RCOF), an important predictor for slipping. In this study, RCOF values from a single data set, using various methods from the literature, differed considerably for a substantial portion of strikes. A hybrid method may yield improved results.
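For intuition, one of the threshold-style criteria mentioned above can be sketched as follows: the coefficient-of-friction trace is the ratio of resultant shear to normal force, evaluated only where the normal force exceeds some fraction of body weight, and its peak is taken as the RCOF. The force samples and the 50% threshold are illustrative assumptions; the hybrid method adds timing and slope conditions not shown here:

```python
import math

def rcof_normal_force_threshold(fx, fy, fz, body_weight, frac=0.5):
    """Peak required COF over samples where the vertical (normal) force
    exceeds a fraction of body weight -- one threshold-style criterion."""
    threshold = frac * body_weight
    cofs = [math.hypot(x, y) / z            # resultant shear / normal
            for x, y, z in zip(fx, fy, fz) if z > threshold]
    return max(cofs) if cofs else None

# Hypothetical force-plate samples (N) around heel contact.
fx = [5, 40, 90, 60, 30]     # anterior-posterior shear
fy = [0, 10, 20, 10, 5]      # medio-lateral shear
fz = [100, 450, 600, 650, 640]  # vertical force
print(rcof_normal_force_threshold(fx, fy, fz, body_weight=700))
```

Changing `frac`, or replacing the threshold with a time-from-heel-contact window, changes which samples are admitted and hence the reported RCOF, which is the sensitivity the article quantifies.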

  7. A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies

    Directory of Open Access Journals (Sweden)

    Víctor Villena-Martínez

    2017-01-01

Full Text Available RGB-D (Red Green Blue and Depth) sensors are devices that can provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions due to their commercial growth from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). In the research community, these devices have had good uptake due to their acceptable level of accuracy for many applications and their low cost, but in some cases, they work at the limit of their sensitivity, near to the minimum feature size that can be perceived. For this reason, calibration processes are critical in order to increase their accuracy and enable them to meet the requirements of such kinds of applications. To the best of our knowledge, there is no comparative study of calibration algorithms evaluating their results on multiple RGB-D sensors. Specifically, in this paper, a comparison of the three most used calibration methods has been applied to three different RGB-D sensors based on structured light and time-of-flight. The comparison of methods has been carried out by a set of experiments to evaluate the accuracy of depth measurements. Additionally, an object reconstruction application has been used as an example of an application for which the sensor works at the limit of its sensitivity. The obtained reconstruction results have been evaluated through visual inspection and quantitative measurements.

  8. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors.

    Science.gov (United States)

    Li, Frédéric; Shirahama, Kimiaki; Nisar, Muhammad Adeel; Köping, Lukas; Grzegorzek, Marcin

    2018-02-24

Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches, in particular deep-learning-based ones, have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we firstly propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long Short-Term Memory (LSTM) layers to obtain features characterising both short- and long-term time dependencies in the data.

  9. A Study on Influencing Factors of Knowledge Management Systems Adoption: Models Comparison Approach

    OpenAIRE

    Mei-Chun Yeh; Ming-Shu Yuan

    2007-01-01

Using the Linear Structural Relation model (LISREL model) as the analysis method, and the technology acceptance model and the decomposed theory of planned behavior as research foundations, this study approaches mainly from the angle of behavioral intention the factors influencing 421 employees' adoption of knowledge management systems, and at the same time compares the two method models mentioned above. According to the research, there is no, in comparison with the technology acceptance model and deco...

  10. A Comparison Study between Two MPPT Control Methods for a Large Variable-Speed Wind Turbine under Different Wind Speed Characteristics

    Directory of Open Access Journals (Sweden)

    Dongran Song

    2017-05-01

Full Text Available Variable speed wind turbines (VSWTs) usually adopt a maximum power point tracking (MPPT) method to optimize energy capture performance. Nevertheless, the performance offered by different MPPT methods may be affected by the wind turbine (WT)'s inertia and by wind speed characteristics, and this needs to be clarified. In this paper, the tip speed ratio (TSR) and optimal torque (OT) methods are investigated in terms of their performance under different wind speed characteristics on a 1.5 MW wind turbine model. To this end, the TSR control method based on an effective wind speed estimator and the OT control method are firstly presented. Then, their performance is investigated and compared through simulation test results under different wind speeds using Bladed software. Comparison results show that the TSR control method can capture slightly more wind energy than the OT method, at the cost of higher component loads, under all wind conditions. Furthermore, it is found that both control methods present similar trends of power reduction relevant to mean wind speed and turbulence intensity. From the obtained results, we demonstrate that, to further improve the MPPT capability of large VSWTs, other advanced control methods using wind speed prediction information need to be addressed.
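The two control laws compared in the abstract have standard textbook forms: the OT method commands a generator torque T = K_opt * omega^2 with K_opt = 0.5 * rho * pi * R^5 * Cp_max / lambda_opt^3, while the TSR method commands the rotor speed omega = lambda_opt * v / R from an estimated wind speed v. A minimal sketch with hypothetical turbine parameters (not those of the 1.5 MW model in the study):

```python
import math

RHO = 1.225         # air density (kg/m^3)
R = 38.5            # rotor radius (m) -- hypothetical turbine
CP_MAX = 0.48       # assumed maximum power coefficient
LAMBDA_OPT = 8.1    # assumed optimal tip speed ratio

def ot_torque_ref(omega):
    """Optimal-torque (OT) reference: T = K_opt * omega^2."""
    k_opt = 0.5 * RHO * math.pi * R ** 5 * CP_MAX / LAMBDA_OPT ** 3
    return k_opt * omega ** 2

def tsr_speed_ref(wind_speed):
    """TSR reference: rotor speed that holds lambda at lambda_opt."""
    return LAMBDA_OPT * wind_speed / R

omega_ref = tsr_speed_ref(10.0)   # rad/s for a 10 m/s wind speed estimate
print(omega_ref, ot_torque_ref(omega_ref))
```

The contrast the paper studies is visible in the structure: TSR reacts directly to the (estimated) wind speed, while OT only sees rotor speed, so the rotor's inertia filters its response to turbulence.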

  11. Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows

    Science.gov (United States)

    Norris, Andrew T.; Hsu, Andrew T.

    1994-01-01

    In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However for flows with finite rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2- air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.

  12. A Comparison of Central Composite Design and Taguchi Method for Optimizing Fenton Process

    Directory of Open Access Journals (Sweden)

    Anam Asghar

    2014-01-01

Full Text Available In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. Initial dye concentration (Dyeini), Dye : Fe+2, H2O2 : Fe+2, and pH were identified as control variables, while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and a face-centered CCD were used for the experimental design. Maximum 99% decolorization and 80% COD removal efficiency were obtained under optimum conditions. R-squared values of 0.97 and 0.95 for CCD and the Taguchi method, respectively, indicate that both models are statistically significant and in good agreement with each other. Furthermore, Prob > F less than 0.0500 and the ANOVA results indicate the good fit of the selected models to the experimental results. Nevertheless, the possibility of ranking the input variables in terms of percent contribution to the response value makes the Taguchi method a suitable approach for scrutinizing the operating parameters. For the present case, pH, with percent contributions of 87.62% and 66.2%, was ranked as the most significant factor. This finding of the Taguchi method was also verified by the 3D contour plots of CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications.
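The Taguchi ranking discussed above rests on signal-to-noise ratios; for responses such as decolorization efficiency, the larger-the-better form S/N = -10 * log10(mean(1/y^2)) applies. A minimal sketch with made-up replicate values (not the study's data):

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio, larger-the-better form:
    S/N = -10 * log10( mean(1 / y^2) )."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / len(values))

# Hypothetical decolorization efficiencies (%) from replicated runs
# at two different factor-level combinations of an L9 array.
run_a = [99.0, 98.5]
run_b = [80.0, 82.0]
print(sn_larger_is_better(run_a))  # higher S/N -> better, more robust level
print(sn_larger_is_better(run_b))
```

Averaging such S/N values over the runs sharing each factor level, and decomposing their variation into sums of squares, yields the percent-contribution ranking (e.g., pH dominating) that the abstract highlights.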

  13. A METHOD OF INTEGRATED ASSESSMENT OF LIVER FIBROSIS DEGREE: A COMPARISON OF THE OPTICAL METHOD FOR THE STUDY OF RED BLOOD CELLS AND INDIRECT ELASTOGRAPHY OF THE LIVER

    Directory of Open Access Journals (Sweden)

    M. V. Kruchinina

    2017-01-01

Full Text Available The presented method of integrated assessment of the degree of liver fibrosis is based on comparing data obtained by studying the electric and viscoelastic parameters of erythrocytes with the dielectrophoresis method, using an electro-optical cell detection system, against data from indirect elastometry. A high degree of comparability between the results of the above methods was established for fibrosis degrees F2-4 in the absence of marked cytolysis, cholestasis, inflammatory syndrome, or metal overload. It is shown that the parallel use of dielectrophoresis and indirect elastography is needed in the presence of a rise of transaminases or gamma-glutamyltranspeptidase of more than 5 times the norm, marked dysproteinemia, or syndromes of iron or copper overload, to increase the accuracy of determining the degree of liver fibrosis. The evaluation of the dynamics of changes in the degree of fibrosis during antiviral therapy is more accurate by indirect elastometry, while dielectrophoresis is preferable in the treatment of nonalcoholic fatty liver disease. In cases of restrictions on the use of indirect elastography (marked obesity, ascites, cholelithiasis, pregnancy, presence of a pacemaker or prosthesis), dielectrophoresis of red blood cells can be used to determine the degree of fibrosis. Simultaneous use of both methods (using the identified discriminatory values) allows their disadvantages and their dependence on associated syndromes to be reduced or neutralized, the possibilities of their application to be expanded, and the diagnostic accuracy in determining each of the degrees of liver fibrosis to be improved. Integrated application of both methods, indirect elastography and dielectrophoresis of red blood cells, for determining the degree of liver fibrosis achieves high sensitivity (88.9 percent) and specificity (100 percent) compared to

  14. Comparison of multivariate methods for studying the G×E interaction

    Directory of Open Access Journals (Sweden)

    Deoclécio Domingos Garbuglio

    2015-12-01

    Full Text Available The objective of this work was to evaluate three statistical multivariate methods for analyzing adaptability and environmental stratification simultaneously, using data from maize cultivars indicated for planting in the State of Paraná-Brazil. Under the FGGE and GGE methods, the genotypic effect adjusts the G×E interactions across environments, resulting in a high percentage of explanation associated with a smaller number of axes. Environmental stratification via the FGGE and GGE methods showed similar responses, while the AMMI method did not ensure grouping of environments. The adaptability analysis revealed low divergence patterns of the responses obtained through the three methods. Genotypes P30F35, P30F53, P30R50, P30K64 and AS 1570 showed high yields associated with general adaptability. The FGGE method allowed differences in yield responses in specific regions and the impact in locations belonging to the same environmental group (through rE to be associated with the level of the simple portion of the G×E interaction.

  15. A comparison between NASCET and ECST methods in the study of carotids

    International Nuclear Information System (INIS)

    Saba, Luca; Mallarini, Giorgio

    2010-01-01

Purpose: The NASCET and ECST systems for quantifying carotid artery stenosis use percent diameter ratios from conventional angiography. With the use of multi-detector-row CT scanners it is possible to easily measure plaque area and residual lumen in order to calculate the degree of carotid stenosis. Our purpose was to compare the NASCET and ECST techniques in the measurement of carotid stenosis degree using MDCTA. Methods and material: From February 2007 to October 2007, 83 non-consecutive patients (68 males; 15 females) were studied using multi-detector-row CT. Each patient was assessed by two experienced radiologists for stenosis degree using both the NASCET and ECST methods. Statistical analysis was performed to determine the degree of correlation (Pearson's method) between NASCET and ECST. The Cohen kappa test and Bland-Altman analysis were applied to assess the level of inter- and intra-observer agreement. Results: The Pearson correlation coefficient between NASCET and ECST was 0.962 (p < 0.01). Intra-observer agreement in the NASCET evaluation, by Cohen's statistic, was 0.844 and 0.825. Intra-observer agreement in the ECST evaluation was 0.871 and 0.836. Inter-observer agreement for NASCET and ECST was 0.822 and 0.834, respectively. Agreement analysis using Bland-Altman plots showed good intra-/inter-observer agreement for NASCET and optimal intra-/inter-observer agreement for ECST. Conclusions: The results of our study suggest that the NASCET and ECST methods show a strong correlation according to quadratic regression. Intra-observer agreement was high for both NASCET and ECST.
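The two stenosis definitions compared here differ only in the reference diameter: NASCET references the normal distal internal carotid diameter, while ECST references the estimated original vessel diameter at the level of the stenosis. A minimal sketch with hypothetical measurements:

```python
def nascet(residual_lumen, distal_normal_diameter):
    """NASCET percent stenosis: residual lumen vs. normal distal ICA diameter."""
    return 100.0 * (1.0 - residual_lumen / distal_normal_diameter)

def ecst(residual_lumen, estimated_original_diameter):
    """ECST percent stenosis: residual lumen vs. estimated original
    (bulb-level) vessel diameter at the stenosis."""
    return 100.0 * (1.0 - residual_lumen / estimated_original_diameter)

# Hypothetical diameters in mm measured on MDCTA.
print(nascet(2.0, 5.0))  # 60.0 % stenosis
print(ecst(2.0, 8.0))    # 75.0 % stenosis at the same site
```

Because the bulb-level reference diameter is larger than the distal one, ECST systematically reports a higher percentage for the same lesion, which is why the two scales correlate strongly but nonlinearly, as the study's quadratic regression reflects.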

  16. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    Science.gov (United States)

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.

  17. Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.

    Science.gov (United States)

    Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L

    2013-06-01

    Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increased methodological inclusiveness and the production of evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011-2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration to preserve the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.

  18. Comparison of two solution ways of district heating control: Using analysis methods, using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Balate, J.; Sysala, T. [Technical Univ., Zlin (Czech Republic). Dept. of Automation and Control Technology

    1997-12-31

District Heating Systems (DHS; Centralized Heat Supply Systems, CHSS) are being developed in large cities in accordance with their growth. The systems are formed by enlarging the networks of heat distribution to consumers, which at the same time interconnect the gradually built heat sources. The heat is distributed to the consumers through circular networks supplied by several cooperating heat sources, that is, by combined heat and power plants and heating plants. The complicated process of heat production and supply requires a systems approach when designing the concept of automated control. The paper deals with a comparison of a solution using analysis methods with one using artificial intelligence methods. (orig.)

  19. Comparison of Potentiometric and Gravimetric Methods for Determination of O/U Ratio

    International Nuclear Information System (INIS)

    Farida; Windaryati, L; Putro Kasino, P

    1998-01-01

    Comparison of the determination of the O/U ratio using potentiometric and gravimetric methods has been done. Both methods are simple, economical and have high precision and accuracy. Determination of the O/U ratio for UO2 powder by the potentiometric method is carried out by adopting the Davies-Gray method. This technique is based on the redox reactions of uranium species such as U(IV) and U(VI). In the gravimetric method, the UO2 powder sample is calcined at a temperature of 900 C, and the weight of the sample is measured after the calcination process. The Student's t-test shows no significant difference between the results of the two methods. However, for low concentrations in the sample, the potentiometric method has higher precision and accuracy compared to the gravimetric method. The O/U ratio obtained is 2.00768 ± 0.00170 for the potentiometric method and 2.01089 ± 0.02395 for the gravimetric method.
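    The t-test comparison reported above can be sketched numerically. This is a minimal illustration, not the study's actual data: the replicate O/U values below are hypothetical, and Welch's unequal-variance form of the test is used.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical replicate O/U ratios for the two methods (not the study's data)
potentiometric = [2.0077, 2.0065, 2.0086, 2.0071, 2.0082]
gravimetric    = [2.0109, 2.0302, 1.9901, 2.0255, 1.9987]

t, df = welch_t(potentiometric, gravimetric)
# |t| well below the ~2 critical value suggests no significant difference
# at the 95% level, consistent with the study's conclusion.
```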

  20. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    Science.gov (United States)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats well the uncertainty in the extreme flows of hydrological models' simulations. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and by traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian models: the AR(1) plus Normal, time-period-independent model (Model 1), the AR(1) plus Normal, time-period-dependent model (Model 2) and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
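    The Nash-Sutcliffe efficiency used above to rank the simulations is straightforward to compute. A minimal sketch with made-up discharge series (not WASMOD output):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values <= 0 mean the model predicts no better
    than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical daily discharge (m^3/s): observations vs. two model runs
obs  = [12.0, 15.0, 30.0, 55.0, 40.0, 22.0, 16.0]
sim1 = [11.0, 16.0, 28.0, 52.0, 42.0, 21.0, 15.0]  # close fit
sim2 = [20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0]  # constant guess

nse1 = nash_sutcliffe(obs, sim1)  # near 1: good simulation
nse2 = nash_sutcliffe(obs, sim2)  # below 0: worse than the mean
```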

  1. A comparison of Nodal methods in neutron diffusion calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tavron, Barak [Israel Electric Company, Haifa (Israel) Nuclear Engineering Dept. Research and Development Div.

    1996-12-01

    The nuclear engineering department at IEC uses three neutron diffusion codes based on nodal methods in its reactor analysis. The codes GNOMER, ADMARC and NOXER solve the neutron diffusion equation to obtain flux and power distributions in the core. The resulting flux distributions are used for fuel cycle analysis and for fuel reload optimization. This work presents a comparison of the various nodal methods employed in the above codes. Nodal methods (also called coarse-mesh methods) have been designed to solve problems that contain relatively coarse areas of homogeneous composition. In the nodal method, the parts of the equation that represent the state in the homogeneous area are solved analytically while, according to various assumptions and continuity requirements, a general solution is sought. Thus the efficiency of the method for this kind of problem is very high compared with the finite element and finite difference methods. On the other hand, using this method one can get only approximate information about the node vicinity (or coarse-mesh area, usually a fuel assembly of about 20 cm size). These characteristics of the nodal method make it suitable for fuel cycle analysis and reload optimization. This analysis requires many subsequent calculations of the flux and power distributions for the fuel assemblies, while there is no need for a detailed distribution within the assembly. For obtaining a detailed distribution within the assembly, methods of power reconstruction may be applied. However, the homogenization of fuel assembly properties required for the nodal method may cause difficulties when applied to fuel assemblies with many absorber rods, due to the existing strong heterogeneity of neutron properties within the assembly. (author).

  2. Sampling methods for the study of pneumococcal carriage: a systematic review.

    Science.gov (United States)

    Gladstone, R A; Jefferies, J M; Faust, S N; Clarke, S C

    2012-11-06

    Streptococcus pneumoniae is an important pathogen worldwide. Accurate sampling of S. pneumoniae carriage is central to surveillance studies before and following conjugate vaccination programmes to combat pneumococcal disease. Any bias introduced during sampling will affect downstream recovery and typing. Many variables exist for the method of collection and initial processing, which can make inter-laboratory or international comparisons of data complex. In February 2003, a World Health Organisation working group published a standard method for the detection of pneumococcal carriage for vaccine trials, to reduce or eliminate variability. We sought to describe the variables associated with the sampling of S. pneumoniae, from collection to storage, in the context of the methods recommended by the WHO and those used in pneumococcal carriage studies since its publication. A search of published literature in the online PubMed database was performed on 1 June 2012 to identify published studies that collected pneumococcal carriage isolates, conducted after the publication of the WHO standard method. After undertaking a systematic analysis of the literature, we show that a number of differences in pneumococcal sampling protocol continue to exist between studies since the WHO publication. The majority of studies sample from the nasopharynx, but the choice of swab and swab transport media is more variable between studies. At present there is insufficient experimental data to support the optimal sensitivity of any standard method. This may have contributed to incomplete adoption of the primary stages of the WHO detection protocol, alongside pragmatic or logistical issues associated with study design. Consequently, studies may not provide a true estimate of pneumococcal carriage. Optimal sampling of carriage could lead to improvements in downstream analysis and the evaluation of pneumococcal vaccine impact and extrapolation to pneumococcal disease control therefore

  3. Comparison of the quantitative dry culture methods with both conventional media and most probable number method for the enumeration of coliforms and Escherichia coli/coliforms in food.

    Science.gov (United States)

    Teramura, H; Sota, K; Iwasaki, M; Ogihara, H

    2017-07-01

    Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, the sensitivity and specificity of both Sanita-kun™ media met the criteria of ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and the 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ media for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples. The correlation coefficients for E. coli detection of Sanita-kun™ EC were more than 0·95 for all comparisons. There were no significant differences in any comparison when conducting a one-way analysis of variance (ANOVA). Both Sanita-kun™ media significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliforms. Our results demonstrated that Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and Escherichia coli/coliforms suffer enzymatic coloration due to the breakdown of chromogenic substrates by food lactase. Novel sheet culture media, which have a film layer to avoid coloration by food lactase, have been developed for the enumeration of coliforms and E. coli/coliforms, respectively. In this study, we demonstrated that these media had comparable performance with reference methods and less interference by food lactase. These media have a possibility not only

  4. Detection of circulating immune complexes by Raji cell assay: comparison of flow cytometric and radiometric methods

    International Nuclear Information System (INIS)

    Kingsmore, S.F.; Crockard, A.D.; Fay, A.C.; McNeill, T.A.; Roberts, S.D.; Thompson, J.M.

    1988-01-01

    Several flow cytometric methods for the measurement of circulating immune complexes (CIC) have recently become available. We report a Raji cell flow cytometric assay (FCMA) that uses aggregated human globulin (AHG) as primary calibrator. Technical advantages of the Raji cell flow cytometric assay are discussed, and its clinical usefulness is evaluated in a method comparison study with the widely used Raji cell immunoradiometric assay. FCMA is more precise and has greater analytic sensitivity for AHG. Diagnostic sensitivity of the flow cytometric method is superior in systemic lupus erythematosus (SLE), rheumatoid arthritis, and vasculitis patients; diagnostic specificity is similar for both assays, but the reference interval of FCMA is narrower. Significant correlations were found between CIC levels obtained with both methods in SLE, rheumatoid arthritis, and vasculitis patients and in longitudinal studies of two patients with cerebral SLE. The Raji cell FCMA is recommended to clinical laboratories with access to a flow cytometer for the measurement of CIC levels.

  6. A method comparison of total and HMW adiponectin: HMW/total adiponectin ratio varies versus total adiponectin, independent of clinical condition

    NARCIS (Netherlands)

    van Andel, Merel; Drent, Madeleine L.; van Herwaarden, Antonius E.; Ackermans, Mariëtte T.; Heijboer, Annemieke C.

    2017-01-01

    Background: Total and high-molecular-weight (HMW) adiponectin have been associated with endocrine and cardiovascular pathology. As no gold standard is available, the discussion about biological relevance of isoforms is complicated. In our study we perform a method comparison between two commercially

  7. Antimicrobial activity evaluation and comparison of methods of susceptibility for Klebsiella pneumoniae carbapenemase (KPC)-producing Enterobacter spp. isolates.

    Science.gov (United States)

    Rechenchoski, Daniele Zendrini; Dambrozio, Angélica Marim Lopes; Vivan, Ana Carolina Polano; Schuroff, Paulo Alfonso; Burgos, Tatiane das Neves; Pelisson, Marsileni; Perugini, Marcia Regina Eches; Vespero, Eliana Carolina

    The production of KPC (Klebsiella pneumoniae carbapenemase) is the major mechanism of resistance to carbapenem agents in enterobacteria. In this context, forty KPC-producing Enterobacter spp. clinical isolates were studied. The activity of the antimicrobial agents polymyxin B, tigecycline, ertapenem, imipenem and meropenem was evaluated, and a comparison was performed of the methodologies used to determine susceptibility: broth microdilution, Etest® (bioMérieux), the Vitek 2® automated system (bioMérieux) and disc diffusion. The minimum inhibitory concentration (MIC) was calculated for each antimicrobial, and polymyxin B showed the lowest concentrations by broth microdilution. Errors were also calculated among the techniques; tigecycline and ertapenem were the antibiotics with the largest and the smallest numbers of discrepancies, respectively. Moreover, the Vitek 2® automated system was the method most similar to broth microdilution. Therefore, it is important to evaluate the performance of new methods in comparison to the reference method, broth microdilution. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  8. A BAND SELECTION METHOD FOR SUB-PIXEL TARGET DETECTION IN HYPERSPECTRAL IMAGES BASED ON LABORATORY AND FIELD REFLECTANCE SPECTRAL COMPARISON

    Directory of Open Access Journals (Sweden)

    S. Sharifi hashjin

    2016-06-01

    In recent years, developing target detection algorithms for hyperspectral images has received growing interest. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest to detect bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve the efficiency of the two well-known target detection methods ACE and CEM.

  9. Monitoring uterine activity during labor: a comparison of 3 methods.

    Science.gov (United States)

    Euliano, Tammy Y; Nguyen, Minh Tam; Darmanjian, Shalom; McGorray, Susan P; Euliano, Neil; Onkala, Allison; Gregg, Anthony R

    2013-01-01

    Tocodynamometry (Toco; strain gauge technology) provides contraction frequency and approximate duration of labor contractions, but suffers frequent signal dropout, necessitating repositioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information, but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all 3 methods of contraction detection simultaneously in laboring women. Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG, and all 3 curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of 1 or more of the devices (n = 12) or inadequate data collection duration (n = 2). In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI): the mean CCI for EHG was 0.88 ± 0.17, compared with 0.69 ± 0.27 for Toco. Unlike Toco, EHG was not significantly affected by obesity. Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable noninvasive alternative, regardless of body habitus. Copyright © 2013 Mosby, Inc. All rights reserved.

  10. Quantitative comparison of analysis methods for spectroscopic optical coherence tomography: reply to comment

    NARCIS (Netherlands)

    Bosschaart, Nienke; van Leeuwen, Ton; Aalders, Maurice C.G.; Faber, Dirk

    2014-01-01

    We reply to the comment by Kraszewski et al. on “Quantitative comparison of analysis methods for spectroscopic optical coherence tomography.” We present additional simulations evaluating the proposed window function. We conclude that our simulations show good qualitative agreement with the results of

  11. The Comparison Study of Neutron Activation Analysis and Fission Track Technique for Uranium Determination

    International Nuclear Information System (INIS)

    Sirinuntavid, Alice; Rodthongkom, Chouvana

    2007-08-01

    Comparison between Neutron Activation Analysis (NAA) and the fission track technique for uranium determination in solid samples was studied using standard reference materials, i.e., ore, coal fly ash and soil. For NAA, epithermal neutrons were used for the activation irradiation; the 74.5 keV gamma from U-239 or the 277.7 keV gamma from Np-239 was then measured. For samples with high uranium content, the NAA method with 74.5 keV gamma measurement gave higher-precision results than the 277.7 keV gamma measurement. The NAA method with 277.7 keV gamma measurement gave higher sensitivity and precision for samples with low uranium content, i.e., samples containing less than 10 ppm uranium. Nevertheless, the latter procedure needed a longer time for neutron irradiation and analysis. Comparing the results of uranium analysis between NAA and fission track, no significant difference was found at the 95% confidence level.

  12. Comparison of Flood Frequency Analysis Methods for Ungauged Catchments in France

    Directory of Open Access Journals (Sweden)

    Jean Odry

    2017-09-01

    The objective of flood frequency analysis (FFA) is to associate flood intensity with a probability of exceedance. Many methods are currently employed for this, ranging from statistical distribution fitting to simulation approaches. In many cases the site of interest is actually ungauged, and a regionalisation scheme has to be associated with the FFA method, leading to a multiplication of the number of possible methods available. This paper presents the results of a wide-range comparison of FFA methods from the statistical and simulation families, associated with different regionalisation schemes based on regression, or on spatial or physical proximity. The methods are applied to a set of 1535 French catchments, and a k-fold cross-validation procedure is used to represent the ungauged configuration. The results suggest that FFA methods from the statistical family rely largely on the regionalisation step, whereas the simulation-based method is more stable with regard to regionalisation. This conclusion emphasises the difficulty of the regionalisation process. The results are also contrasted depending on the type of climate: the Mediterranean catchments tend to aggravate the differences between the methods.
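    The k-fold cross-validation used above to emulate the ungauged configuration can be sketched as follows; the catchment identifiers are hypothetical.

```python
def k_fold_splits(items, k):
    """Partition items into k folds; each fold serves once as the
    'ungauged' validation set while the remaining folds are treated
    as the gauged catchments used for regionalisation."""
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield training, validation

# Hypothetical catchment IDs (the study uses 1535 French catchments)
catchments = [f"site_{n:02d}" for n in range(10)]
splits = list(k_fold_splits(catchments, 5))
# Each catchment appears exactly once as 'ungauged' across the 5 folds.
```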

  13. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre-reinforced thermoplastics could reduce development time and improve forming results. But to exploit the full potential of the simulations, it has to be ensured that the predictions of material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are, for example, the outer contour, the occurrence of defects and the fibre paths. Various methods are available to measure these features. Most relevant, and also most difficult to measure, are the emerging fibre orientations; for that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and to select the most promising systems for a comparison survey. Selected were an optical system, an eddy current system and a computer-assisted tomography system, with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system, lower plies can also be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system, all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  14. A comparison of the weights-of-evidence method and probabilistic neural networks

    Science.gov (United States)

    Singer, Donald A.; Kouda, Ryoichi

    1999-01-01

    The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake-Andeson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Although these data contain too few deposits, these tests demonstrate the neural network's ability to make unbiased probability estimates and to achieve lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias where most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes' rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake-Andeson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits.
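    The chi-square test for independence mentioned above can be illustrated on a 2x2 contingency table of two binary evidence layers; the cell counts below are invented for illustration, not the Chisel Lake data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]]: compares observed counts with those expected
    under independence. Values above 3.84 reject independence at
    the 95% level (1 degree of freedom)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row[i] * col[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical presence/absence counts for two evidence layers
independent_layers = [[25, 25], [25, 25]]  # observed == expected
correlated_layers  = [[45, 5], [5, 45]]    # strongly associated
```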

  15. Comparison of a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography.

    Science.gov (United States)

    Stuberg, W A; Colerick, V L; Blanke, D J; Bruce, W

    1988-08-01

    The purpose of this study was to compare a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography in a gait analysis laboratory. Ten children with a diagnosis of cerebral palsy (mean age = 8.8 ± 2.7 years) and 9 healthy children (mean age = 8.9 ± 2.4 years) participated in the study. Stride length, walking velocity, and goniometric measurements of the hip, knee, and ankle were recorded using the two gait analysis methods. A multivariate analysis of variance was used to determine significant differences between the data collected using the two methods. Pearson product-moment correlation coefficients were determined to examine the relationship between the measurements recorded by the two methods. The consistency of performance of the subjects during walking was examined by intraclass correlation coefficients. No significant differences were found between the methods for the variables studied. Pearson product-moment correlation coefficients ranged from .79 to .95, and intraclass coefficients ranged from .89 to .97. The clinical gait analysis method was found to be a valid tool in comparison with 16-mm cinematography for the variables that were studied.
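    A Pearson product-moment correlation of the kind reported above can be sketched as follows; the paired stride-length measurements are hypothetical, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical stride lengths (cm) for the same subjects measured by
# videography and by 16-mm cinematography
video = [92.0, 85.5, 101.2, 78.4, 96.7, 88.1]
cine  = [93.1, 84.9, 100.0, 79.8, 95.2, 89.0]

r = pearson_r(video, cine)
# r close to 1 indicates strong agreement between the two methods.
```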

  16. Estimation of potential evapotranspiration of a coastal savannah environment; comparison of methods

    International Nuclear Information System (INIS)

    Asare, D.K.; Ayeh, E.O.; Amenorpe, G.; Banini, G.K.

    2011-01-01

    Six potential evapotranspiration (PET) models, namely Penman-Monteith, Hargreaves-Samani, Priestley-Taylor, IRMAK1, IRMAK2 and TURC, were used to estimate daily PET values at Atomic-Kwabenya in the coastal savannah environment of Ghana for the year 2005. The study compared the PET values generated by the six models and identified which ones compared favourably with the Penman-Monteith model, the recommended standard method for estimating PET. Cross-comparison analysis showed that only the daily PET estimates of the Hargreaves-Samani model correlated reasonably (r = 0.82) with estimates by the Penman-Monteith model. Additionally, PET values by the Priestley-Taylor and TURC models were highly correlated (r = 0.99), as were those generated by the IRMAK2 and TURC models (r = 0.96). Statistical analysis, based on pair comparison of means, showed that the daily PET estimates of the Penman-Monteith model were not different from those of the Priestley-Taylor model for the Kwabenya-Atomic area located in the coastal savannah environment of Ghana. The Priestley-Taylor model can therefore be used, in place of the Penman-Monteith model, to estimate daily PET for the Atomic-Kwabenya area. The Hargreaves-Samani model can also be used to estimate PET for the study area because its estimates correlated reasonably with those of the Penman-Monteith model (r = 0.82) and it requires only air temperature measurements as inputs. (au)
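    Of the models compared above, Hargreaves-Samani is the simplest, needing only air temperatures plus extraterrestrial radiation (which depends only on latitude and day of year). A sketch of its commonly cited form; the input values are hypothetical, not the study's data.

```python
import math

def hargreaves_samani(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    t_mean, t_max, t_min in deg C; ra is extraterrestrial radiation
    expressed as equivalent evaporation (mm/day)."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Hypothetical conditions for a warm coastal-savannah day
pet = hargreaves_samani(t_mean=27.0, t_max=32.0, t_min=23.0, ra=15.0)
# pet is a few mm/day, a plausible magnitude for such conditions
```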

  17. EPA (Environmental Protection Agency) Method Study 12, cyanide in water. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Winter, J.; Britton, P.; Kroner, R.

    1984-05-01

    EPA Method Study 12, Cyanide in Water reports the results of a study by EMSL-Cincinnati for the parameters Total Cyanide and Cyanides Amenable to Chlorination, present in water at microgram-per-liter levels. Four methods: pyridine-pyrazolone, pyridine-barbituric acid, electrode and Roberts-Jackson were used by 112 laboratories in Federal and State agencies, municipalities, universities, and the private/industrial sector. Sample concentrates were prepared in pairs with similar concentrations at each of three levels. Analysts diluted the samples to volume with distilled and natural waters and analyzed them. Precision, accuracy, bias and the natural-water interference were evaluated for each analytical method, and comparisons were made between the four methods.

  18. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Frédéric Li

    2018-02-01

    Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches, in particular deep-learning based ones, have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we first propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long Short-Term Memory (LSTM) layers to obtain features characterising both short- and long-term time dependencies in the data.

  19. A comparison of methods to determine tannin acyl hydrolase activity

    Directory of Open Access Journals (Sweden)

    Cristóbal Aguilar

    1999-01-01

    Six methods to determine the activity of tannase produced by Aspergillus niger Aa-20 on polyurethane foam by solid-state fermentation, comprising two titrimetric techniques, three spectrophotometric methods and one HPLC assay, were tested and compared. All methods assayed enabled the measurement of extracellular tannase activity. However, only five were useful to evaluate intracellular tannase activity. Studies on the effect of pH on tannase extraction demonstrated that tannase activity was considerably under-estimated when its extraction was carried out at pH values below 5.5 and above 6.0. Results showed that the HPLC technique and the modified Bajpai and Patil methods presented several advantages in comparison to the other methods tested.

  20. Hydrogeochemical methods for studying uranium mineralization in sedimentary rocks

    International Nuclear Information System (INIS)

    Lisitsin, A.K.

    1985-01-01

    The role of hydrogeochemical studies of uranium deposits is considered; such studies permit data to be obtained on the ore-forming role of water solutions. The hydrogeochemistry of ore formation is determined as a result of physicochemical analysis of mineral parageneses. Analysis results for the contents of primary and secondary gas-liquid inclusions in the minerals are of great importance. Another way to determine the main features of the hydrogeochemistry of ore formation envisages simultaneous analysis of material from a number of deposits of one genetic type but in different periods of their geochemical life: being formed, formed and preserved, and being destroyed. Comparison of the mineralogical-geochemical zonation with the hydrogeochemical zonation in the water-bearing horizon is an efficient method, resulting in an objective interpretation of the facts. The comparison is compulsory when determining deposit genesis.

  1. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and some disadvantages relative to other kinds of codes, such as Monte Carlo codes. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions and can therefore lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions.
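The deterministic point-kernel idea the abstract describes reduces, for a monoenergetic source behind a slab, to exponential attenuation corrected by a build-up factor. A minimal sketch in Python (illustrative coefficients, not MicroShield or MCNP output):

```python
import math

def shielded_dose_rate(d0, mu, x, buildup=1.0):
    """Point-kernel estimate: unshielded dose rate d0, linear attenuation
    coefficient mu (1/cm), slab thickness x (cm); the build-up factor
    accounts for scattered photons that still reach the detector."""
    return d0 * buildup * math.exp(-mu * x)

# Illustrative values (not from the study): ~662 keV photons in lead,
# mu ~ 1.25 /cm, 2 cm slab, build-up factor ~ 1.4.
narrow_beam = shielded_dose_rate(100.0, 1.25, 2.0)       # scatter ignored
broad_beam = shielded_dose_rate(100.0, 1.25, 2.0, 1.4)   # with build-up
```

Neglecting the build-up factor under-predicts the transmitted dose, and extrapolating tabulated build-up factors outside their energy range is exactly the error source the abstract warns about.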

  2. Comparison of deterministic and Monte Carlo methods in shielding design

    International Nuclear Information System (INIS)

    Oliveira, A. D.; Oliveira, C.

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and some disadvantages relative to other kinds of codes, such as Monte Carlo codes. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions and can therefore lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. (authors)

  3. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models: the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, borrowed from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain (the threshold method, a genetic algorithm and the Tabu search method); the tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program

  4. The Semianalytical Solutions for Stiff Systems of Ordinary Differential Equations by Using Variational Iteration Method and Modified Variational Iteration Method with Comparison to Exact Solutions

    Directory of Open Access Journals (Sweden)

    Mehmet Tarik Atay

    2013-01-01

    Full Text Available The Variational Iteration Method (VIM) and Modified Variational Iteration Method (MVIM) are used to find solutions of systems of stiff ordinary differential equations for both linear and nonlinear problems. Some examples are given to illustrate the accuracy and effectiveness of these methods. We compare our results with exact results. In some studies related to stiff ordinary differential equations, problems were solved by the Adomian Decomposition Method, VIM and the Homotopy Perturbation Method. Comparisons with exact solutions reveal that the Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are easier to implement. In fact, these methods are promising for various systems of linear and nonlinear stiff ordinary differential equations. Furthermore, VIM, or in some cases MVIM, gives exact solutions in linear cases and very satisfactory solutions when compared to exact solutions for nonlinear cases, depending on the stiffness ratio of the stiff system to be solved.
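For a simple linear test problem y' + y = 0, y(0) = 1 (a toy case, not one of the paper's stiff systems), the VIM correction functional with Lagrange multiplier lambda = -1 reads y_{n+1}(t) = y_n(t) - Int_0^t (y_n'(s) + y_n(s)) ds; each sweep adds one Taylor term of the exact solution exp(-t). A sketch with polynomials stored as coefficient lists:

```python
import math

def padd(p, q):
    """Add two polynomials given as coefficient lists (lowest degree first)."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def deriv(p):
    return [k * c for k, c in enumerate(p)][1:] or [0.0]

def integ(p):
    # antiderivative with zero constant term, i.e. the integral from 0 to t
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def vim_step(y):
    # y_{n+1}(t) = y_n(t) - Int_0^t (y_n'(s) + y_n(s)) ds, with lambda = -1
    correction = integ(padd(deriv(y), y))
    return padd(y, [-c for c in correction])

y = [1.0]                      # initial guess y_0(t) = 1 satisfies y(0) = 1
for _ in range(4):
    y = vim_step(y)
# y is now [1, -1, 1/2, -1/6, 1/24]: the degree-4 Taylor polynomial of exp(-t)
```

Each iteration reproduces one more term of the exact solution, which is the sense in which VIM "gives exact solutions in linear cases" as the abstract states.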

  5. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is an incentive to replace the conservative requirements for code application by a "best estimate" concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced "best estimate" thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limits of the uncertainty ranges.
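Methods that propagate input uncertainties through a sampled set of code runs commonly size the number of runs with Wilks' formula; for a one-sided 95%/95% tolerance limit the classical answer is 59 runs. A sketch of that calculation (standard order-statistics background, not taken from the UMS report itself):

```python
def wilks_runs(coverage=0.95, confidence=0.95):
    """Smallest n such that the maximum of n random code runs bounds the
    `coverage` quantile of the output with probability >= `confidence`
    (one-sided, first-order Wilks formula: 1 - coverage**n >= confidence)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_runs())            # 59 runs for the classical 95%/95% case
print(wilks_runs(0.95, 0.99))  # 90 runs at 99% confidence
```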

  6. Comparison of selected methods for the enumeration of fecal coliforms and Escherichia coli in shellfish.

    Science.gov (United States)

    Grabow, W O; De Villiers, J C; Schildhauer, C I

    1992-09-01

    In a comparison of five selected methods for the enumeration of fecal coliforms and Escherichia coli in naturally contaminated and sewage-seeded mussels (Choromytilus spp.) and oysters (Ostrea spp.), a spread-plate procedure with mFC agar without rosolic acid and preincubation proved to be the method of choice for routine quality assessment.

  7. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    Science.gov (United States)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]

  8. Experimental comparison of the different methods for seismic qualification of electrical cabinets

    International Nuclear Information System (INIS)

    Buland, P.; Gauthier, G.; Simon, D.

    1993-01-01

    This paper presents the results of an experimental seismic study performed on a cabinet equipped with 96 acceleration-sensitive relays located in four racks. The aim of this study is to verify the validity of the seismic qualification method proposed in the IEC 980 standard. The cabinet was first tested on a shaking table and relay chatter was monitored. The interactions between the racks and the cabinet frame induce shocks, which were correlated with some of the contact openings. The racks were afterwards tested individually with the accelerations recorded during the cabinet test, and relay chatter was compared between the two test phases. An important reduction of relay chatter was noticed during the rack tests. This is because it is not possible to fully represent on a shaking table the complex vibration environment (shocks) sustained by a rack in a cabinet.

  9. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparing the performance of four statistical and machine learning methods, Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash-costs-based approach for comparing crash severity prediction methods; and investigating the effects of two data clustering methods, K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated on the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes; RF and SVM performed next best, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost exactly the opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
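The crash-costs-based accuracy idea can be sketched in a few lines: weight each correctly classified crash by its cost, so severe crashes dominate the score. The severity labels and unit costs below are hypothetical, not the Nebraska figures:

```python
# Hypothetical unit costs per severity level (not the study's figures).
COSTS = {"PDO": 1, "injury": 10, "fatal": 100}

def cost_weighted_accuracy(y_true, y_pred, costs=COSTS):
    """Share of the total crash cost attached to correctly classified crashes."""
    total = sum(costs[t] for t in y_true)
    correct = sum(costs[t] for t, p in zip(y_true, y_pred) if t == p)
    return correct / total

y_true = ["PDO", "PDO", "injury", "fatal"]
y_pred = ["PDO", "PDO", "PDO", "fatal"]
print(round(cost_weighted_accuracy(y_true, y_pred), 4))  # 0.9107
```

Here the plain correct-prediction rate is 0.75, but because the one error is a cheap property-damage-only misclassification of an injury crash, the cost-weighted score stays high; a classifier that missed the fatal crash instead would score far lower, which is the divergence between the two measures the abstract reports.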

  10. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, restricted maximum likelihood, is studied. In the situation when the methods' variances are considered known, an upper bound on the between-method variance is obtained. The relationship between the likelihood equations and moment-type equations is also discussed.
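A moment-type estimator of the kind mentioned in the closing sentence can be illustrated with the DerSimonian-Laird device from meta-analysis: estimate the between-method variance tau^2 from the weighted spread of the method means (shown as generic background, not the paper's exact equations; the means and variances below are invented):

```python
def dersimonian_laird(means, variances):
    """Moment estimate of the between-method variance tau^2 from each
    method's mean and within-method variance."""
    w = [1.0 / v for v in variances]                       # precision weights
    sw = sum(w)
    ybar = sum(wi * m for wi, m in zip(w, means)) / sw     # weighted grand mean
    q = sum(wi * (m - ybar) ** 2 for wi, m in zip(w, means))
    c = sw - sum(wi ** 2 for wi in w) / sw
    k = len(means)
    return max(0.0, (q - (k - 1)) / c)                     # truncate at zero

# Three hypothetical methods measuring the same quantity.
tau2 = dersimonian_laird([10.1, 10.4, 9.6], [0.04, 0.09, 0.01])
```

When the observed spread Q is no larger than its expectation under homogeneity (k - 1), the estimate is truncated at zero, i.e. no between-method variance is detected.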

  11. Development of a method for realization of the spectral irradiance scale featuring a system of spectral comparisons

    International Nuclear Information System (INIS)

    Skerovic, V; Zarubica, V; Aleksic, M; Zekovic, L; Belca, I

    2010-01-01

    Realization of the scale of spectral responsivity of detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with detector spectral responsivity calibrations by means of a primary spectrophotometric system. Linearity testing and stray-light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.
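The final step of such a detector-based realization is essentially irradiance = optical power / aperture area, with the power recovered from the photocurrent via the calibrated spectral responsivity; this is why the aperture-diameter measurement enters the uncertainty budget. A hedged sketch with invented values:

```python
import math

def spectral_irradiance(photocurrent_a, responsivity_a_per_w, aperture_diameter_m):
    """Irradiance [W/m^2] at one wavelength: photocurrent [A] divided by the
    calibrated spectral responsivity [A/W] gives the optical power, which is
    then divided by the measured aperture area."""
    area = math.pi * (aperture_diameter_m / 2.0) ** 2
    return photocurrent_a / responsivity_a_per_w / area

# Invented values: 2 uA photocurrent, 0.25 A/W responsivity, 5 mm aperture.
e_irr = spectral_irradiance(2.0e-6, 0.25, 5.0e-3)
```

A relative error in the aperture diameter enters the result twice (through the area), which makes its measurement one of the larger contributors in a budget of this kind.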

  12. Development of a method for realization of the spectral irradiance scale featuring a system of spectral comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Skerovic, V; Zarubica, V; Aleksic, M [Directorate of measures and precious metals, Optical radiation Metrology department, Mike Alasa 14, 11000 Belgrade (Serbia); Zekovic, L; Belca, I, E-mail: vladanskerovic@dmdm.r [Faculty of Physics, Department for Applied physics and metrology, Studentski trg 12-16, 11000 Belgrade (Serbia)

    2010-10-15

    Realization of the scale of spectral responsivity of detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with detector spectral responsivity calibrations by means of a primary spectrophotometric system. Linearity testing and stray-light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  13. Scaling Professional Problems of Teachers in Turkey with Paired Comparison Method

    Directory of Open Access Journals (Sweden)

    Yasemin Duygu ESEN

    2017-03-01

    Full Text Available In this study, teachers' professional problems were investigated and their significance levels were measured with the paired comparison method. The study was carried out using a survey model. The study group consisted of 484 teachers working in public schools accredited by the Ministry of National Education (MEB) in Turkey. "The Teacher Professional Problems Survey", developed by the researchers, was used as the data collection tool. In the data analysis, the scaling method based on the third conditional equation of Thurstone's law of comparative judgement was used. According to the results of the study, the teachers' professional problems include teacher training and teacher quality, employee rights and financial problems, decline of professional reputation, problems with MEB policies, problems with union activities, workload, problems with school administration, physical conditions and lack of infrastructure, problems with parents, and problems with students. According to the teachers, the most significant problem is MEB educational policies. This is followed by decline of professional reputation, physical conditions and lack of infrastructure, problems with students, employee rights and financial problems, problems with school administration, teacher training and teacher quality, problems with parents, workload, and problems with union activities. When the professional problems were analyzed by the seniority variable, there was little difference in scale values. While teachers with 0-10 years of experience consider the decline of professional reputation the most important problem, teachers with 11-45 years of experience put problems with MEB policies first.
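Thurstone's Case V solution used in such scaling studies can be sketched as follows: convert each pairwise proportion "problem j judged more serious than problem i" to a standard normal deviate and average down the columns. The proportions below are invented for illustration, not the survey's data:

```python
from statistics import NormalDist

def thurstone_case_v(P):
    """Thurstone Case V scaling. P[i][j] is the proportion of judges who
    rated item j above item i (so P[i][j] + P[j][i] = 1 and P[i][i] = 0.5).
    Returns scale values shifted so the lowest item sits at 0."""
    z = NormalDist().inv_cdf
    n = len(P)
    cols = [sum(z(P[i][j]) for i in range(n)) / n for j in range(n)]
    lo = min(cols)
    return [c - lo for c in cols]

# Invented proportions for three problems A, B, C.
P = [[0.5, 0.7, 0.9],
     [0.3, 0.5, 0.8],
     [0.1, 0.2, 0.5]]
scales = thurstone_case_v(P)   # A anchored at 0, C scales highest
```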

  14. Development of a method for the comparison of final repository sites in different host rock formations; Weiterentwicklung einer Methode zum Vergleich von Endlagerstandorten in unterschiedlichen Wirtsgesteinsformationen

    Energy Technology Data Exchange (ETDEWEB)

    Fischer-Appelt, Klaus; Frieling, Gerd; Kock, Ingo; and others

    2017-10-15

    The report on the development of a method for the comparison of final repository sites in different host rock formations covers the following issues: the influence of the retrievability requirement on the methodology; a study on the possible extension of the methodology to repository sites with crystalline host rocks (boundary conditions in Germany, final disposal concept for crystalline host rocks, generic extension of the VerSi method, identification, classification and relevance weighting of safety functions, relevance of the safety functions for crystalline host rock formations, review of the methodological changes needed for crystalline rock sites under low-permeability cover); and a study on the applicability of the methodology to the determination of site regions for surface exploration (phase 1).

  15. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon sets of assumptions that are incompletely formulated or rest on value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probably diminishing returns for large generic comparisons.

  16. A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes

    CSIR Research Space (South Africa)

    Haelterman, R

    2018-02-01


  17. Comparison between Different Extraction Methods for Determination of Primary Aromatic Amines in Food Simulant

    Directory of Open Access Journals (Sweden)

    Morteza Shahrestani

    2018-01-01

    Full Text Available Primary aromatic amines (PAAs) are food contaminants which may occur in packaged food. Polyurethane (PU) adhesives used in flexible packaging are the main source of PAAs: unreacted diisocyanates migrate into the foodstuff and then hydrolyze to PAAs. These PAAs include toluenediamines (TDAs) and methylenedianilines (MDAs); the PAAs selected for this study were 2,4-TDA, 2,6-TDA, 4,4′-MDA, 2,4′-MDA, and 2,2′-MDA. PAAs have genotoxic, carcinogenic, and allergenic effects. In this study, extraction methods were applied to 3% acetic acid as a food simulant spiked with the PAAs under study. The extraction methods were liquid-liquid extraction (LLE), dispersive liquid-liquid microextraction (DLLME), and solid-phase extraction (SPE) with C18 ec (octadecyl), HR-P (styrene/divinylbenzene), and SCX (strong cationic exchange) cartridges. Extracted samples were detected and analyzed by HPLC-UV. In the comparison between methods, the SCX cartridge showed the best adsorption, with recoveries up to 91% for the polar PAAs (TDAs and MDAs). The PAAs of interest are polar and relatively soluble in water, so a cartridge with cationic-exchange properties gives the best absorption and consequently the best recoveries.

  18. Radiolysis: an efficient method of studying radical antioxidant mechanisms

    International Nuclear Information System (INIS)

    Gardes-Albert, M.; Jore, D.

    1998-01-01

    The use of the radiolysis method for studying radical antioxidant mechanisms offers the following possibilities: 1- quantitative evaluation of the antioxidant activity of molecules soluble in aqueous or non-aqueous media (oxidation yields, molecular mechanisms, rate constants); 2- evaluation of the yield of prevention of polyunsaturated fatty acid peroxidation; 3- evaluation of antioxidant activity in biological systems such as liposomes or low-density lipoproteins (LDL); 4- simple comparison, in different model systems, of drug effects versus natural antioxidants. (authors)

  19. Sampling methods in archaeomagnetic dating: A comparison using case studies from Wörterberg, Eisenerz and Gams Valley (Austria)

    Science.gov (United States)

    Trapanese, A.; Batt, C. M.; Schnepp, E.

    The aim of this research was to review the relative merits of different methods of taking samples for archaeomagnetic dating. To allow different methods to be investigated, two archaeological structures and one modern fireplace were sampled in Austria. On each structure a variety of sampling methods were used: the tube and disc techniques of Clark et al. (Clark, A.J., Tarling, D.H., Noel, M., 1988. Developments in archaeomagnetic dating in Great Britain. Journal of Archaeological Science 15, 645-667), the drill core technique, the mould plastered hand block method of Thellier, and a modification of it. All samples were oriented with a magnetic compass and sun compass, where weather conditions allowed. Approximately 12 discs, tubes, drill cores or plaster hand blocks were collected from each structure, with one mould plaster hand block being collected and cut into specimens. The natural remanent magnetisation (NRM) of the samples was measured and stepwise alternating field (AF) or thermal demagnetisation was applied. Samples were measured either in the UK or in Austria, which allowed the comparison of results between magnetometers with different sensitivity. The tubes and plastered hand block specimens showed good agreement in directional results, and the samples obtained showed good stability. The discs proved to be unreliable as both NRM and the characteristic remanent magnetisation (ChRM) distribution were very scattered. The failure of some methods may be related to the suitability of the material sampled, for example if it was disturbed before sampling, had been insufficiently heated or did not contain appropriate magnetic minerals to retain a remanent magnetisation. Caution is also recommended for laboratory procedures as the cutting of poorly consolidated specimens may disturb the material and therefore the remanent magnetisation. Criteria and guidelines were established to aid researchers in selecting the most appropriate method for a particular
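Whatever the sampling technique, the demagnetised directions are normally pooled with Fisher (1953) statistics: a vector-mean declination and inclination, the precision parameter k, and the 95% confidence angle alpha95. A sketch of the standard formulas (specimen directions invented for illustration, not from the Austrian sites):

```python
import math

def fisher_mean(directions):
    """directions: list of (declination, inclination) pairs in degrees.
    Returns (mean declination, mean inclination, precision k, alpha95)."""
    xs = ys = zs = 0.0
    for dec, inc in directions:
        d, i = math.radians(dec), math.radians(inc)
        xs += math.cos(i) * math.cos(d)
        ys += math.cos(i) * math.sin(d)
        zs += math.sin(i)
    n = len(directions)
    r = math.sqrt(xs ** 2 + ys ** 2 + zs ** 2)   # resultant vector length
    dec_m = math.degrees(math.atan2(ys, xs)) % 360.0
    inc_m = math.degrees(math.asin(zs / r))
    k = (n - 1) / (n - r)                        # Fisher precision parameter
    a95 = math.degrees(math.acos(
        1.0 - (n - r) / r * ((1.0 / 0.05) ** (1.0 / (n - 1)) - 1.0)))
    return dec_m, inc_m, k, a95

# Invented, tightly clustered specimen directions (degrees).
dirs = [(10.0, 60.0), (12.0, 61.0), (11.0, 59.0), (9.0, 60.0), (10.0, 61.0)]
dec_m, inc_m, k, a95 = fisher_mean(dirs)
```

A scattered set of specimen directions, like the disc results described above, yields a small resultant R, hence a low k and a large alpha95, which is what disqualifies such a mean for dating.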

  20. Methodology or method? A critical review of qualitative case study reports

    Directory of Open Access Journals (Sweden)

    Nerida Hyett

    2014-05-01

    Full Text Available Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n=12), social sciences and anthropology (n=7), or methods (n=15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and if study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners.

  1. Methodology or method? A critical review of qualitative case study reports

    Science.gov (United States)

    Hyett, Nerida; Kenny, Amanda; Dickson-Swift, Virginia

    2014-01-01

    Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n=12), social sciences and anthropology (n=7), or methods (n=15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and if study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners. PMID:24809980

  2. Comparison of methods for measuring flux gradients in type II superconductors

    International Nuclear Information System (INIS)

    Kroeger, D.M.; Koch, C.C.; Charlesworth, J.P.

    1975-01-01

    A comparison has been made of four methods of measuring the critical current density Jc in hysteretic type II superconductors having a wide range of K and Jc values, in magnetic fields up to 70 kOe. Two of the methods, (a) resistive measurements and (b) magnetization measurements, were carried out in static magnetic fields. The other two methods involved analysis of the response of the sample to a small alternating field superimposed on the static field. The response was analyzed either (c) by measuring the third-harmonic content or (d) by integration of the waveform to obtain a measure of flux penetration. The results are discussed with reference to the agreement between the different techniques and the consistency of the critical-state hypothesis on which all these techniques are based. It is concluded that flux-penetration measurements by method (d) provide the most detailed information about Jc, but that one must be wary of minor failures of the critical-state hypothesis. Best results are likely to be obtained by using more than one method. (U.S.)
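For the magnetization route (method b), Jc is conventionally extracted through the Bean critical state model, in which the width ΔM of the hysteresis loop is proportional to Jc times a sample dimension; for an infinite slab of thickness d in a parallel field the usual CGS form is Jc [A/cm²] = 20 ΔM [emu/cm³] / d [cm]. A sketch with illustrative numbers (not values from this study):

```python
def bean_jc_slab(delta_m, thickness_cm):
    """Bean critical-state estimate for an infinite slab in parallel field:
    Jc [A/cm^2] = 20 * delta_M [emu/cm^3] / d [cm]."""
    return 20.0 * delta_m / thickness_cm

# Illustrative: 50 emu/cm^3 hysteresis width on a 1 mm thick slab.
jc = bean_jc_slab(50.0, 0.1)  # 1.0e4 A/cm^2
```

The geometric prefactor changes with sample shape (e.g. 30 for a cylinder magnetized along its axis, with d its diameter), which is one reason the different techniques in the abstract need careful cross-checking.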

  3. Comparison of conventional and nuclear-medical examination methods in the field of nephro-urology

    International Nuclear Information System (INIS)

    Pfannenstiel, P.

    1975-07-01

    The main part of this study is a further simplification of the 131I-hippuran total-body clearance, a method first described by Oberhausen (1968). As a result of the study, the clearance can now be calculated with only one scintillation detector after i.v. injection of 131I-hippuran. Essential for this simplification is the exact localisation of the detector at the right or left shoulder region of the patient. The unilateral clearance is calculated from the total-body clearance in conjunction with the simultaneously registered isotope renogram. This method was routinely applied to 4,138 patients between November 15th, 1971 and March 31st, 1975. In special groups of patients, sequential renal studies with the gamma camera, renal rectilinear scans, xenon-133 washout curves from the kidneys and radioimmunoassays of angiotensin were performed for comparison. All data were analysed in view of other clinical and radiological data. Recommendations for rational procedures in the diagnosis and control of therapy of the most common nephro-urological disorders were derived from a joint study of 16 hospitals analysing 1,979 cases. (orig.)

  4. Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements

    Science.gov (United States)

    Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura

    2017-10-01

    This paper presents volumetric calculations for different mineral aggregates using different methods of analysis, and compares the results. For these comparative studies, two licensed software packages were chosen, namely TopoLT 11.2 and Surfer 13. TopoLT is a program dedicated to the production of topographic and cadastral plans: 3D terrain models, level curves and the calculation of cut-and-fill volumes, including georeferencing of images. Surfer 13, produced by Golden Software since 1983, is actively used in various fields such as agriculture, construction, geophysical and geotechnical engineering, GIS, and water resources. It can also build GRID terrain models, produce density maps using isolines, perform volumetric calculations and draw 3D maps, and it reads different file types, including SHP, DXF and XLSX. This paper presents a comparison of volumetric calculations performed with TopoLT by two methods: one in which a single 3D model is chosen for both the bottom and top surfaces, and one in which a 3D terrain model is chosen for the bottom surface and another 3D model for the top surface. The two variants are compared with the results of volumetric calculations performed with Surfer 13 by generating a GRID terrain model. The topographic measurements were performed with Leica GPS 1200 Series equipment. Measurements were made using the Romanian position determination system ROMPOS, which ensures accurate positioning in the ETRS reference frame through the National Network of GNSS Permanent Stations. GPS data processing was performed with the Leica Geo Combined Office program. For the volumetric calculations, the GPS points are in the 1970 stereographic projection system, with altitudes referenced to the 1975 Black Sea system.
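On a regular GRID model, the cut-and-fill volume both packages report reduces to summing the signed height difference between the two surfaces over each cell. A minimal sketch (not TopoLT's or Surfer's actual algorithm; the grid and cell size are invented):

```python
def grid_volumes(z_bottom, z_top, cell_area):
    """Cut and fill volumes between two gridded surfaces.
    z_bottom, z_top: equal-shaped 2-D lists of elevations [m]; cell_area [m^2]."""
    cut = fill = 0.0
    for row_b, row_t in zip(z_bottom, z_top):
        for zb, zt in zip(row_b, row_t):
            dz = zt - zb
            if dz > 0:
                fill += dz * cell_area    # top surface above bottom surface
            else:
                cut += -dz * cell_area    # top surface below bottom surface
    return cut, fill

# Invented 2 x 2 grid with 5 m x 5 m cells.
bottom = [[100.0, 100.0], [100.0, 100.0]]
top = [[101.0, 102.0], [100.0, 99.5]]
cut, fill = grid_volumes(bottom, top, 25.0)
```

Real packages additionally interpolate both surfaces onto a common grid first, which is where the two TopoLT variants compared in the paper can start to differ.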

  5. Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.

    Science.gov (United States)

    Auerbach, Benjamin M; Ruff, Christopher B

    2004-12-01

    In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.
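The comparison workflow amounts to applying a "mechanical" and a "morphometric" estimator to the same skeletons and correlating the two estimate series. The coefficients and measurements below are hypothetical placeholders, not the published Ruff et al. equations; only the comparison logic is illustrated:

```python
# HYPOTHETICAL linear estimators standing in for the published formulae.
def mass_from_femoral_head(fh_mm, a=2.2, b=-40.0):
    """Mechanical estimate: linear in femoral head breadth (mm); placeholder coefficients."""
    return a * fh_mm + b

def mass_from_stature_biiliac(stature_cm, biiliac_cm, a=0.37, b=3.0, c=-75.0):
    """Morphometric estimate from stature and bi-iliac breadth (cm); placeholder coefficients."""
    return a * stature_cm + b * biiliac_cm + c

# (femoral head mm, stature cm, bi-iliac cm) for three invented individuals
individuals = [(45.0, 165.0, 27.0), (48.0, 172.0, 28.5), (42.0, 158.0, 26.0)]
pairs = [(mass_from_femoral_head(fh),
          mass_from_stature_biiliac(st, bi)) for fh, st, bi in individuals]

# Pearson correlation between the two estimate series (the study reports
# r = 0.74-0.81 between FH and STBIB estimates on real data).
n = len(pairs)
mx = sum(p[0] for p in pairs) / n
my = sum(p[1] for p in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs)
vx = sum((x - mx) ** 2 for x, _ in pairs)
vy = sum((y - my) ** 2 for _, y in pairs)
r = cov / (vx * vy) ** 0.5
```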

  6. Comparison of Data Fusion Methods Using Crowdsourced Data in Creating a Hybrid Forest Cover Map

    Directory of Open Access Journals (Sweden)

    Myroslava Lesiv

    2016-03-01

    Full Text Available Data fusion represents a powerful way of integrating individual sources of information to produce a better output than could be achieved by any of the individual sources on their own. This paper focuses on the data fusion of different land cover products derived from remote sensing. In the past, many different methods have been applied, without regard to their relative merit. In this study, we compared some of the most commonly-used methods to develop a hybrid forest cover map by combining available land cover/forest products and crowdsourced data on forest cover obtained through the Geo-Wiki project. The methods include: nearest neighbour, naive Bayes, logistic regression and geographically-weighted logistic regression (GWR, as well as classification and regression trees (CART. We ran the comparison experiments using two data types: presence/absence of forest in a grid cell; percentage of forest cover in a grid cell. In general, there was little difference between the methods. However, GWR was found to perform better than the other tested methods in areas with high disagreement between the inputs.
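As one illustration of the fusion step, a naive Bayes combination of two binary forest/non-forest maps can be trained on crowdsourced reference points. The toy data and 0.5 threshold below are invented for the sketch; the study's actual inputs are Geo-Wiki points and multiple land cover products:

```python
# (map_a, map_b, crowdsourced_truth) for training grid cells; 1 = forest.
train = [(1, 1, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1),
         (0, 0, 0), (0, 1, 0), (0, 0, 0), (1, 0, 0)]

def naive_bayes_forest_prob(a, b):
    """P(forest | map_a=a, map_b=b) assuming conditional independence,
    with Laplace smoothing of the conditional frequencies."""
    def cond(feature_idx, value, label):
        rows = [r for r in train if r[2] == label]
        hits = sum(1 for r in rows if r[feature_idx] == value)
        return (hits + 1) / (len(rows) + 2)  # Laplace smoothing
    p_forest = sum(1 for r in train if r[2] == 1) / len(train)
    num = p_forest * cond(0, a, 1) * cond(1, b, 1)
    den = num + (1 - p_forest) * cond(0, a, 0) * cond(1, b, 0)
    return num / den

# Hybrid map rule: call a cell forest where the posterior reaches 0.5.
fused = {(a, b): naive_bayes_forest_prob(a, b) >= 0.5
         for a in (0, 1) for b in (0, 1)}
```

The same training table could equally drive the other classifiers compared in the paper (nearest neighbour, logistic regression, CART); naive Bayes is simply the shortest to write down.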

  7. [Comparison of bite marks and teeth features using 2D and 3D methods].

    Science.gov (United States)

    Lorkiewicz-Muszyńska, Dorota; Glapiński, Mariusz; Zaba, Czesław; Łabecka, Marzena

    2011-01-01

    The nature of bite marks is complex. They are found at the scene of crime on different materials and surfaces - not only on the human body and corpse, but also on food products and material objects. Human bites on skin are sometimes difficult to interpret and analyze because of the specific character of skin - elastic and distortable - and because different areas of the human body have different surfaces and curvatures. A bite mark left at the scene of crime can be highly helpful in leading investigators to criminals. The study was performed to establish: 1) whether bite marks exhibit variations in the accuracy of impressions on different materials; 2) whether it is possible to use the 3D method in the process of identifying an individual based on the comparison of bite marks revealed at the scene and 3D scans of dental casts; 3) whether application of the 3D method allows for elimination of secondary photographic distortion of bite marks. The authors carried out experiments on simulated cases. Five volunteers bit various materials with different surfaces, and the experimental bite marks were collected with emphasis on the differentiation of materials. Subsequently, dental impressions were taken from the five volunteers in order to prepare five sets of dental casts (maxilla and mandible). The biting edges of the teeth were impressed in wax to create an imprint. The samples of dental casts, corresponding wax bite impressions and bite marks from the different materials were scanned with 2D and 3D scanners and photographs were taken. All of these were examined in detail and then compared using the different methods (2D and 3D). The findings were: 1) bite marks exhibit variations in the accuracy of impression on different materials, with the most legible reproduction of bite marks seen on cheese; 2) in the comparison of bite marks, the 3D method and 3D scans of dental casts are highly accurate; 3) the 3D method helps to eliminate secondary photographic distortion of bite marks.

  8. Comparison of three-dimensional poisson solution methods for particle-based simulation and inhomogeneous dielectrics.

    Science.gov (United States)

    Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio

    2012-07-01

    Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. 
On the other hand, when the dielectric boundary is discretized with curved surface elements, the
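Although the paper's test cases are a dielectric sphere and an ion channel model, the superposition of source and induced charges that BEM computes numerically can be illustrated with a geometry that has a simple closed form: a point charge near a planar dielectric boundary, where the induced-charge contribution reduces to an image charge (toy sketch, Gaussian units; my choice of geometry, not the paper's):

```python
import math

def potential_region1(q, d, eps1, eps2, x, y, z):
    """Potential at (x, y, z), z > 0, for charge q at (0, 0, d) in medium
    eps1 above a half-space of permittivity eps2 (boundary at z = 0)."""
    r_src = math.sqrt(x * x + y * y + (z - d) ** 2)
    r_img = math.sqrt(x * x + y * y + (z + d) ** 2)
    q_img = q * (eps1 - eps2) / (eps1 + eps2)  # induced-charge (image) term
    return (q / r_src + q_img / r_img) / eps1

# Homogeneous medium (eps2 == eps1): the induced term vanishes.
phi_hom = potential_region1(1.0, 1.0, 2.0, 2.0, 0.0, 0.0, 2.0)
# High-permittivity half-space (eps2 >> eps1): polarization screens the charge.
phi_water = potential_region1(1.0, 1.0, 2.0, 80.0, 0.0, 0.0, 2.0)
```

A collocation or qualocation BEM solver applied to this flat interface should reproduce `q_img` from its computed induced surface charge, which makes the planar case a convenient correctness check before tackling curved boundaries.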

  9. Comparison of multimedia system and conventional method in patients’ selecting prosthetic treatment

    Directory of Open Access Journals (Sweden)

    Baghai R

    2010-12-01

    Full Text Available Background and Aims: Selecting an appropriate treatment plan is one of the most critical aspects of dental treatment. The purpose of this study was to compare a multimedia system with the conventional method in patients' selection of prosthetic treatment, and the time consumed. Materials and Methods: Ninety patients were randomly divided into three groups. Patients in group A were instructed once using the conventional method of the dental office and once using the multimedia system, and time was measured in seconds from the beginning of the instruction until the patient had come to a decision. The patients were asked about their satisfaction with the method used. Patients in group B were instructed using only the conventional method, whereas those in group C were exposed only to the software. The data were analyzed with the paired t-test (in group A) and the t-test and Mann-Whitney test (in groups B and C). Results: There was a significant difference between the multimedia system and the conventional method in group A, and also between groups B and C (P<0.001). In group A, and between groups B and C, patient satisfaction was higher with the multimedia system. However, in the comparison between groups B and C, the multimedia system did not have a significant effect on the treatment selection score (P=0.08). Conclusion: Use of the multimedia system is recommended due to its high capacity for answering a large number of patients' questions, as well as for marketing purposes.
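The group A analysis is a standard paired t-test on per-patient decision times. A sketch with invented timing values (not the study's data):

```python
import math

# Decision times in seconds for the SAME six patients under both methods
# (illustrative numbers only).
conventional = [310, 295, 340, 280, 325, 300]
multimedia   = [250, 240, 270, 235, 260, 245]

# Paired t-test works on the per-patient differences.
diffs = [c - m for c, m in zip(conventional, multimedia)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))  # compare against t critical, df = n - 1
```

With df = 5, |t| above about 2.571 rejects equality of the two methods at the two-sided 5% level.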

  10. Comparison of template registration methods for multi-site meta-analysis of brain morphometry

    Science.gov (United States)

    Faskowitz, Joshua; de Zubicaray, Greig I.; McMahon, Katie L.; Wright, Margaret J.; Thompson, Paul M.; Jahanshad, Neda

    2016-03-01

    Neuroimaging consortia such as ENIGMA can significantly improve power to discover factors that affect the human brain by pooling statistical inferences across cohorts to draw generalized conclusions from populations around the world. Voxelwise analyses such as tensor-based morphometry also allow an unbiased search for effects throughout the brain. Even so, such consortium-based analyses are limited by a lack of high-powered methods to harmonize voxelwise information across study populations and scanners. While the simplest approach may be to map all images to a single standard space, the benefits of cohort-specific templates have long been established. Here we studied methods to pool voxel-wise data across sites using templates customized for each cohort but providing a meaningful common space across all studies for voxelwise comparisons. As non-linear 3D MRI registrations represent mappings between images at millimeter resolution, we need to consider the reliability of these mappings. To evaluate these mappings, we calculated test-retest statistics on the volumetric maps of expansion and contraction. Further, we created study-specific brain templates for ten T1-weighted MRI datasets, and a common space from four study-specific templates. We evaluated the efficacy of using a two-step registration framework versus a single standard space. We found that the two-step framework more reliably mapped subjects to a common space.

  11. COMPARISON OF ULTRASOUND IMAGE FILTERING METHODS BY MEANS OF MULTIVARIABLE KURTOSIS

    Directory of Open Access Journals (Sweden)

    Mariusz Nieniewski

    2017-06-01

    Full Text Available Comparison of the quality of despeckled ultrasound (US) medical images is complicated because there is no image of a human body that is free of speckles and could serve as a reference. A number of image metrics are currently used for comparing filtering methods; however, they do not satisfactorily represent the visual quality of images or a medical expert's satisfaction with them. This paper proposes an innovative use of relative multivariate kurtosis for the evaluation of the most important edges in an image. Multivariate kurtosis allows one to introduce an order among the filtered images and can be used as one of the metrics for image quality evaluation. At present there is no method that jointly considers the individual metrics. Furthermore, these metrics are typically defined by comparing the noisy original and filtered images, which is problematic since the noisy original cannot serve as a gold standard. In contrast, the proposed kurtosis is an absolute measure, calculated independently of any reference image, and it agrees with the medical expert's satisfaction to a large extent. The paper presents a numerical procedure for calculating kurtosis and describes the results of such calculations for a computer-generated noisy image, images of a general-purpose phantom and a cyst phantom, as well as real-life images of the thyroid and carotid artery obtained with a SonixTouch ultrasound machine. Sixteen different image despeckling methods are compared via kurtosis. The paper shows that visually more satisfactory despeckling results are associated with higher kurtosis, and that to a certain degree kurtosis can be used as a single metric for evaluation of image quality.
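The flavor of the metric can be conveyed with a univariate stand-in: Pearson kurtosis of pixel intensities in a crisp versus a blurred edge patch (toy data; the paper's actual metric is a multivariate kurtosis computed on filtered images):

```python
def kurtosis(values):
    """Pearson kurtosis m4 / m2^2 (a normal distribution gives ~3)."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    return m4 / (m2 * m2)

# One-dimensional "image rows": a crisp edge versus a smoothed-out one.
sharp_edge = [0, 0, 0, 0, 100, 100, 0, 0, 0, 0]
blurred    = [0, 10, 30, 60, 90, 90, 60, 30, 10, 0]
k_sharp = kurtosis(sharp_edge)
k_blur = kurtosis(blurred)
```

The crisp transition yields a more heavy-tailed intensity distribution and hence higher kurtosis, mirroring the paper's observation that better-preserved edges go with larger kurtosis values.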

  12. Methods for cost management during product development: A review and comparison of different literatures

    NARCIS (Netherlands)

    Wouters, M.; Morales, S.; Grollmuss, S.; Scheer, M.

    2016-01-01

    Purpose The paper provides an overview of research published in the innovation and operations management (IOM) literature on 15 methods for cost management in new product development, and it provides a comparison to an earlier review of the management accounting (MA) literature (Wouters & Morales,

  13. Comparison of quantification methods for the analysis of polychlorinated alkanes using electron capture negative ionization mass spectrometry.

    NARCIS (Netherlands)

    Rusina, T.; Korytar, P.; de Boer, J.

    2011-01-01

    Four quantification methods for short-chain chlorinated paraffins (SCCPs) or polychlorinated alkanes (PCAs) using gas chromatography electron capture negative ionisation low resolution mass spectrometry (GC-ECNI-LRMS) were investigated. The method based on visual comparison of congener group

  14. Comparison of quantification methods for the analysis of polychlorinated alkanes using electron capture negative ionisation mass spectrometry

    NARCIS (Netherlands)

    Rusina, T.; Korytar, P.; Boer, de J.

    2011-01-01

    Four quantification methods for short-chain chlorinated paraffins (SCCPs) or polychlorinated alkanes (PCAs) using gas chromatography electron capture negative ionisation low resolution mass spectrometry (GC-ECNI-LRMS) were investigated. The method based on visual comparison of congener group

  15. Comparison of soil CO2 fluxes by eddy-covariance and chamber methods in fallow periods of a corn-soybean rotation

    Science.gov (United States)

    Soil carbon dioxide (CO2) fluxes are typically measured by eddy-covariance (EC) or chamber (Ch) methods, but a long-term comparison has not been undertaken. This study was conducted to assess the agreement between EC and Ch techniques when measuring CO2 flux during fallow periods of a corn-soybean r...

  16. Comparison of two inductive learning methods: A case study in failed fuel identification

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J. [Argonne National Lab., IL (United States); Lee, J.C. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering

    1992-05-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure.
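The shared core of ID3 and Rg, information entropy as a discriminatory measure, can be sketched as an information-gain computation over a toy attribute/class table (values invented; the real knowledge base is built from EBR-II failed fuel data):

```python
import math

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# (attribute_value, class) pairs, e.g. a binned sensor reading vs fuel state.
data = [("high", "failed"), ("high", "failed"), ("high", "intact"),
        ("low", "intact"), ("low", "intact"), ("low", "intact")]

base = entropy([c for _, c in data])      # entropy before splitting
gain = base                                # information gain of the attribute
for value in {v for v, _ in data}:
    subset = [c for v, c in data if v == value]
    gain -= (len(subset) / len(data)) * entropy(subset)
```

ID3 picks the attribute with the largest gain at each decision-tree node; Rg uses the same entropy measure to group objects of a common class into clusters.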

  17. Comparison of two inductive learning methods: A case study in failed fuel identification

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J. (Argonne National Lab., IL (United States)); Lee, J.C. (Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering)

    1992-01-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure.

  18. Comparison of two inductive learning methods: A case study in failed fuel identification

    International Nuclear Information System (INIS)

    Reifman, J.; Lee, J.C.

    1992-01-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure

  19. From a tree to a stand in Finnish boreal forests: biomass estimation and comparison of methods

    OpenAIRE

    Liu, Chunjiang

    2009-01-01

    There is an increasing need to compare the results obtained with different methods of estimation of tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)-Scots pine stand (Mature-Stand) located in southern Finland (61º50' N, 24º22' E). In particular, a comparison of the results of different estimati...

  20. How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.

    Science.gov (United States)

    Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo

    2016-01-01

    To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
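The conversion of ranked data into PC data amounts to expanding each ranking into its implied pairwise preferences and scoring agreement with the directly elicited pairs. A toy sketch with four alternatives:

```python
from itertools import combinations

def ranking_to_pairs(ranking):
    """A ranking [best, ..., worst] implies every 'earlier beats later' pair."""
    return {(a, b) for a, b in combinations(ranking, 2)}

# One respondent's ranking of four (hypothetical) health states...
ranked = ["A", "C", "B", "D"]
# ...and their directly elicited paired comparisons (winner listed first).
direct_pc = {("A", "C"), ("A", "B"), ("A", "D"),
             ("C", "B"), ("D", "B"), ("C", "D")}

implied = ranking_to_pairs(ranked)
consistency_rate = len(implied & direct_pc) / len(implied)
```

Here one of the six implied pairs contradicts the direct elicitation, so the consistency rate is 5/6; the study reports an overall rate of 82.8% across respondents.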

  1. THE FORMATION OF A MILKY WAY-SIZED DISK GALAXY. I. A COMPARISON OF NUMERICAL METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Qirong; Li, Yuexing, E-mail: qxz125@psu.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States)

    2016-11-01

    The long-standing challenge of creating a Milky Way- (MW-) like disk galaxy from cosmological simulations has motivated significant developments in both numerical methods and physical models. We investigate these two fundamental aspects in a new comparison project using a set of cosmological hydrodynamic simulations of an MW-sized galaxy. In this study, we focus on the comparison of two particle-based hydrodynamics methods: an improved smoothed particle hydrodynamics (SPH) code Gadget, and a Lagrangian Meshless Finite-Mass (MFM) code Gizmo. All the simulations in this paper use the same initial conditions and physical models, which include star formation, “energy-driven” outflows, metal-dependent cooling, stellar evolution, and metal enrichment. We find that both numerical schemes produce a late-type galaxy with extended gaseous and stellar disks. However, notable differences are present in a wide range of galaxy properties and their evolution, including star-formation history, gas content, disk structure, and kinematics. Compared to Gizmo, the Gadget simulation produced a larger fraction of cold, dense gas at high redshift which fuels rapid star formation and results in a higher stellar mass by 20% and a lower gas fraction by 10% at z = 0, and the resulting gas disk is smoother and more coherent in rotation due to damping of turbulent motion by the numerical viscosity in SPH, in contrast to the Gizmo simulation, which shows a more prominent spiral structure. Given its better convergence properties and lower computational cost, we argue that the MFM method is a promising alternative to SPH in cosmological hydrodynamic simulations.

  2. Comparison of Methods for Computing the Exchange Energy of quantum helium and hydrogen

    International Nuclear Information System (INIS)

    Cayao, J. L. C. D.

    2009-01-01

    I investigate approximation methods for finding the exchange energy of quantum helium and hydrogen, focusing on the Heitler-London, Hund-Mulliken, molecular orbital, and variational approaches. I use Fock-Darwin states centered at the potential minima as the single-electron wavefunctions, and from these build Slater determinants as the basis for the two-electron problem. I compare the methods for the two-electron double dot (quantum hydrogen) and the two-electron single dot (quantum helium) in zero and finite magnetic field. I show that the variational, Hund-Mulliken and Heitler-London methods agree with the exact solutions. I also show that the Heitler-London (HL) method is an excellent approximation to the exchange energy at large inter-dot distances, while for a single dot in a magnetic field the variational method is an excellent approximation. (author)

  3. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    Science.gov (United States)

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods but different geologic models), between results from structural and stratigraphic assessment units in the North Sea used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered), (2) the population of fields being estimated; that is, the entire parent distribution or the undiscovered resource distribution, (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth, (5) deterministic or probabilistic models, (6) data requirements, and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. 
In less mature areas, the linear fractal method yielded larger estimates relative to other methods

  4. Comparison of immersion density and improved microstructural characterization methods for measuring swelling in small irradiated disk specimens

    International Nuclear Information System (INIS)

    Sawai, T.; Suzuki, M.; Hishinuma, A.; Maziasz, P.J.

    1992-01-01

    The procedure of obtaining microstructural data from reactor-irradiated specimens has been carefully checked for accuracy by comparison of swelling data obtained from transmission electron microscopy (TEM) observations of cavities with density-change data measured using the Precision Densitometer at Oak Ridge National Laboratory (ORNL). Comparison of data measured by both methods on duplicate or, in some cases, on the same specimen has shown some appreciable discrepancies for US/Japan collaborative experiments irradiated in the High Flux Isotope Reactor (HFIR). The contamination spot separation (CSS) method was used in the past to determine the thickness of a TEM foil. Recent work has revealed an appreciable error in this method that can result in an overestimation of the foil thickness. This error causes lower swelling values to be measured by TEM microstructural observation relative to the Precision Densitometer. An improved method is proposed for determining the foil thickness by the CSS method, which includes a correction for the systematic overestimation of foil thickness. (orig.)

  5. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in their form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form, the midpoint displacement simulations showed a relatively flat surface with peaks of differing heights as the fractal dimension increases, whereas the Weierstrass-Mandelbrot simulations showed a rough surface with dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method at the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness takes both positive and negative values, fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the midpoint displacement simulation is not periodic and is prominently random, which makes it suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has
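The two generators compared in the study can be sketched in one dimension as follows (illustrative parameters; the paper works with surfaces as well as profiles):

```python
import math
import random

def midpoint_displacement(levels, hurst, seed=1):
    """Recursive midpoint displacement: halve each segment and perturb the
    midpoint by Gaussian noise whose scale shrinks by 2^(-hurst) per level."""
    rng = random.Random(seed)
    pts = [0.0, 0.0]
    scale = 1.0
    for _ in range(levels):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            nxt += [a, (a + b) / 2 + rng.gauss(0.0, scale)]
        nxt.append(pts[-1])
        pts = nxt
        scale *= 2 ** (-hurst)
    return pts

def weierstrass_mandelbrot(n_points, dim, gamma=1.5, n_terms=30):
    """Truncated Weierstrass-Mandelbrot cosine sum for a 1-D fractal profile."""
    h = 2.0 - dim  # Hurst exponent corresponding to fractal dimension `dim`
    xs = [i / n_points for i in range(n_points)]
    return [sum(gamma ** (-h * k) * math.cos(2 * math.pi * gamma ** k * x)
                for k in range(n_terms)) for x in xs]

md = midpoint_displacement(levels=8, hurst=0.5)          # 257 points
wm = weierstrass_mandelbrot(n_points=257, dim=1.5)       # 257 points
mean_md = sum(md) / len(md)
var_md = sum((v - mean_md) ** 2 for v in md) / len(md)
mean_wm = sum(wm) / len(wm)
var_wm = sum((v - mean_wm) ** 2 for v in wm) / len(wm)
```

Comparing `var_md` and `var_wm` (and higher moments such as skewness and kurtosis) over a range of fractal dimensions reproduces the kind of statistical comparison the paper carries out.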

  6. Comparison of three different prehospital wrapping methods for preventing hypothermia - a crossover study in humans

    Directory of Open Access Journals (Sweden)

    Zakariassen Erik

    2011-06-01

    Full Text Available Abstract Background: Accidental hypothermia increases mortality and morbidity in trauma patients. Various methods for insulating and wrapping hypothermic patients are used worldwide. The aim of this study was to compare the thermal insulating effects and comfort of bubble wrap, ambulance blankets/quilts, and Hibler's method, a low-cost method combining a plastic outer layer with an insulating layer. Methods: Eight volunteers were dressed in moistened clothing, exposed to a cold and windy environment, then wrapped using one of the three insulation methods in random order on three different days. They rested quietly on their backs for 60 minutes in a cold climatic chamber. Skin temperature, rectal temperature, and oxygen consumption were measured, and metabolic heat production was calculated. A questionnaire was used for a subjective evaluation of comfort, thermal sensation, and shivering. Results: Skin temperature was significantly higher 15 minutes after wrapping with Hibler's method than with ambulance blankets/quilts or bubble wrap. There were no differences in core temperature between the three insulating methods. The subjects reported more shivering, felt colder, were more uncomfortable, and had a higher heat production with bubble wrap than with the other two methods. Hibler's method was the volunteers' preferred method for preventing hypothermia. Bubble wrap was the least effective insulating method and seemed to require significantly higher heat production to compensate for increased heat loss. Conclusions: This study demonstrated that a combination of a vapour-tight layer and an additional dry insulating layer (Hibler's method) is the most efficient wrapping method to prevent heat loss, as shown by increased skin temperatures, lower metabolic rate and better thermal comfort. This should then be the method of choice when wrapping a wet patient at risk of developing hypothermia in prehospital

  7. Comparison of DNA extraction methods for detection of citrus huanglongbing in Colombia

    Directory of Open Access Journals (Sweden)

    Jorge Evelio Ángel

    2014-04-01

    Full Text Available Four DNA extraction protocols for citrus plant tissue and three methods of DNA extraction from the vector psyllid Diaphorina citri Kuwayama (Hemiptera: Psyllidae) were compared as part of the validation and standardization process for detection of huanglongbing (HLB). The comparison used several criteria, such as DNA integrity, purity and concentration. For DNA extraction from the midrib tissue of citrus plants, the best quality parameters were obtained with the protocols of Murray and Thompson (1980) and Rodríguez et al. (2010), while for DNA extraction from the psyllid vectors of HLB, the best method was that of Manjunath et al. (2008).

  8. Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.

    Science.gov (United States)

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished within the participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time at IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracing. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. Its results are 2.1% higher than those obtained with the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracing method was a standard (60)Co solution. The sources were prepared from a mixed (60)Co + (99)Tc solution, and a general extrapolation curve of the type N_β((99)Tc)/M((99)Tc) = f[1 - ε((60)Co)] was drawn. This value was not used for the final result of the comparison. The difference between the activity concentrations obtained by the two methods was within the combined standard uncertainty of the difference of the two results. © 2013 Published by Elsevier Ltd.

  9. Quantitative methods for reconstructing tissue biomechanical properties in optical coherence elastography: a comparison study

    International Nuclear Information System (INIS)

    Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Larin, Kirill V; Aglyamov, Salavat R; Twa, Michael D

    2015-01-01

    We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique, which allows assessment of the biomechanical properties of tissues with micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of the available methods for reconstructing elasticity (Young's modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods. (paper)
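    The wave-equation approaches named in this abstract can be illustrated with a short sketch. This is a minimal, hypothetical example assuming the standard isotropic small-strain relations (E = 2ρ(1+ν)c_s² for a bulk shear wave, and the common Rayleigh-speed approximation c_R ≈ c_s(0.87 + 1.12ν)/(1 + ν) for a surface wave); it is not the authors' implementation, and the phantom parameters below are invented:

```python
def youngs_modulus_shear(rho, nu, c_s):
    """Young's modulus from a bulk shear wave speed: E = 2*rho*(1+nu)*c_s**2."""
    return 2.0 * rho * (1.0 + nu) * c_s ** 2

def youngs_modulus_surface(rho, nu, c_r):
    """Young's modulus from a surface (Rayleigh) wave speed, using the
    approximation c_r ~= c_s * (0.87 + 1.12*nu) / (1 + nu) to recover c_s."""
    c_s = c_r * (1.0 + nu) / (0.87 + 1.12 * nu)
    return youngs_modulus_shear(rho, nu, c_s)

# Hypothetical tissue-mimicking phantom: density ~1000 kg/m^3, nearly
# incompressible (nu = 0.49), measured wave speed 2 m/s.
rho, nu = 1000.0, 0.49
E_swe = youngs_modulus_shear(rho, nu, 2.0)     # treat 2 m/s as a shear wave speed
E_suwe = youngs_modulus_surface(rho, nu, 2.0)  # treat 2 m/s as a surface wave speed
```

    For the same measured speed, the surface-wave model yields a somewhat larger modulus, which is one reason model choice matters when reconstructing elasticity from OCE data.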

  10. Comparison of the auxiliary function method and the discrete-ordinate method for solving the radiative transfer equation for light scattering.

    Science.gov (United States)

    da Silva, Anabela; Elias, Mady; Andraud, Christine; Lafait, Jacques

    2003-12-01

    Two methods for solving the radiative transfer equation are compared with the aim of computing the angular distribution of the light scattered by a heterogeneous scattering medium composed of a single flat layer or a multilayer. The first method [auxiliary function method (AFM)], recently developed, uses an auxiliary function and leads to an exact solution; the second [discrete-ordinate method (DOM)] is based on the channel concept and needs an angular discretization. The comparison is applied to two different media presenting two typical and extreme scattering behaviors: Rayleigh and Mie scattering with smooth or very anisotropic phase functions, respectively. A very good agreement between the predictions of the two methods is observed in both cases. The larger the number of channels used in the DOM, the better the agreement. The principal advantages and limitations of each method are also listed.

  11. A comparison of ancestral state reconstruction methods for quantitative characters.

    Science.gov (United States)

    Royer-Carenzi, Manuela; Didier, Gilles

    2016-09-07

    Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.
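    As a toy illustration of why the maximum likelihood, restricted maximum likelihood and generalized least squares methods coincide under a Brownian model, here is a minimal sketch (not from the paper) for the simplest case, a star phylogeny, where the ML root-state estimate reduces to an inverse-branch-length weighted mean of the leaf states:

```python
def bm_root_ml(leaf_states, branch_lengths):
    """ML estimate of the root state of a star phylogeny under Brownian motion.

    Each leaf value x_i ~ Normal(root, sigma^2 * t_i), with t_i the branch
    length, so the MLE is the inverse-variance weighted mean with weights 1/t_i.
    """
    weights = [1.0 / t for t in branch_lengths]
    return sum(w * x for w, x in zip(weights, leaf_states)) / sum(weights)

# Two leaves at equal distance: the root estimate is the plain mean;
# a leaf on a shorter branch pulls the estimate toward itself.
root_equal = bm_root_ml([0.0, 4.0], [1.0, 1.0])
root_skew = bm_root_ml([0.0, 4.0], [1.0, 3.0])
```

    Real trees require propagating these weighted means through internal nodes (and trend or Ornstein-Uhlenbeck terms change the estimator), but the inverse-branch-length weighting is the common core.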

  12. Comparative study on γ-ray spectrum by several filtering method

    International Nuclear Information System (INIS)

    Yuan Xinyu; Liu Liangjun; Zhou Jianliang

    2011-01-01

    A comparative study was conducted on gamma-ray spectra processed with several commonly used smoothing methods in order to assess their filtering effects. The results showed that energy-domain filtering widened the peaks and increased peak overlap in the γ-ray spectrum, so in the frequency domain the filter and its parameters must be chosen with care. Wavelet transformation preserves the high-frequency components of the signal well. An improved threshold method combined the advantages of the hard and soft threshold methods, which makes it suitable for the detection of weak peaks. A new filter based on the gravity (centre-of-mass) model approach was also put forward, with its denoising level assessed by the standard deviation. This method not only preserved the signal and the net peak areas well, but also attained better results with a simple computer program. (authors)

  13. Data-driven intensity normalization of PET group comparison studies is superior to global mean normalization

    DEFF Research Database (Denmark)

    Borghammer, Per; Aanerud, Joel; Gjedde, Albert

    2009-01-01

    BACKGROUND: Global mean (GM) normalization is one of the most commonly used methods of normalization in PET and SPECT group comparison studies of neurodegenerative disorders. It requires that no between-group GM difference is present, which may be strongly violated in neurodegenerative disorders....... Importantly, such GM differences often elude detection due to the large intrinsic variance in absolute values of cerebral blood flow or glucose consumption. Alternative methods of normalization are needed for this type of data. MATERIALS AND METHODS: Two types of simulation were performed using CBF images...

  14. Regional cerebral blood flow measurements by a noninvasive microsphere method using 123I-IMP. Comparison with the modified fractional uptake method and the continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Nakano, Seigo; Matsuda, Hiroshi; Tanizaki, Hiroshi; Ogawa, Masafumi; Miyazaki, Yoshiharu; Yonekura, Yoshiharu

    1998-01-01

    A noninvasive microsphere method using N-isopropyl-p-(123I)iodoamphetamine ((123I)-IMP), developed by Yonekura et al., was performed in 10 patients with neurological diseases to quantify regional cerebral blood flow (rCBF). Regional CBF values obtained by this method were compared with rCBF values simultaneously estimated with both the modified fractional uptake (FU) method using cardiac output, developed by Miyazaki et al., and the conventional method with continuous arterial blood sampling. For the comparison, we designated the factor converting raw SPECT voxel counts to rCBF values as the CBF factor. A highly significant correlation (r=0.962, p<0.001) was obtained between the CBF factors of the present method and the continuous arterial blood sampling method; the CBF factors of the present method were on average only 2.7% higher. There were also significant correlations (r=0.811 and r=0.798, p<0.001) between the CBF factors of the modified FU method (with thresholds of 10% and 30%, respectively, for estimating total brain SPECT counts) and the continuous arterial blood sampling method. However, the CBF factors of the modified FU method were on average 31.4% and 62.3% higher (thresholds of 10% and 30%, respectively) than those of the continuous arterial blood sampling method. In conclusion, this newly developed method for rCBF measurement was considered useful for routine clinical studies without any blood sampling. (author)

  15. Consensus building for interlaboratory studies, key comparisons, and meta-analysis

    Science.gov (United States)

    Koepke, Amanda; Lafarge, Thomas; Possolo, Antonio; Toman, Blaza

    2017-06-01

    Interlaboratory studies in measurement science, including key comparisons, and meta-analyses in several fields, including medicine, serve to intercompare measurement results obtained independently, and typically produce a consensus value for the common measurand that blends the values measured by the participants. Since interlaboratory studies and meta-analyses reveal and quantify differences between measured values, regardless of the underlying causes for such differences, they also provide so-called ‘top-down’ evaluations of measurement uncertainty. Measured values are often substantially over-dispersed by comparison with their individual, stated uncertainties, thus suggesting the existence of yet unrecognized sources of uncertainty (dark uncertainty). We contrast two different approaches to take dark uncertainty into account both in the computation of consensus values and in the evaluation of the associated uncertainty, which have traditionally been preferred by different scientific communities. One inflates the stated uncertainties by a multiplicative factor. The other adds laboratory-specific ‘effects’ to the value of the measurand. After distinguishing what we call recipe-based and model-based approaches to data reductions in interlaboratory studies, we state six guiding principles that should inform such reductions. These principles favor model-based approaches that expose and facilitate the critical assessment of validating assumptions, and give preeminence to substantive criteria to determine which measurement results to include, and which to exclude, as opposed to purely statistical considerations, and also how to weigh them. Following an overview of maximum likelihood methods, three general purpose procedures for data reduction are described in detail, including explanations of how the consensus value and degrees of equivalence are computed, and the associated uncertainty evaluated: the DerSimonian-Laird procedure; a hierarchical Bayesian
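    Since the abstract names the DerSimonian-Laird procedure, a compact sketch of its standard moment-estimator form may help. This is the textbook algorithm (inflating within-laboratory variances by an additive between-laboratory variance τ², the "dark uncertainty"), not any institute's specific implementation, and the numbers in the usage lines are made up:

```python
def dersimonian_laird(estimates, variances):
    """Random-effects consensus value via the DerSimonian-Laird procedure.

    Returns (consensus, standard uncertainty, tau2), where tau2 is the
    moment estimate of the between-laboratory variance that models
    over-dispersion beyond the stated uncertainties.
    """
    k = len(estimates)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, estimates)) / sw          # fixed-effect mean
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, estimates))    # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                                # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]                      # random-effects weights
    consensus = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    std_unc = (1.0 / sum(w_re)) ** 0.5
    return consensus, std_unc, tau2

# Hypothetical key-comparison data: consistent results give tau2 = 0,
# over-dispersed results inflate both tau2 and the consensus uncertainty.
consistent = dersimonian_laird([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
dispersed = dersimonian_laird([0.0, 10.0], [1.0, 1.0])
```

    When τ² = 0 the procedure collapses to the ordinary inverse-variance weighted mean; large τ² pushes the weights toward equality, which is exactly the behaviour the "laboratory effects" model described above is meant to capture.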

  16. Analysis of radioactive corrosion test specimens by means of ICP-MS. Comparison with earlier methods

    International Nuclear Information System (INIS)

    Forsyth, Roy

    1997-07-01

    In June 1992, an ICP-MS instrument (Inductively Coupled Plasma-Mass Spectrometry) was commissioned for use with radioactive sample solutions at Studsvik Nuclear's Hot Cell Laboratory. For conventional environmental samples the instrument permits the simultaneous analysis of many trace elements, but the software used in the evaluation of the mass spectra is based on a library of isotopic compositions relevant only for elements in the lithosphere. Fission products and actinides, however, have isotopic compositions which differ significantly from the natural elements and which also vary with the burnup of the nuclear fuel specimen. Consequently, a spreadsheet had to be developed which could evaluate the mass spectra with these isotopic compositions. Following these preparations, a large number of samples (about 200) from SKB's experimental programme for the study of spent fuel corrosion were analyzed by the ICP-MS technique. Many of these samples were archive solutions of samples which had been taken earlier in the programme. This report presents a comparison of the analytical results for uranium, plutonium, cesium, strontium and technetium obtained by both the ICP-MS technique and the previously used analytical methods. For the three fission products (cesium, strontium and technetium), satisfactory agreement between the results from the various methods was obtained, but for uranium and plutonium the ICP-MS method gave results which were 10-20% higher than the conventional methods. The comparison programme has also shown, not unexpectedly, that significant losses of plutonium from solution had occurred, by precipitation and/or absorption, in the archive solutions during storage. Such losses can be expected to occur for the other actinides as well, and consequently all the analytical results for actinides in older archive solutions must be treated with great caution. 9 refs

  17. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search are determined not only by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
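    The Monte-Carlo Z-score mentioned here scores an alignment against a null distribution built from shuffled sequences. A minimal sketch of the idea, with a deliberately toy scorer (best ungapped match count over all shifts) standing in for a real Smith-Waterman score; the sequences and parameters are invented:

```python
import random

def best_diagonal_score(a, b):
    """Toy ungapped similarity: the best number of character matches over
    all relative shifts of b against a (a stand-in for Smith-Waterman)."""
    best = 0
    for shift in range(-len(b) + 1, len(a)):
        matches = sum(1 for i in range(len(a))
                      if 0 <= i - shift < len(b) and a[i] == b[i - shift])
        best = max(best, matches)
    return best

def monte_carlo_z(query, subject, n_shuffles=50, seed=1):
    """Z-score of the real alignment score against scores of shuffled
    subjects; shuffling preserves residue composition, as in Monte-Carlo
    significance testing."""
    rng = random.Random(seed)
    s0 = best_diagonal_score(query, subject)
    scores = []
    for _ in range(n_shuffles):
        letters = list(subject)
        rng.shuffle(letters)
        scores.append(best_diagonal_score(query, "".join(letters)))
    mean = sum(scores) / n_shuffles
    var = sum((s - mean) ** 2 for s in scores) / n_shuffles
    return (s0 - mean) / (var ** 0.5 + 1e-12)  # epsilon guards a zero variance
```

    The expense the abstract alludes to is visible even here: every query-subject pair costs n_shuffles extra alignments, whereas an e-value comes from a fitted extreme-value distribution at essentially no extra cost.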

  18. Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods

    CSIR Research Space (South Africa)

    Snedden, Glen C

    2003-09-01

    Full Text Available ISABE-2005-1130: Glen Snedden, Thomas Roos and Kavendra Naidoo, CSIR, Defencetek. Nomenclature: heat transfer and conduction code (Gaugler, 1978); Taw, adiabatic wall temperature; y+, near-wall Reynolds number. Introduction: In order to calculate the life degradation of gas turbine disc assemblies, it is necessary to model the transient thermal and mechanical...

  19. A comprehensive comparison of RNA-Seq-based transcriptome analysis from reads to differential gene expression and cross-comparison with microarrays: a case study in Saccharomyces cerevisiae

    DEFF Research Database (Denmark)

    Nookaew, Intawat; Papini, Marta; Pornputtapong, Natapol

    2012-01-01

    RNA-seq has recently become an attractive method of choice in the study of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the I... The gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation, are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microarrays...

  20. Mapping biomass with remote sensing: a comparison of methods for the case study of Uganda

    Directory of Open Access Journals (Sweden)

    Henry Matieu

    2011-10-01

    Full Text Available Abstract Background Assessing biomass is gaining increasing interest mainly for bioenergy, climate change research and mitigation activities, such as reducing emissions from deforestation and forest degradation and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries (REDD+. In response to these needs, a number of biomass/carbon maps have been recently produced using different approaches but the lack of comparable reference data limits their proper validation. The objectives of this study are to compare the available maps for Uganda and to understand the sources of variability in the estimation. Uganda was chosen as a case-study because it presents a reliable national biomass reference dataset. Results The comparison of the biomass/carbon maps show strong disagreement between the products, with estimates of total aboveground biomass of Uganda ranging from 343 to 2201 Tg and different spatial distribution patterns. Compared to the reference map based on country-specific field data and a national Land Cover (LC dataset (estimating 468 Tg, maps based on biome-average biomass values, such as the Intergovernmental Panel on Climate Change (IPCC default values, and global LC datasets tend to strongly overestimate biomass availability of Uganda (ranging from 578 to 2201 Tg, while maps based on satellite data and regression models provide conservative estimates (ranging from 343 to 443 Tg. The comparison of the maps predictions with field data, upscaled to map resolution using LC data, is in accordance with the above findings. This study also demonstrates that the biomass estimates are primarily driven by the biomass reference data while the type of spatial maps used for their stratification has a smaller, but not negligible, impact. The differences in format, resolution and biomass definition used by the maps, as well as the fact that some datasets are not independent from the

  1. A comparison of land use change accounting methods: seeking common grounds for key modeling choices in biofuel assessments

    DEFF Research Database (Denmark)

    de Bikuna Salinas, Koldo Saez; Hamelin, Lorie; Hauschild, Michael Zwicky

    2018-01-01

    Five currently used methods to account for the global warming (GW) impact of induced land-use change (LUC) greenhouse gas (GHG) emissions have been applied to four biofuel case studies. Two of the investigated methods attempt to avoid the need of considering a definite occupation, and thus amortization, period by considering ongoing LUC trends as a dynamic baseline. This leads to the accounting of only a small fraction (0.8%) of the emissions from the assessed LUC, and thus their validity is disputed. The comparison of methods and contrasting case studies illustrated the need of clearly distinguishing between the different time horizons involved in life cycle assessments (LCA) of land-demanding products like biofuels. Absent in ISO standards, and giving rise to several confusions, definitions for the following time horizons have been proposed: technological scope, inventory model, impact...

  2. Comparison of measurement methods for benzene and toluene

    Science.gov (United States)

    Wideqvist, U.; Vesely, V.; Johansson, C.; Potter, A.; Brorström-Lundén, E.; Sjöberg, K.; Jonsson, T.

    Diffusive sampling and active (pumped) sampling (tubes filled with Tenax TA or Carbopack B) were compared with an automatic BTX instrument (Chrompack, GC/FID) for measurements of benzene and toluene. The measurements were made during differing pollution levels and different weather conditions at a roof-top site and in a densely trafficked street canyon in Stockholm, Sweden. The BTX instrument was used as the reference method for comparison with the other methods. Considering all data, the Perkin-Elmer diffusive samplers, containing Tenax TA and assuming a constant uptake rate of 0.406 cm3 min-1, showed about 30% higher benzene values compared to the BTX instrument. This discrepancy may be explained by a dose-dependent uptake rate with higher uptake rates at lower dose, as suggested by laboratory experiments presented in the literature. After correction by applying the relationship between uptake rate and dose suggested by Roche et al. (Atmos. Environ. 33 (1999) 1905), the two methods agreed almost perfectly. For toluene there was much better agreement between the two methods, and no sign of a dose-dependent uptake could be seen. The mean concentrations and 95% confidence intervals of all toluene measurements (67 values) were (10.8±1.6) μg m-3 for diffusive sampling and (11.3±1.6) μg m-3 for the BTX instrument, respectively. The overall ratio between the concentrations obtained using diffusive sampling and the BTX instrument was 0.91±0.07 (95% confidence interval). Tenax TA was found to be equal to Carbopack B for measuring benzene and toluene in this concentration range, although it has been proposed not to be optimal for benzene. There was also good agreement between the active samplers and the BTX instrument.
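    For diffusive samplers, the time-weighted average concentration follows directly from the uptake-rate definition, C = m / (U · t). A minimal sketch; the 0.406 cm³ min⁻¹ rate is the constant value quoted above, while the sampled mass and exposure time are hypothetical, and the dose-dependent correction of Roche et al. is deliberately not reproduced here:

```python
def diffusive_conc_ug_m3(mass_ng, uptake_cm3_min, minutes):
    """Time-weighted average concentration from a diffusive sampler:
    C = m / (U * t); the factor 1000 converts ng/cm^3 to ug/m^3."""
    return mass_ng / (uptake_cm3_min * minutes) * 1000.0

# Hypothetical one-week benzene exposure at the constant uptake rate
# of 0.406 cm^3/min assumed in the comparison above.
c_benzene = diffusive_conc_ug_m3(mass_ng=12.3, uptake_cm3_min=0.406,
                                 minutes=7 * 24 * 60)
```

    A dose-dependent uptake rate would replace the constant 0.406 with a function of the accumulated dose, which is why the uncorrected diffusive values in the study ran about 30% high at low doses.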

  3. Rapid descriptive sensory methodsComparison of Free Multiple Sorting, Partial Napping, Napping, Flash Profiling and conventional profiling

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Meinert, Lene

    2012-01-01

    Two new rapid descriptive sensory evaluation methods are introduced to the field of food sensory evaluation. The first method, free multiple sorting, allows subjects to perform ad libitum free sortings, until they feel that no more relevant dissimilarities among products remain. The second method, partial Napping, is a modal restriction of Napping to specific sensory modalities, directing sensation and still allowing a holistic approach to products. The partial Napping allows repetitions on multiple sensory modalities, e.g. appearance, taste and mouthfeel, and shows the average... The new methods are compared to Flash Profiling, Napping and conventional descriptive sensory profiling. Evaluations are performed by several panels of expert assessors... are applied for the graphical validation and comparisons. This allows similar comparisons and is applicable to single-block evaluation designs such as Napping.

  4. System Accuracy Evaluation of Four Systems for Self-Monitoring of Blood Glucose Following ISO 15197 Using a Glucose Oxidase and a Hexokinase-Based Comparison Method.

    Science.gov (United States)

    Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido

    2015-04-14

    The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and in addition ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. More stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems complied with the accuracy criteria of ISO 15197:2003 with the evaluated test strip lots. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on accuracy data obtained for a SMBG system. © 2015 Diabetes Technology Society.
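    The ISO 15197:2013 per-result criterion referred to above is commonly summarized as: within ±15 mg/dL of the comparison method below 100 mg/dL, otherwise within ±15%. The checker below is an illustrative sketch of that summary only; the standard additionally requires at least 95% of results within these limits and has further requirements (e.g. consensus error grid zones) not modelled here:

```python
def meets_iso15197_2013(ref_mg_dl, meas_mg_dl):
    """Single-result accuracy check per the common summary of
    ISO 15197:2013 section 6.3.3: +/-15 mg/dL below 100 mg/dL of
    reference glucose, +/-15% at or above 100 mg/dL."""
    if ref_mg_dl < 100.0:
        return abs(meas_mg_dl - ref_mg_dl) <= 15.0
    return abs(meas_mg_dl - ref_mg_dl) <= 0.15 * ref_mg_dl

def fraction_within(pairs):
    """Fraction of (reference, measured) pairs meeting the criterion;
    system acceptance requires this to be at least 0.95."""
    ok = sum(1 for r, m in pairs if meets_iso15197_2013(r, m))
    return ok / len(pairs)
```

    Note that the outcome depends on the reference value, which is exactly why the choice of comparison method (glucose oxidase vs. hexokinase) can shift a system across the pass/fail boundary.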

  5. Numerical simulation and comparison of two ventilation methods for a restaurant - displacement vs mixed flow ventilation

    Science.gov (United States)

    Chitaru, George; Berville, Charles; Dogeanu, Angel

    2018-02-01

    This paper presents a comparison between a displacement ventilation method and a mixed flow ventilation method using a computational fluid dynamics (CFD) approach. The paper analyses different aspects of the two systems, such as the draft effect in certain areas and the air temperature and velocity distribution in the occupied zone. The results highlighted that the displacement ventilation system presents an advantage for the current scenario, due to the increased buoyancy-driven flows caused by the interior heat sources. For the displacement ventilation case the draft effect was less prone to appear in the occupied zone, but the high heat emissions from the interior sources increased the temperature gradient in the occupied zone. Both systems were studied in similar conditions, concentrating only on the flow patterns for each case.

  6. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    Full Text Available For prediction of future events there exist a number of methods usable in managerial practice. The decision on which of them should be used in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. This paper performs a practical application and subsequent comparison of the results of two selected methods: a statistical method and a deductive method. Both methods were used for predicting wine exports and imports in (from) the Czech Republic. The prediction was made in 2003 and related to the economic years 2003/2004, 2004/2005, 2005/2006 and 2006/2007, for which it was compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized, including what the authors consider the most important influence: the integration of the Czech Republic into the EU on 1 May 2004. On the contrary, the statistical method of time-series analysis did not regard the integration, which follows from its principle: statistics only calculates on the basis of data from the past and cannot incorporate the influence of irregular future conditions such as the EU integration. Because of this, the prediction based on the deductive method was more optimistic and more precise in terms of its difference from the real development in the given field.

  7. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes the improvement and comparison of analytical methods for the simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron... it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of the different pre-concentration techniques proposed previously in the literature, the main challenge behind the relevant method development is pointed out to be the release...

  8. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    Science.gov (United States)

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using the displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the maximum curvature point in the traction versus γ plot as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEFs) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selecting an optimum value of γ for each cell reduced the variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
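    The L2 (Tikhonov) regularization at the heart of Reg-FTTC can be sketched on a toy linear system. This is illustrative only: real FTTC works in Fourier space with the Boussinesq Green's function relating substrate displacements to tractions, whereas the forward operator below is an invented 2x2 matrix:

```python
import numpy as np

def l2_regularized_traction(G, u, gamma):
    """L2 (Tikhonov) regularized solution of G t = u, as in Reg-FTTC:
    minimizes ||G t - u||^2 + gamma^2 ||t||^2 via the normal equations
    (G^T G + gamma^2 I) t = G^T u."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + gamma ** 2 * np.eye(n), G.T @ u)

# Toy forward operator and noise-free "measured" displacements.
G = np.array([[2.0, 1.0], [1.0, 3.0]])
t_true = np.array([1.0, 2.0])
u = G @ t_true
t_rec = l2_regularized_traction(G, u, gamma=1e-6)  # tiny gamma: near-exact recovery
```

    Increasing γ shrinks the solution norm and suppresses noise amplification at the cost of biasing tractions toward zero, which is why an optimal γ* (e.g. at the maximum curvature of the traction-vs-γ plot) must be chosen per dataset.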

  9. Method for a national tariff comparison for natural gas, electricity and heat. Set-up and presentation

    International Nuclear Information System (INIS)

    1998-05-01

    Several groups (within distribution companies and outside them) need information and data on energy tariffs. It is the opinion of the ad-hoc working group that a comparison of tariffs on the basis of standard cases is the most practical method to meet the information demand of all the parties involved. Those standard cases are formulated and presented for prices of electricity, natural gas and heat, including the applied consumption parameters. Such a tariff comparison must be made periodically.

  10. Comparison of the effect of three autogenous bone harvesting methods on cell viability in rabbits

    Science.gov (United States)

    Moradi Haghgoo, Janet; Arabi, Seyed Reza; Hosseinipanah, Seyyed Mohammad; Solgi, Ghasem; Rastegarfard, Neda; Farhadian, Maryam

    2017-01-01

    Background. This study was designed to compare the viability of autogenous bone grafts harvested using different methods, in order to determine the harvesting technique yielding the most viable cells. Methods. In this animal experimental study, three harvesting methods, including a manual instrument (chisel), a rotary device and piezosurgery, were used for harvesting bone grafts from the lateral body of the mandible on the left and right sides of 10 rabbits. In each group, 20 bone samples were collected and their viability was assessed using an MTS kit. Statistical analyses, including ANOVA and post hoc Tukey tests, were used for evaluating significant differences between the groups. Results. One-way ANOVA showed significant differences between all the groups (P=0.000). Data analysis using post hoc Tukey tests indicated that the manual instrument and piezosurgery had no significant difference with regard to cell viability (P=0.749), and the cell viability in both groups was higher than with the rotary instrument (P=0.000). Conclusion. Autogenous bone grafts harvested with a manual instrument or piezosurgery had more viable cells in comparison to the bone chips harvested with a rotary device. PMID:28748046

  11. Modified X-ray method of a study of duodenum

    Energy Technology Data Exchange (ETDEWEB)

    Korolyuk, I.P.; Bugakov, V.M.; Shinkin, V.M.

    A modified X-ray examination of the duodenum under hypotonia is described. In comparison with the existing method, this modification allows the duodenum to be investigated using double contrast: a high-concentration barium suspension and the gas formed after a dose of gas-producing powder. 327 patients have been examined by this method; 126 were diagnosed with inflammatory diseases of the stomach and duodenum, 22 with duodenal peptic ulcer, 107 with pancreatitis, 48 with cholelithiasis, and 24 with tumors of the pancreatoduodenal zone. 65 patients were operated on. Roentgeno-morphologic comparisons were carried out for 66 patients with inflammatory diseases of the duodenum. Duodenal visualization was good or satisfactory in 283 patients. The method may be used under any conditions, including polyclinics, owing to its sparing nature.

  12. Comparison of combinatorial clustering methods on pharmacological data sets represented by machine learning-selected real molecular descriptors.

    Science.gov (United States)

    Rivera-Borroto, Oscar Miguel; Marrero-Ponce, Yovani; García-de la Vega, José Manuel; Grau-Ábalo, Ricardo del Corazón

    2011-12-27

    Cluster algorithms play an important role in diversity-related tasks of modern chemoinformatics, with the widest applications being in pharmaceutical industry drug discovery programs. The performance of these grouping strategies depends on various factors such as molecular representation, mathematical method, algorithmic technique, and the statistical distribution of data. For this reason, introduction and comparison of new methods are necessary in order to find the model that best fits the problem at hand. Earlier comparative studies report Ward's algorithm using fingerprints for molecular description as generally superior in this field. However, problems still remain: other types of numerical descriptions have been little exploited, the current descriptor selection strategy is trial-and-error-driven, and no previous comparative studies considering a broader domain of combinatorial methods for grouping chemoinformatic data sets have been conducted. In this work, a comparison between combinatorial methods is performed, with five of them being novel in cheminformatics. The experiments are carried out using eight data sets that are well established and validated in the medicinal chemistry literature. Each drug data set was represented by real molecular descriptors selected by machine learning techniques, which are consistent with the neighborhood principle. Statistical analysis of the results demonstrates that the pharmacological activities of the eight data sets can be modeled with a few families of 2D and 3D molecular descriptors, avoiding classification problems associated with the presence of nonrelevant features. Three out of five of the proposed cluster algorithms show superior performance over most classical algorithms and are similar (or slightly superior in the most optimistic sense) to Ward's algorithm. The usefulness of these algorithms is also assessed in a comparative experiment to potent QSAR and machine learning classifiers, where they perform
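
    Ward's algorithm, the classical reference method here, can be sketched as a plain agglomerative loop with the Lance-Williams update; the 2-D toy points below stand in for machine-learning-selected molecular descriptors and are not data from the study.

```python
import numpy as np

def ward_cluster(points, k):
    """Agglomerative clustering under Ward's criterion via the Lance-Williams update."""
    clusters = {i: [i] for i in range(len(points))}
    size = {i: 1 for i in clusters}
    # Initial Ward distance between singletons: squared Euclidean distance / 2.
    d = {(i, j): float(np.sum((points[i] - points[j]) ** 2)) / 2.0
         for i in range(len(points)) for j in range(i + 1, len(points))}
    while len(clusters) > k:
        a, b = min(d, key=d.get)          # closest pair under Ward's criterion
        dab = d.pop((a, b))
        na, nb = size[a], size[b]
        for c in clusters:
            if c in (a, b):
                continue
            dac = d.pop(tuple(sorted((a, c))))
            dbc = d.pop(tuple(sorted((b, c))))
            nc = size[c]
            # Lance-Williams recurrence for Ward linkage.
            d[tuple(sorted((a, c)))] = (
                (na + nc) * dac + (nb + nc) * dbc - nc * dab) / (na + nb + nc)
        clusters[a] = clusters.pop(a) + clusters.pop(b)
        size[a] = na + nb
        del size[b]
    return sorted(sorted(members) for members in clusters.values())

# Two well-separated toy "descriptor" blobs should split cleanly into 2 clusters.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
groups = ward_cluster(pts, 2)
```
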

  13. Activity optimization method in nuclear medicine: a comparison with roc analysis

    International Nuclear Information System (INIS)

    Perez Diaz, M.; Diaz Rizo, O.; Lopez, A.; Estevez Aparicio, E.; Roque Diaz, R.

    2006-01-01

    Full text of publication follows: A discriminant method for optimizing the administered activity is validated by comparison with R.O.C. curves. The method is tested in 21 SPECT studies performed with a cardiac phantom. Three different cold lesions (L1, L2 and L3) are placed in the myocardium wall by pairs for each SPECT. Three activities (84 MBq, 37 MBq or 18.5 MBq) of Tc-99m diluted in water are used as background. Discriminant analysis is used to select the parameters that characterize image quality among the variables measured in the tomographic images obtained. They are a group of Lesion-to-Background (L/B) and Signal-to-Noise (S/N) ratios. Two clusters with different image quality (p=0.021) are obtained from the measured variables. The first one contains the studies performed with 37 MBq and 84 MBq, and the second one the studies made with 18.5 MBq. The cluster classification constitutes the dependent variable in the discriminant function. The ratios B/L1, B/L2 and B/L3 are the parameters able to construct the function with 100% of cases correctly classified into the clusters. The value of 37 MBq is the lowest tested activity that gives good results for the L/B variables, without significant differences with respect to 84 MBq (p>0.05). This result is consistent with the R.O.C. analysis, in which 37 MBq gives the highest area under the curve and low false-positive and false-negative rates, with significant differences with respect to 18.5 MBq (p=0.008)

  14. Comparison of different methods for work accidents investigation in hospitals: A Portuguese case study.

    Science.gov (United States)

    Nunes, Cláudia; Santos, Joana; da Silva, Manuela Vieira; Lourenço, Irina; Carvalhais, Carlos

    2015-01-01

    The hospital environment presents many occupational health risks that predispose healthcare workers to various kinds of work accidents. This study aims to compare different methods for work accident investigation and to verify their suitability in the hospital environment. For this purpose, we selected three types of accidents, related to needle sticks, worker falls, and inadequate effort/movement during the mobilization of patients. A total of thirty accidents were analysed with six different work accident investigation methods. The results showed that organizational factors were the group of causes with the greatest impact in the three types of work accidents. The methods compared in this paper are applicable and appropriate for work accident investigation in hospitals. However, the Registration, Research and Analysis of Work Accidents (RIAAT) method proved to be an optimal technique to use in this context.

  15. Comparison of some effects of modification of a polylactide surface layer by chemical, plasma, and laser methods

    Science.gov (United States)

    Moraczewski, Krzysztof; Rytlewski, Piotr; Malinowski, Rafał; Żenkiewicz, Marian

    2015-08-01

    The article presents the results of studies and a comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in an air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with a fluence of 60 mJ/cm2 was applied. Polylactide samples were examined using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of the surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of the chemical composition of the PLA surface layer were performed as well. Based on these examinations, it was found that the best modification results are obtained using the plasma method.

  16. Comparative study for methods to determine the seismic response of NPP structures

    International Nuclear Information System (INIS)

    Varpasuo, P.

    1995-01-01

    There are many important problem areas in evaluating the seismic response of structures. In this study the effort is concentrated on three of these areas. The first task is the mathematical formulation of the earthquake excitation; random vibration theory is taken as the tool for this task. The second area of interest is the soil-structure interaction analysis. The impedance function approach is chosen, and the focal point of interest is the significance of frequency-dependent impedance functions. The third area of interest is the methods used to determine the structural response. The following three methods were tested: the mode superposition time history method; the complex frequency response method; the response spectrum method. The comparison was made with the aid of the MSC/NASTRAN code. The three methods gave response results for the outer containment building that were in good agreement with each other. (author). 4 refs., 5 figs

  17. Comparison of neutron activation analysis with other instrumental methods for elemental analysis of airborne particulate matter

    International Nuclear Information System (INIS)

    Regge, P. de; Lievens, F.; Delespaul, I.; Monsecour, M.

    1976-01-01

    A comparison of instrumental methods, including neutron activation analysis, X-ray fluorescence spectrometry, atomic absorption spectrometry and emission spectrometry, for the analysis of heavy metals in airborne particulate matter is described. The merits and drawbacks of each method for the routine analysis of a large number of samples are discussed. The sample preparation technique, calibration and statistical data relevant to each method are given. Concordant results are obtained by the different methods for Co, Cu, Ni, Pb and Zn. Agreement is poorer for Fe, Mn and V, and the results do not agree for Cd and Cr. Using data obtained on the dust sample distributed by Euratom-ISPRA within the framework of an interlaboratory comparison, the accuracy of each method for the various elements is estimated. Neutron activation analysis was found to be the most sensitive and accurate of the non-destructive analysis methods. Only atomic absorption spectrometry has comparable sensitivity, but it requires considerable preparation work. X-ray fluorescence spectrometry is less sensitive and shows biases for Cr and V. Automatic emission spectrometry with simultaneous measurement of the beam intensities by photomultipliers is the fastest and most economical technique, though at the expense of some precision and sensitivity. (author)

  18. Comparison of two methods for extraction of volatiles from marine PL emulsions

    DEFF Research Database (Denmark)

    Lu, Henna Fung Sieng; Nielsen, Nina Skall; Jacobsen, Charlotte

    2013-01-01

    The dynamic headspace (DHS) thermal desorption principle using Tenax GR tube, as well as the solid phase micro‐extraction (SPME) tool with carboxen/polydimethylsiloxane 50/30 µm CAR/PDMS SPME fiber, both coupled to GC/MS were implemented for the isolation and identification of both lipid...... and Strecker derived volatiles in marine phospholipids (PL) emulsions. Comparison of volatile extraction efficiency was made between the methods. For marine PL emulsions with a highly complex composition of volatiles headspace, a fiber saturation problem was encountered when using CAR/PDMS‐SPME for volatiles...... analysis. However, the CAR/PDMS‐SPME technique was efficient for lipid oxidation analysis in emulsions of less complex headspace. The SPME method extracted volatiles of lower molecular weights more efficiently than the DHS method. On the other hand, DHS Tenax GR appeared to be more efficient in extracting...

  19. Comparison of the methods for calculating the interfacial heat transfer coefficient in hot stamping

    International Nuclear Information System (INIS)

    Zhao, Kunmin; Wang, Bin; Chang, Ying; Tang, Xinghui; Yan, Jianwen

    2015-01-01

    This paper presents a hot stamping experiment and three methods for calculating the Interfacial Heat Transfer Coefficient (IHTC) of 22MnB5 boron steel. Comparison of the calculation results shows an average error of 7.5% for the heat balance method, 3.7% for Beck's nonlinear inverse estimation method (Beck's method), and 10.3% for the finite-element-analysis-based optimization method (the FEA method). Beck's method is a robust and accurate method for identifying the IHTC in hot stamping applications. Numerical simulation using the IHTC identified by Beck's method can predict the temperature field with high accuracy. - Highlights: • A theoretical formula was derived for direct calculation of the IHTC. • Beck's method is a robust and accurate method for identifying the IHTC. • The finite element method can be used to identify an overall equivalent IHTC
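
    As a hedged illustration of inverse IHTC estimation (a much-simplified stand-in for Beck's sequential method), the sketch below recovers a constant heat transfer coefficient from a lumped-capacitance cooling curve; all part parameters, temperatures, and the noise level are invented, not values from the paper.

```python
import numpy as np

# Lumped blank cooling against a die at T_inf:
# T(t) = T_inf + (T0 - T_inf) * exp(-h*A/(m*c) * t).
A, m, c = 0.01, 0.5, 460.0          # contact area [m^2], mass [kg], specific heat [J/kg/K]
T0, T_inf = 900.0, 200.0            # initial blank and die temperatures [K]

def cooling_curve(h, t):
    return T_inf + (T0 - T_inf) * np.exp(-h * A / (m * c) * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
h_true = 3500.0                      # W/m^2/K, a hot-stamping order of magnitude
T_meas = cooling_curve(h_true, t) + rng.normal(0.0, 0.5, t.size)  # noisy "thermocouple"

# Inverse step: minimize the sum of squared residuals over h by a dense grid scan.
hs = np.linspace(500.0, 8000.0, 4000)
sse = np.array([np.sum((cooling_curve(h, t) - T_meas) ** 2) for h in hs])
h_est = hs[np.argmin(sse)]
```

    Beck's actual method estimates a time-varying coefficient sequentially from future temperature steps; the grid scan above only conveys the least-squares inverse idea.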

  20. Comparison of radioimmunological and enzyme-immunological methods of determination in thyroid function studies

    International Nuclear Information System (INIS)

    Arnold, L.P.

    1984-01-01

    During the study described here, parallel investigations were carried out using radio-immunoassays (RIA) and enzyme-immunoassays (EIA) in order to assess the quality and diagnostic value of both techniques, as well as to weigh up their relative advantages and disadvantages in connection with thyroid function studies. The following comparisons were made: T4 RIA versus T4 EIA, T3 RIA versus T3 EIA, T3 uptake versus thyroxine binding capacity, total balance of free hormone bonds. Correlation coefficients of 0.982 for T4 and 0.989 for T3 proved that the test results obtained using either RIA or EIA were in fairly good agreement. Less satisfactory correlation was suggested by a coefficient of 0.807 between the tests for thyroxine binding capacity and T3 uptake, which was even seen to decrease with increasing binding capacity, so that the greatest deviations were observed in hyperthyroidism. The same holds true for the free T4 index and total balance, where the correlation coefficient is altered in a similar way by the inclusion of a thyroxine binding capacity EIA. Better agreement was achieved by optimal adjustment of the defined diagnostic ranges for hypothyroidism, euthyroidism and hyperthyroidism. The EIA results were biased in the presence of hyperbilirubinemia. Intra-assay and inter-assay variations were analysed and found to range from 8.1 to 16.9% and 13.6 to 28.1%, respectively. EIA offers advantages over RIA inasmuch as it is free from radioactivity and does not require any special safety measures, although it was judged considerably less favourable than RIA in terms of sensitivity. (TRV) [de

  1. Variation in Results of Volume Measurements of Stumps of Lower-Limb Amputees : A Comparison of 4 Methods

    NARCIS (Netherlands)

    de Boer-Wilzing, Vera G.; Bolt, Arjen; Geertzen, Jan H.; Emmelot, Cornelis H.; Baars, Erwin C.; Dijkstra, Pieter U.

    de Boer-Wilzing VG, Bolt A, Geertzen JH, Emmelot CH, Baars EC, Dijkstra PU. Variation in results of volume measurements of stumps of lower-limb amputees: a comparison of 4 methods. Arch Phys Med Rehabil 2011;92:941-6. Objective: To analyze the reliability of 4 methods (water immersion,

  2. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    Science.gov (United States)

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. To compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis, two hundred nonduplicate blood cultures from cases of sepsis were analyzed using the two methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth, and the lysis centrifugation method using saponin with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the former method with respect to Gram-negative bacilli. In comparison to the conventional blood culture method, the sensitivity of the lysis centrifugation method in this study was 49.75%, its specificity 98.21%, and its diagnostic accuracy 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, which was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was significantly shorter (P value 0.000) than time to growth by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
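
    The reported sensitivity, specificity and diagnostic accuracy follow from a standard 2x2 comparison against the conventional method as the reference standard; the counts below are illustrative only, not reconstructed from the paper.

```python
# Hypothetical 2x2 table: lysis centrifugation result vs. conventional culture
# (reference standard). tp/fn split the reference positives; tn/fp the negatives.
tp, fn = 17, 18        # reference-positive cultures detected / missed
tn, fp = 162, 3        # reference-negative cultures correctly negative / falsely positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
```

    With these made-up counts the three figures come out near 0.49, 0.98 and 0.90, the same order as the values reported above.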

  3. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-01-01

    OBJECTIVES: A common method for conducting a quantitative systematic review (QSR) of observational studies in nutritional epidemiology is the “highest versus lowest intake” method (HLM), in which only the effect size (ES) information for the highest intake category of a food item is collected, with the lowest category as the referent. In the interval collapsing method (ICM), suggested to enable maximum utilization of all available information, the ES information is instead collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. METHODS: A QSR evaluating citrus fruit intake and the risk of pancreatic cancer, with the SES calculated using the HLM, was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. RESULTS: No significant differences were observed in the directionality of the SES extracted using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. CONCLUSIONS: The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSRs in nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
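
    Under either extraction method, the summary effect size comes from the same fixed-effect (inverse-variance) pooling step, sketched below; the per-study effect sizes are made up for illustration and are not the citrus-fruit data from the review.

```python
import math

def fixed_effect_pool(log_es, se):
    """Pool log effect sizes with inverse-variance weights; return SES and its 95% CI."""
    w = [1.0 / s ** 2 for s in se]
    pooled = sum(wi * yi for wi, yi in zip(w, log_es)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Three hypothetical studies reporting relative risks, converted to the log scale.
log_rr = [math.log(0.85), math.log(0.78), math.log(1.02)]
se = [0.12, 0.20, 0.15]
ses, (lo, hi) = fixed_effect_pool(log_rr, se)
```

    Narrower per-study confidence intervals (smaller se) translate directly into larger weights and a narrower CI around the pooled SES, which is the mechanism behind the ICM's advantage.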

  4. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    NARCIS (Netherlands)

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, J.G.P.W.; Camps-Valls, Gustau; Moreno, José

    2015-01-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC),

  5. Ground Snow Measurements: Comparisons of the Hotplate, Weighing and Manual Methods

    Science.gov (United States)

    Wettlaufer, A.; Snider, J.; Campbell, L. S.; Steenburgh, W. J.; Burkhart, M.

    2015-12-01

    The Yankee Environmental Systems (YES) Hotplate was developed to avoid some of the problems associated with weighing snowfall sensors. This work compares Hotplate, weighing sensor (ETI NOAH-II) and manual measurements of liquid-equivalent depth. The main field site was at low altitude in western New York; Hotplate and ETI comparisons were also made at two forested subalpine sites in southeastern Wyoming. The manual measurement (only conducted at the New York site) was derived by weighing snow cores sampled from a snow board. The two recording gauges (Hotplate and ETI) were located within 5 m of the snow board. Hotplate-derived accumulations were corrected using a wind-speed-dependent catch efficiency, and the ETI orifice was heated and Alter-shielded. Three important findings are evident from the comparisons: 1) The YES-derived accumulations, recorded in a user-accessible file, were compared to accumulations derived using an in-house calibration and fundamental measurements (plate power, long- and shortwave radiances, wind speed, and temperature). These accumulations are highly correlated (N=24; r2=0.99), but the YES-derived values are larger by 20%. 2) The in-house Hotplate accumulations are in good agreement with ETI-based accumulations but with larger variability (N=24; r2=0.88). 3) The comparison of in-house Hotplate accumulation versus manual accumulation, expressed as mm of liquid, exhibits a fitted linear relationship Y (in-house) versus X (manual) given by Y = -0.2 (±1.4) + 0.9 (±0.1) · X (N=20; r2=0.89). Thus, these two methods agree within statistical uncertainty.
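
    The fitted relationship in finding 3), with its quoted parameter uncertainties, is ordinary least squares with standard errors from the residual variance; the accumulation pairs below are synthetic numbers generated to mimic that fit, not the New York field data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic (manual, hotplate) liquid-equivalent pairs mimicking the reported fit.
manual = np.linspace(2.0, 30.0, 20)                         # mm liquid equivalent
hotplate = -0.2 + 0.9 * manual + rng.normal(0.0, 1.4, 20)   # invented scatter

# OLS fit Y = a + b*X with standard errors on a and b.
X = np.column_stack([np.ones_like(manual), manual])
beta, _, _, _ = np.linalg.lstsq(X, hotplate, rcond=None)
a, b = beta
resid = hotplate - X @ beta
s2 = resid @ resid / (len(manual) - 2)      # residual variance, n-2 dof
cov = s2 * np.linalg.inv(X.T @ X)
se_a, se_b = np.sqrt(np.diag(cov))
```
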

  6. Mixing studies at low grade vacuum pan using the radiotracer method

    International Nuclear Information System (INIS)

    Griffith, J.M

    1999-01-01

    In this paper, some preliminary results on the evaluation of the homogenization time at a vacuum pan for massecuite B and seed preparation, using two approaches of the radiotracer method, are presented. Practically no difference was detected between the on-line method, using a small-size detector, and the sampling method in mixing studies performed on the high-grade massecuite. Results achieved during the trials performed at the vacuum station show that mechanical agitation, in comparison with normal agitation, improves mixing performance in high-grade massecuite B and in seed preparation at the vacuum pan

  8. Comparison of photoacoustic radiometry to gas chromatography/mass spectrometry methods for monitoring chlorinated hydrocarbons

    International Nuclear Information System (INIS)

    Sollid, J.E.; Trujillo, V.L.; Limback, S.P.; Woloshun, K.A.

    1996-01-01

    A comparison of two gas chromatography/mass spectrometry (GC/MS) methods and a nondispersive infrared technique, photoacoustic radiometry (PAR), is presented in the context of field monitoring at a disposal site. First, a historical account of the site and its early monitoring is presented to provide an overview. The intent and nature of the monitoring program changed when it was proposed to expand the Radiological Waste Site close to the Hazardous Waste Site. Both the sampling methods and the analysis techniques were refined in the course of this exercise

  9. Comparison of the effectiveness of sterilizing endodontic files by 4 different methods: An in vitro study

    Directory of Open Access Journals (Sweden)

    Venkatasubramanian R

    2010-03-01

    Sterilization is the best method to counter the threats of microorganisms. The purpose of sterilization in the field of health care is to prevent the spread of infectious diseases; in dentistry, it primarily relates to processing reusable instruments to prevent cross-infection. The aim of this study was to investigate the efficacy of 4 methods of sterilizing endodontic instruments: autoclaving, carbon dioxide laser sterilization, chemical sterilization (with glutaraldehyde) and glass-bead sterilization. Endodontic files were contaminated with Bacillus stearothermophilus, sterilized by the 4 different methods, and then checked for sterility by incubation in test tubes containing thioglycollate medium. The study showed that the files sterilized by autoclave and laser were completely sterile, those sterilized by glass bead were 90% sterile, and those sterilized with glutaraldehyde were 80% sterile. The study concluded that the autoclave or laser could be used as a method of sterilization in clinical practice; in advanced clinics, the laser can also be used as a chair-side method of sterilization.

  10. Comparison study of time history and response spectrum responses for multiply supported piping systems

    International Nuclear Information System (INIS)

    Wang, Y.K.; Subudhi, M.; Bezler, P.

    1983-01-01

    In the past decade, several investigators have studied the problem of independent support excitation of a multiply supported piping system to identify the real need for such an analysis. This approach offers an increase in accuracy at a small increase in computational cost. To assess the method, studies based on the response spectrum approach using independent support motions for each group of commonly connected supports were performed. The results obtained from this approach were compared with the conventional envelope spectrum and time history solutions. The present study includes a mathematical formulation of the independent support motion analysis method suitable for implementation in an existing all-purpose piping code, PSAFE2, and a comparison of the solutions for a typical piping system using both the time history and response spectrum methods. The results obtained from the response spectrum methods represent an upper bound at most points in the piping system. Similarly, the Seismic Anchor Movement analysis based on the SRP method overpredicts the responses near the support points and underpredicts them at points away from the supports

  11. A comparison of different interpolation methods for wind data in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2017-04-01

    For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high-resolution data are not available. This is particularly true for many regions in Central Asia, where the density of climatological stations must often be described as sparse. Given this insufficient data base, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial data base for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments in the region. The study presented here compares different interpolation methods for the wind components u and v for a region in Central Asia with pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method that can be applied equally to all pressure levels, or whether different interpolation methods have to be applied at each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. In order to improve the input data, two different interpolation procedures were applied: on the one hand, pure interpolation methods such as inverse distance weighting and ordinary kriging; on the other hand, machine learning algorithms, generalized additive models and regression kriging, which consider additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machine, neural networks and ordinary kriging. Inverse distance weighting showed the worst
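
    Inverse distance weighting, the simplest of the pure interpolation methods compared, can be sketched in a few lines; the station coordinates and u-component values below are invented for illustration, not ERA-Interim data.

```python
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0, eps=1e-12):
    """Interpolate scattered observations to query points by inverse distance weighting."""
    d = np.linalg.norm(xy_obs[None, :, :] - xy_query[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero at stations
    w /= w.sum(axis=1, keepdims=True)     # weights form a convex combination
    return w @ values

# Five "stations" with a u wind component; estimate u at two new locations.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
u = np.array([3.1, 4.0, 2.5, 3.6, 3.3])
queries = np.array([[0.5, 0.5], [0.9, 0.1]])
u_hat = idw(xy, u, queries)
```

    Because the weights are convex, IDW estimates never leave the range of the observed values, one reason it handles topographically driven extremes poorly compared with regression kriging.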

  12. Kernel method for air quality modelling. II. Comparison with analytic solutions

    Energy Technology Data Exchange (ETDEWEB)

    Lorimer, G S; Ross, D G

    1986-01-01

    The performance of Lorimer's (1986) kernel method for solving the advection-diffusion equation is tested for instantaneous and continuous emissions into a variety of model atmospheres. Analytical solutions are available for comparison in each case. The results indicate that a modest minicomputer is quite adequate for obtaining satisfactory precision even for the most trying test performed here, which involves a diffusivity tensor and wind speed which are nonlinear functions of the height above ground. Simulations of the same cases by the particle-in-cell technique are found to provide substantially lower accuracy even when use is made of greater computer resources.
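
    The analytic solutions used for comparison are classical closed forms of the advection-diffusion equation; a one-dimensional instantaneous-release example, with invented wind speed, diffusivity and release mass, is sketched below, checking that the puff conserves mass.

```python
import numpy as np

# 1-D advection-diffusion, instantaneous release of mass Q at x=0, t=0, constant
# wind u and diffusivity K: C(x,t) = Q/sqrt(4*pi*K*t) * exp(-(x-u*t)^2/(4*K*t)).
# The parameter values are illustrative test values, not ones from the paper.
def puff(x, t, Q=1.0, u=2.0, K=0.5):
    return Q / np.sqrt(4.0 * np.pi * K * t) * np.exp(-((x - u * t) ** 2) / (4.0 * K * t))

x = np.linspace(-50.0, 100.0, 4001)
dx = np.diff(x)

def total_mass(c):
    """Trapezoidal-rule integral of a concentration profile over the grid."""
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * dx))

mass1 = total_mass(puff(x, t=1.0))
mass5 = total_mass(puff(x, t=5.0))
```
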

  13. International comparison of methods to test the validity of dead-time and pile-up corrections for high-precision γ-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Houtermans, H.; Schaerf, K.; Reichel, F. (International Atomic Energy Agency, Vienna (Austria)); Debertin, K. (Physikalisch-Technische Bundesanstalt, Braunschweig (Germany, F.R.))

    1983-02-01

    The International Atomic Energy Agency organized an international comparison of methods applied in high-precision ..gamma..-ray spectrometry for the correction of dead-time and pile-up losses. Results of this comparison are reported and discussed.

  14. Onset Detection in Surface Electromyographic Signals: A Systematic Comparison of Methods

    Directory of Open Access Journals (Sweden)

    Claus Flachenecker

    2001-06-01

    Full Text Available Various methods to determine the onset of the electromyographic activity which occurs in response to a stimulus have been discussed in the literature over the last decade. Due to the stochastic characteristic of the surface electromyogram (SEMG, onset detection is a challenging task, especially in weak SEMG responses. The performance of the onset detection methods were tested, mostly by comparing their automated onset estimations to the manually determined onsets found by well-trained SEMG examiners. But a systematic comparison between methods, which reveals the benefits and the drawbacks of each method compared to the other ones and shows the specific dependence of the detection accuracy on signal parameters, is still lacking. In this paper, several classical threshold-based approaches as well as some statistically optimized algorithms were tested on large samples of simulated SEMG data with well-known signal parameters. Rating between methods is performed by comparing their performance to that of a statistically optimal maximum likelihood estimator which serves as reference method. In addition, performance was evaluated on real SEMG data obtained in a reaction time experiment. Results indicate that detection behavior strongly depends on SEMG parameters, such as onset rise time, signal-to-noise ratio or background activity level. It is shown that some of the threshold-based signal-power-estimation procedures are very sensitive to signal parameters, whereas statistically optimized algorithms are generally more robust.

  15. Comparison of methods to determine methane emissions from dairy cows in farm conditions.

    Science.gov (United States)

    Huhtanen, P; Cabezas-Garcia, E H; Utsumi, S; Zimmerman, S

    2015-05-01

    Nutritional and animal-selection strategies to mitigate enteric methane (CH4) depend on accurate, cost-effective methods to determine emissions from a large number of animals. The objective of the present study was to compare 2 spot-sampling methods to determine CH4 emissions from dairy cows, using gas quantification equipment installed in concentrate feeders or automatic milking stalls. In the first method (sniffer method), CH4 and carbon dioxide (CO2) concentrations were measured in close proximity to the muzzle of the animal, and average CH4 concentrations or CH4/CO2 ratio was calculated. In the second method (flux method), measurement of CH4 and CO2 concentration was combined with an active airflow inside the feed troughs for capture of emitted gas and measurements of CH4 and CO2 fluxes. A muzzle sensor was used allowing data to be filtered when the muzzle was not near the sampling inlet. In a laboratory study, a model cow head was built that emitted CO2 at a constant rate. It was found that CO2 concentrations using the sniffer method decreased up to 39% when the distance of the muzzle from the sampling inlet increased to 30cm, but no muzzle-position effects were observed for the flux method. The methods were compared in 2 on-farm studies conducted using 32 (experiment 1) or 59 (experiment 2) cows in a switch-back design of 5 (experiment 1) or 4 (experiment 2) periods for replicated comparisons between methods. Between-cow coefficient of variation (CV) in CH4 was smaller for the flux than the sniffer method (experiment 1, CV=11.0 vs. 17.5%, and experiment 2, 17.6 vs. 28.0%). Repeatability of the measurements from both methods were high (0.72-0.88), but the relationship between the sniffer and flux methods was weak (R(2)=0.09 in both experiments). With the flux method CH4 was found to be correlated to dry matter intake or body weight, but this was not the case with the sniffer method. The CH4/CO2 ratio was more highly correlated between the flux and sniffer

  16. Study of the orbital correction method

    International Nuclear Information System (INIS)

    Meserve, R.A.

    1976-01-01

    Two approximations of interest in atomic, molecular, and solid state physics are explored. First, a procedure for calculating an approximate Green's function for use in perturbation theory is derived. In lowest order it is shown to be equivalent to treating the contribution of the bound states of the unperturbed Hamiltonian exactly and representing the continuum contribution by plane waves orthogonalized to the bound states (OPW's). If the OPW approximation were inadequate, the procedure allows for systematic improvement of the approximation. For comparison purposes an exact but more limited procedure for performing second-order perturbation theory, one that involves solving an inhomogeneous differential equation, is also derived. Second, the Kohn-Sham many-electron formalism is discussed and formulae are derived and discussed for implementing perturbation theory within the formalism so as to find corrections to the total energy of a system through second order in the perturbation. Both approximations were used in the calculation of the polarizability of helium, neon, and argon. The calculation included direct and exchange effects by the Kohn-Sham method and full self-consistency was demanded. The results using the differential equation method yielded excellent agreement with the coupled Hartree-Fock results of others and with experiment. Moreover, the OPW approximation yielded satisfactory comparison with the results of calculation by the exact differential equation method. Finally, both approximations were used in the calculation of properties of hydrogen fluoride and methane. The appendix formulates a procedure using group theory and the internal coordinates of a molecular system to simplify the calculation of vibrational frequencies

  17. A comparison of methods used to calculate normal background concentrations of potentially toxic elements for urban soil

    Energy Technology Data Exchange (ETDEWEB)

    Rothwell, Katherine A., E-mail: k.rothwell@ncl.ac.uk; Cooke, Martin P., E-mail: martin.cooke@ncl.ac.uk

    2015-11-01

    To meet the requirements of regulation and to provide realistic remedial targets there is a need for the background concentration of potentially toxic elements (PTEs) in soils to be considered when assessing contaminated land. In England, normal background concentrations (NBCs) have been published for several priority contaminants for a number of spatial domains however updated regulatory guidance places the responsibility on Local Authorities to set NBCs for their jurisdiction. Due to the unique geochemical nature of urban areas, Local Authorities need to define NBC values specific to their area, which the national data is unable to provide. This study aims to calculate NBC levels for Gateshead, an urban Metropolitan Borough in the North East of England, using freely available data. The ‘median + 2MAD’, boxplot upper whisker and English NBC (according to the method adopted by the British Geological Survey) methods were compared for test PTEs lead, arsenic and cadmium. Due to the lack of systematically collected data for Gateshead in the national soil chemistry database, the use of site investigation (SI) data collected during the planning process was investigated. 12,087 SI soil chemistry data points were incorporated into a database and 27 comparison samples were taken from undisturbed locations across Gateshead. The SI data gave high resolution coverage of the area and Mann–Whitney tests confirmed statistical similarity for the undisturbed comparison samples and the SI data. SI data was successfully used to calculate NBCs for Gateshead and the median + 2MAD method was selected as most appropriate by the Local Authority according to the precautionary principle as it consistently provided the most conservative NBC values. The use of this data set provides a freely available, high resolution source of data that can be used for a range of environmental applications. - Highlights: • The use of site investigation data is proposed for land contamination studies

  18. An Algorithmic Comparison of the Hyper-Reduction and the Discrete Empirical Interpolation Method for a Nonlinear Thermal Problem

    Directory of Open Access Journals (Sweden)

    Felix Fritzen

    2018-02-01

    Full Text Available A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB formulation is presented, which fails at providing significant gains with respect to the computational efficiency for nonlinear problems. Renowned methods for the reduction of the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete Empirical Interpolation Method (EIM, DEIM. An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (DEIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intense finite element simulations.

  19. Comparison of Three Methods of Reducing Test Anxiety: Systematic Desensitization, Implosive Therapy, and Study Counseling

    Science.gov (United States)

    Cornish, Richard D.; Dilley, Josiah S.

    1973-01-01

    Systematic desensitization, implosive therapy, and study counseling have all been effective in reducing test anxiety. In addition, systematic desensitization has been compared to study counseling for effectiveness. This study compares all three methods and suggests that systematic desentization is more effective than the others, and that implosive…

  20. Comparison of Farfan modified and Frobin methods to evaluate the intervertebral disc height

    Directory of Open Access Journals (Sweden)

    Michel Kanas

    2014-03-01

    Full Text Available OBJECTIVE: To evaluate the reliability and reproducibility of Farfan modified and Frobin methods to measure the intervertebral disc height in radiographs with inter- and intraobserver comparison. METHOD: Six radiographs of different patients treated for low back pain have been collected and digitized, and five lumbar disc of each patient were evaluated by six examiners with different levels of experience. The measures were done in Image Pro Plus 6.0 software. RESULTS: When compared, both methods showed more than 95% concordance. In intraexaminer analysis, both also shown to be reliable and reproducible, with a high level of concordance. By comparing the correlation between classes of examiners, the higher the level of experience, the greater the agreement for both methods. CONCLUSION: Farfan modified and Frobin are reliable methods to measure the disc height in the lateral radiographs. The higher level of experience of the examiner, the higher was the correlation between measurements.

  1. [Clinical evaluation of TANDEM PSA in Japanese cases and comparison with other methods].

    Science.gov (United States)

    Kuriyama, M; Yamamoto, N; Shinoda, I; Kawada, Y; Akimoto, S; Shimazaki, J

    1995-01-01

    Clinical evaluation of TANDEM PSA which is the most frequently used prostate specific antigen (PSA) assay method in the world and a comparison with other methods were performed in Japanese cases in a cooperative research fashion. The minimum detectable level of the method was found to be 0.50 ng of PSA in one ml of serum and 1.9 ng/ml was regarded as the upper normal value in Japanese males. The distribution of serum PSA showed a significant difference between the benign prostate hypertrophy (BPH) cases and patients with stage C or D prostate cancer. The sero-diagnosis prostate cancer at an early stage with the TANDEM PSA was difficult. The correlation to other methods of PSA detection was very high. Furthermore, the clinical use of the method in following-up the clinical course of prostate cancer patients was very useful. These findings suggested that the PSA detection using TANDEM PSA is applicable even in Japanese cases although the upper cut-off level is decreased.

  2. Statistical Group Comparison

    CERN Document Server

    Liao, Tim Futing

    2011-01-01

    An incomparably useful examination of statistical methods for comparisonThe nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde

  3. Standard setting: comparison of two methods.

    Science.gov (United States)

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.

  4. Comparison between Hilbert-Huang transform and scalogram methods on non-stationary biomedical signals: application to laser Doppler flowmetry recordings

    International Nuclear Information System (INIS)

    Roulier, Remy; Humeau, Anne; Flatley, Thomas P; Abraham, Pierre

    2005-01-01

    A significant transient increase in laser Doppler flowmetry (LDF) signals is observed in response to a local and progressive cutaneous pressure application on healthy subjects. This reflex may be impaired in diabetic patients. The work presents a comparison between two signal processing methods that provide a clarification of this phenomenon. Analyses by the scalogram and the Hilbert-Huang transform (HHT) of LDF signals recorded at rest and during a local and progressive cutaneous pressure application are performed on healthy and type 1 diabetic subjects. Three frequency bands, corresponding to myogenic, neurogenic and endothelial related metabolic activities, are studied at different time intervals in order to take into account the dynamics of the phenomenon. The results show that both the scalogram and the HHT methods lead to the same conclusions concerning the comparisons of the myogenic, neurogenic and endothelial related metabolic activities-during the progressive pressure and at rest-in healthy and diabetic subjects. However, the HHT shows more details that may be obscured by the scalogram. Indeed, the non-locally adaptative limitations of the scalogram can remove some definition from the data. These results may improve knowledge on the above-mentioned reflex as well as on non-stationary biomedical signal processing methods

  5. Practical implications of procedures developed in IDEA project - Comparison with traditional methods

    International Nuclear Information System (INIS)

    Andrasi, A.; Bouvier, C.; Brandl, A.; De Carlan, L.; Fischer, H.; Franck, D.; Hoellriegl, V.; Li, W. B.; Oeh, U.; Ritt, J.; Roth, P.; Schlagbauer, M.; Schmitzer, Ch; Wahl, W.; Zombori, P.

    2007-01-01

    The idea of the IDEA project aimed to improve assessment of incorporated radionuclides through developments of more reliable and possibly faster in vivo and bioassay monitoring techniques and making use of such enhancements for improvements in routine monitoring. In direct in vivo monitoring technique the optimum choice of the detectors to be applied for different monitoring tasks has been investigated in terms of material, size and background in order to improve conditions namely to increase counting efficiency and reduce background. Detailed studies have been performed to investigate the manifold advantageous applications and capabilities of numerical simulation method for the calibration and optimisation of in vivo counting systems. This calibration method can be advantageously applied especially in the measurement of low-energy photon emitting radionuclides, where individual variability is a significant source of uncertainty. In bioassay measurements the use of inductively coupled plasma mass spectrometry (ICP-MS) can improve considerably both the measurement speed and the lower limit of detection currently achievable with alpha spectrometry for long-lived radionuclides. The work carried out in this project provided detailed guidelines for optimum performance of the technique of ICP-MS applied mainly for the determination of uranium and thorium nuclides in the urine including sampling procedure, operational parameters of the instruments and interpretation of the measured data. The paper demonstrates the main advantages of investigated techniques in comparison with the performances of methods commonly applied in routine monitoring practice. (authors)

  6. Standard setting: Comparison of two methods

    Directory of Open Access Journals (Sweden)

    Oyebode Femi

    2006-09-01

    Full Text Available Abstract Background The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard – setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. Methods The norm – reference method of standard -setting (mean minus 1 SD was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ. Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart. We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. Results The pass rate with the norm-reference method was 85% (66/78 and that by the Angoff method was 100% (78 out of 78. The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% – 87%. The modified Angoff method had an inter-rater reliability of 0.81 – 0.82 and a test-retest reliability of 0.59–0.74. Conclusion There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.

  7. Comparison of three methods for the estimation of cross-shock electric potential using Cluster data

    Directory of Open Access Journals (Sweden)

    A. P. Dimmock

    2011-05-01

    Full Text Available Cluster four point measurements provide a comprehensive dataset for the separation of temporal and spatial variations, which is crucial for the calculation of the cross shock electrostatic potential using electric field measurements. While Cluster is probably the most suited among present and past spacecraft missions to provide such a separation at the terrestrial bow shock, it is far from ideal for a study of the cross shock potential, since only 2 components of the electric field are measured in the spacecraft spin plane. The present paper is devoted to the comparison of 3 different techniques that can be used to estimate the potential with this limitation. The first technique is the estimate taking only into account the projection of the measured components onto the shock normal. The second uses the ideal MHD condition E·B = 0 to estimate the third electric field component. The last method is based on the structure of the electric field in the Normal Incidence Frame (NIF for which only the potential component along the shock normal and the motional electric field exist. All 3 approaches are used to estimate the potential for a single crossing of the terrestrial bow shock that took place on the 31 March 2001. Surprisingly all three methods lead to the same order of magnitude for the cross shock potential. It is argued that the third method must lead to more reliable results. The effect of the shock normal inaccuracy is investigated for this particular shock crossing. The resulting electrostatic potential appears too high in comparison with the theoretical results for low Mach number shocks. This shows the variability of the potential, interpreted in the frame of the non-stationary shock model.

  8. Comparison of two percutaneous tracheostomy techniques, guide wire dilating forceps and Ciaglia Blue Rhino: a sequential cohort study.

    NARCIS (Netherlands)

    Fikkers, B.G.; Staatsen, M; Lardenoije, S.G.; Hoogen, F.J.A. van den; Hoeven, J.G. van der

    2004-01-01

    INTRODUCTION: To evaluate and compare the peri-operative and postoperative complications of the two most frequently used percutaneous tracheostomy techniques, namely guide wire dilating forceps (GWDF) and Ciaglia Blue Rhino (CBR). METHODS: A sequential cohort study with comparison of short-term and

  9. Specific activity measurement of 64Cu: A comparison of methods

    International Nuclear Information System (INIS)

    Mastren, Tara; Guthrie, James; Eisenbeis, Paul; Voller, Tom; Mebrahtu, Efrem; Robertson, J. David; Lapi, Suzanne E.

    2014-01-01

    Effective specific activity of 64 Cu (amount of radioactivity per µmol metal) is important in order to determine purity of a particular 64 Cu lot and to assist in optimization of the purification process. Metal impurities can affect effective specific activity and therefore it is important to have a simple method that can measure trace amounts of metals. This work shows that ion chromatography (IC) yields similar results to ICP mass spectrometry for copper, nickel and iron contaminants in 64 Cu production solutions. - Highlights: • Comparison of TETA titration, ICP mass spectrometry, and ion chromatography to measure specific activity. • Validates ion chromatography by using ICP mass spectrometry as the “gold standard”. • Shows different types and amounts of metal impurities present in 64 Cu

  10. Comparison of Witczak NCHRP 1-40D & Hirsh dynamic modulus models based on different binder characterization methods: a case study

    Directory of Open Access Journals (Sweden)

    Khattab Ahmed M.

    2017-01-01

    Full Text Available The Pavement ME Design method considers the hot mix asphalt (HMA dynamic modulus (E* as the main mechanistic property that affects pavement performance. For the HMA, E* can be determined directly by laboratory testing (level 1 or it can be estimated using predictive equations (levels 2 and 3. Pavement-ME Design introduced the NCHRP1-40D model as the latest model for predicting E* when levels 2 or 3 HMA inputs are used. This study focused on utilizing laboratory measured E* data to compare NCHRP1-40D model with Hirsh model. This comparison included the evaluation of the binder characterization level as per Pavement ME Design and its influence on the performance of these models. E*tests were conducted in the laboratory on 25 local mixes representing different road construction projects in the kingdom of Saudi Arabia. The main tests for the mix binders were dynamic Shear Rheometer (DSR and Brookfield Rotational Viscometer (RV. Results showed that both models with level 3 binder data produced very similar accuracy. The highest accuracy and lowest bias for both models occurred with level 3 binder data. Finally, the accuracy of prediction and level of bias for both models were found to be a function of the binder input level.

  11. The Convergence Study of the Homotopy Analysis Method for Solving Nonlinear Volterra-Fredholm Integrodifferential Equations

    Directory of Open Access Journals (Sweden)

    Behzad Ghanbari

    2014-01-01

    Full Text Available We aim to study the convergence of the homotopy analysis method (HAM in short for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the obtained results HAM with exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.

  12. Comparison of different surface quantitative analysis methods. Application to corium

    International Nuclear Information System (INIS)

    Guilbaud, N.; Blin, D.; Perodeaud, Ph.; Dugne, O.; Gueneau, Ch.

    2000-01-01

    In case of a severe hypothetical accident in a pressurized water reactor, the reactor assembly melts partially or completely. The material formed, called corium, flows out and spreads at the bottom of the reactor. To limit and control the consequences of such an accident, the specifications of the O-U-Zr basic system must be known accurately. To achieve this goal, the corium mix was melted by electron bombardment at very high temperature (3000 K) followed by quenching of the ingot in the Isabel 1 evaporator. Metallographic analyses were then required to validate the thermodynamic databases set by the Thermo-Calc software. The study consists in defining an overall surface quantitative analysis method that is fast and reliable, in order to determine the overall corium composition. The analyzed ingot originated in a [U+Fe+Y+UO 2 +ZrO 2 ) mix, with a total mass of 2253.7 grams. Several successive heating with average power were performed before a very brief plateau at very high temperature, so that the ingot was formed progressively and without any evaporation liable to modify its initial composition. The central zone of the ingot was then analyzed by qualitative and quantitative global surface methods, to yield the volume composition of the analyzed zone. Corium sample analysis happens to be very complex because of the variety and number of elements present, and also because of the presence of oxygen in a heavy element like the uranium based matrix. Three different global quantitative surface analysis methods were used: global EDS analysis (Energy Dispersive Spectrometry), with SEM, global WDS analysis (Wavelength Dispersive Spectrometry) with EPMA, and coupling of image analysis with EDS or WDS point spectroscopic analyses. The difficulties encountered during the study arose from sample preparation (corium is very sensitive to oxidation), and the choice of acquisition parameters of the images and analyses. 
The corium sample studied consisted of two zones displaying

  13. Comparison between two sampling methods by results obtained using petrographic techniques, specially developed for minerals of the Itataia uranium phosphate deposit, Ceara, Brazil

    International Nuclear Information System (INIS)

    Salas, H.T.; Murta, R.L.L.

    1985-01-01

    The results of comparison of two sampling methods applied to a gallery of the uranium-phosphate ore body of Itataia-Ceara State, Brazil, along 235 metres of mineralized zone, are presented. The results were obtained through petrographic techniques especially developed and applied to both samplings. In the first one it was studied hand samples from a systematically sampling made at intervals of 2 metres. After that, the estimated mineralogical composition studies were carried out. Some petrogenetic observations were for the first time verified. The second sampling was made at intervals of 20 metres and 570 tons of ore extracted and distributed in sections and a sample representing each section was studied after crushing at -65. Their mineralogy were quantified and the degree of liberation of apatite calculated. Based on the mineralogical data obtained it was possible to represent both samplings and to make the comparison of the main mineralogical groups (phosphates, carbonates and silicates). In spite of utilizing different methods and methodology and the kind of mineralization, stockwork, being quite irregular, the results were satisfactory. (Author) [pt

  14. A method for comparison of animal and human alveolar dose and toxic effect of inhaled ozone

    International Nuclear Information System (INIS)

    Hatch, G.E.; Koren, H.; Aissa, M.

    1989-01-01

    Present models for predicting the pulmonary toxicity of O 3 in humans from the toxic effects observed in animals rely on dosimetric measurements of O 3 mass balance and species comparisons of mechanisms that protect tissue against O 3 . The goal of the study described was to identify a method to directly compare O 3 dose and effect in animals and humans using bronchoalveolar lavage fluid markers. The feasibility of estimating O 3 dose to alveoli of animals and humans was demonstrated through assay of reaction products of 18 O-labeled O 3 in lung surfactant and macrophage pellets of rabbits. The feasibility of using lung lavage fluid protein measurements to quantify the O 3 toxic response in humans was demonstrated by the finding of significantly increased lung lavage protein in 10 subjects exposed to 0.4 ppm O 3 for 2 h with intermittent periods of heavy exercise. The validity of using the lavage protein marker to quantify the response in animals has already been established. The positive results obtained in both the 18 O 3 and the lavage protein studies reported here suggest that it should be possible to obtain a direct comparison of both alveolar dose and toxic effect of O 3 to alveoli of animals or humans

  15. Indirect comparisons of second-generation tyrosine kinase inhibitors in CML: case study using baseline population characteristics

    Directory of Open Access Journals (Sweden)

    Kimbach Tran Carpiuc

    2010-10-01

    Full Text Available Kimbach Tran Carpiuc1, Gianantonio Rosti2, Fausto Castagnetti2, Maarten Treur3, Jennifer Stephens11Pharmerit North America LLC, Bethesda, MD, USA; 2Department of Hematology and Oncology, S Orsola-Malpighi University Hospital, Bologna, Italy; 3Pharmerit Europe, Rotterdam, The NetherlandsAbstract: The use of indirect comparisons to evaluate the relative effectiveness between two or more treatments is widespread in the literature and continues to grow each year. Appropriate methodologies will be essential for integrating data from various published clinical trials into a systematic framework as part of the increasing emphasis on comparative effectiveness research. This article provides a case study example for clinicians using the baseline study population characteristics and response rates of the tyrosine kinase inhibitors in imatinib-resistant or imatinib-intolerant chronic myelogenous leukemia followed by a discussion of indirect comparison methods that are being increasingly implemented to address challenges with these types of comparisons.Keywords: comparative effectiveness research, meta-analysis, BCR–ABL-positive chronic myelogenous leukemia, imatinib mesylate, nilotinib, dasatinib 

  16. The comparison of placental removal methods on operative blood loss

    International Nuclear Information System (INIS)

    Waqar, F.; Fawad, A.

    2008-01-01

    On average 1 litre of blood is lost during Caesarean section. Many techniques have been tried to reduce this blood loss. Several trials have shown spontaneous delivery of the placenta to be superior to manual removal because of reduced intraoperative blood loss and a reduced incidence of postoperative endometritis. The main objective of our study was to compare the blood loss associated with spontaneous and manual removal of the placenta during Caesarean section. This study was conducted at the Department of Obstetrics and Gynaecology, Islamic International Medical Complex, Islamabad, from September 2004 to September 2005. All women undergoing elective or emergency Caesarean section were included in the study. Exclusion criteria were pregnancy below 37 weeks, severe maternal anaemia, prolonged rupture of the membranes with fever, placenta praevia, placenta accreta and clotting disorders. Patients were allocated to the two groups randomly. Group A comprised women in whom the obstetrician waited a maximum of 5 minutes for the placenta to deliver spontaneously. In group B the obstetrician manually removed the placenta as soon as the infant was delivered. The primary outcome measures were a difference in haemoglobin of >2 g/dl (preoperative vs postoperative), the time interval between delivery of the baby and the placenta, significant blood loss (>1000 cc), additional use of oxytocics, total operating time and blood transfusions. Data were analysed with SPSS; the statistical tests used for specific comparisons were the chi-square test and Student's t-test. One hundred and forty-five patients were allocated randomly: 78 patients to group A and 67 patients to group B. Mean maternal age, birth weight and total operating time were the same in the two groups, but blood loss, as measured by a difference in haemoglobin of greater than 2 g/dl, was statistically significantly different.
Significant blood loss (>1000 cc
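
    The chi-square comparison of a binary outcome such as significant blood loss between two groups can be sketched numerically. Below is a minimal Pearson chi-square statistic for a 2x2 table; the per-group event counts are hypothetical, since the abstract does not report them.

```python
# Pearson chi-square statistic for a 2x2 table comparing the rate of
# significant blood loss (>1000 cc) between two placental-removal groups.
# The counts below are hypothetical, for illustration only.

def chi_square_2x2(a, b, c, d):
    """Return the Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n          # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical counts: 6/78 events in group A (spontaneous),
# 15/67 events in group B (manual).
stat = chi_square_2x2(6, 72, 15, 52)
# With 1 degree of freedom, chi2 > 3.841 rejects equality at p < 0.05.
print(round(stat, 2), stat > 3.841)
```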

  17. Autoradiographic methods for studying marked volatile substances (1961)

    International Nuclear Information System (INIS)

    Cohen, Y.; Wepierre, J.

    1961-01-01

    The autoradiographic methods for animals used up to the present do not make it possible to localise exactly the distribution of marked volatile molecules. The Ullberg method (1954), which we have modified (Cohen, Delassue, 1959), involves desiccation in the cold. The method due to Pellerin (1957) avoids this desiccation, but the histological comparison of the autoradiograph with the biological document itself is difficult, if not impossible. Nevertheless, we have adopted certain points of the two methods and propose the following technique for the autoradiographic study of marked volatile molecules: 1- The surface of the frozen sample to be studied is prepared using a freezing microtome. 2- The last section, which is 20 μ thick and whose histological elements are parallel to those of the block, is dried by cooling and is used as the biological reference document for the autoradiograph obtained as indicated in 3. 3- The radiographic films are applied to the frozen block at -30 deg. C. The autoradiographs correspond to the radioactivity of the volatile molecule and of its non-volatile degradation products. 4- The radiographic film is also applied to the 20 μ section previously dried at -20 deg. C. This autoradiograph corresponds to the radioactivity of the non-volatile degradation products of the molecule. 5- We confirmed the absence of diffusion of the volatile molecule and of pseudo-radiographic effects (photochemical and others). This method, which has enabled us to study the distribution of a hydrocarbon, para-cymene (14C), macroscopically in the whole mouse and microscopically in the skin of a dog, can find general applications. (authors) [fr

  18. Analytical Comparison of Miniaturized Methods for Selected PAH Determination in Clean Waters; Comparacion Analitica de 4 Metodos Miniaturizados de Determinacion de PAHs mediante HPLC en Aguas

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, S.; Perez, R. M.; Fernandez, O.

    2012-04-11

    A study on the comparison and evaluation of 4 miniaturized extraction methods for the determination of selected PAHs in clear waters is presented. Four types of liquid-liquid extraction were used for chromatographic analysis by HPLC/ FD. The main objective was the optimization and development of simple, rapid and low cost methods, minimizing the use of extracting solvent volume. The work also includes a study on the scope of the methods developed at low and high levels of concentration. (Author) 13 refs.

  19. Comparison of shear-wave velocity measurements by crosshole, downhole and seismic cone penetration test methods

    Energy Technology Data Exchange (ETDEWEB)

    Suthaker, N.; Tweedie, R. [Thurber Engineering Ltd., Edmonton, AB (Canada)

    2009-07-01

    Shear wave velocity measurements are an integral part of geotechnical studies for major structures and are an important tool in their design for site specific conditions such as site-specific earthquake response. This paper reported on a study in which shear wave velocities were measured at a proposed petrochemical plant site near Edmonton, Alberta. The proposed site is underlain by lacustrine clay, glacial till and upper Cretaceous clay shale and sandstone bedrock. The most commonly used methods for determining shear wave velocity include crosshole seismic tests, downhole seismic tests, and seismic cone penetration tests (SCPT). This paper presented the results of all 3 methods used in this study and provided a comparison of the various test methods and their limitations. The crosshole test results demonstrated a common trend of increasing shear wave velocity with depth to about 15 m, below which the velocities remained relatively constant. An anomaly was noted at one site, where the shear wave velocity was reduced at a zone corresponding to clay till containing stiff high plastic clay layers. The field study demonstrated that reasonable agreement in shear wave velocity measurements can be made using crosshole, downhole and seismic tests in the same soil conditions. The National Building Code states that the shear wave velocity is the fundamental method for determining site classification, thus emphasizing the importance of obtaining shear wave velocity measurements for site classification. It was concluded that an SCPT program can be incorporated into the field program without much increase in cost and can be supplemented by downhole or crosshole techniques. 5 refs., 2 tabs., 10 figs.

  20. A comparison of different methods for in-situ determination of heat losses from district heating pipes

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, Benny [Technical Univ. of Denmark, Dept. of Energy Engineering (Denmark)

    1996-11-01

    A comparison of different methods for in-situ determination of heat losses has been carried out on a 273 mm transmission line in Copenhagen. Instrumentation includes temperature sensors, heat flux meters and an infrared camera. The methods differ with regard to time consumption and costs of applying the specific method, demand on accuracy of temperature measurements, sensitivity to computational parameters, e.g. the thermal conductivity of the soil, response to transients in water temperature and the ground, and steady state assumptions in the model used in the interpretation of the measurements. Several of the applied methods work well. (au)

  1. A comparison of goniophotometric measurement facilities

    DEFF Research Database (Denmark)

    Thorseth, Anders; Lindén, Johannes; Dam-Hansen, Carsten

    2016-01-01

    In this paper, we present the preliminary results of a comparison between widely different goniophotometric and goniospectroradiometric measurement facilities. The objective of the comparison is to increase consistency and clarify the capabilities among Danish test laboratories. The study will seek ... to find the degree of equivalence between the various facilities and methods. The collected data is compared by using a three-way variation of principal component analysis, which is well suited for modelling large sets of correlated data. This method drastically decreases the number of numerical values...

  2. A comparison of two methods of logMAR visual acuity data scoring for statistical analysis

    Directory of Open Access Journals (Sweden)

    O. A. Oduntan

    2009-12-01

    Full Text Available The purpose of this study was to compare two methods of logMAR visual acuity (VA) scoring. The two methods are referred to as letter scoring (method 1) and line scoring (method 2). The two methods were applied to VA data obtained from one hundred and forty (N=140) children with oculocutaneous albinism. Descriptive, correlation and regression statistics were then used to analyze the data. Also, where applicable, the Bland and Altman analysis was used to compare sets of data from the two methods. The right and left eye data were included in the study, but because the findings were similar in both eyes, only the results for the right eyes are presented in this paper. For method 1, the mean unaided VA (mean UAOD1) = 0.39 ± 0.15 logMAR and the mean aided VA (mean ADOD1) = 0.50 ± 0.16 logMAR. For method 2, the mean unaided VA (mean UAOD2) = 0.71 ± 0.15 logMAR, while the mean aided VA (mean ADOD2) = 0.60 ± 0.16 logMAR. The range and mean values of the improvement in VA were the same for both methods. The unaided (UAOD1, UAOD2) and aided (ADOD1, ADOD2) VAs for methods 1 and 2 were negatively correlated (unaided: r = –1, p<0.05; aided: r = –1, p<0.05). The improvements in VA (the differences between the unaided and aided VA values, DOD1 and DOD2) were positively correlated (r = +1, p<0.05). The Bland and Altman analyses showed that the VA improvements (unaided – aided VA values; DOD1 and DOD2) were similar for the two methods. The findings indicated that only the improvement in VA can be compared when different scoring methods are used. The scoring method used in any VA research project should therefore be stated in the publication so that appropriate comparisons can be made by other researchers.
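
    The Bland and Altman analysis used above reduces to computing the bias (mean difference) and 95% limits of agreement between paired measurements. A minimal sketch with hypothetical logMAR improvement values, not the study's data:

```python
# Bland-Altman agreement analysis: bias (mean difference) and 95% limits
# of agreement between two measurement methods. The paired values below
# are hypothetical logMAR improvements, for illustration only.

import math

def bland_altman(x, y):
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    # sample standard deviation of the differences
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

letter = [0.10, 0.12, 0.08, 0.11, 0.09, 0.10]   # method 1 improvements
line   = [0.10, 0.11, 0.09, 0.12, 0.09, 0.10]   # method 2 improvements
bias, lo, hi = bland_altman(letter, line)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

    If the limits of agreement are narrow relative to clinically meaningful differences, the two scoring methods can be used interchangeably for that quantity.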

  3. The Comparison Study of Six University Archives

    Directory of Open Access Journals (Sweden)

    Chun-Fen Liu

    2004-03-01

    Full Text Available The university archives is not only an extension of a building; it also includes the archival records, archivists and equipment. The university archives is the historical memory of a university, allowing people to predict the future by reviewing the past. The university archives has abundant collections, through which both teachers and students can review the history of the university. This paper compares six university archives in Taiwan, using the interview method. After comparing the six university archives, we found that they have different organizational structures, budgets and functions. Finally, the authors propose some suggestions. [Article content in Chinese]

  4. Comparison On Matching Methods Used In Pose Tracking For 3D Shape Representation

    Directory of Open Access Journals (Sweden)

    Khin Kyu Kyu Win

    2017-01-01

    Full Text Available In this work, three different algorithms, Brute Force, Delaunay Triangulation and k-d Tree, are analyzed and compared for matching in 3D shape representation. The work is intended for developing the pose tracking of moving objects in video surveillance. To determine the 3D pose of moving objects, some tracking systems may require full 3D pose estimation of arbitrarily shaped objects in real time. In order to perform 3D pose estimation in real time, each step in the tracking algorithm must be computationally efficient. This paper presents a method comparison for the computationally efficient registration of 3D shapes, including free-form surfaces. Matching of free-form surfaces is carried out using the geometric point matching algorithm ICP. Several aspects of the ICP algorithm are investigated and analyzed using a specified surface setup. The surface setup processed in this system is represented by simple geometric primitives dealing with objects of free-form shape. The considered representations are clouds of points.
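
    The matching step that Brute Force, Delaunay Triangulation and k-d Tree approaches all serve is nearest-neighbour correspondence, which ICP repeats at every iteration. A brute-force 2D sketch on toy point clouds (O(n·m) search, exactly the cost that k-d trees reduce):

```python
# The correspondence step of ICP: for each point in the moving cloud,
# find its nearest neighbour in the model cloud by brute force.
# Toy 2D point clouds, for illustration only.

def nearest_neighbours(moving, model):
    """Return, for each moving point, (index of nearest model point, squared distance)."""
    pairs = []
    for p in moving:
        best_i, best_d2 = min(
            ((i, (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
             for i, q in enumerate(model)),
            key=lambda t: t[1])
        pairs.append((best_i, best_d2))
    return pairs

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
moving = [(0.1, 0.1), (0.9, 0.2)]
pairs = nearest_neighbours(moving, model)
print([i for i, _ in pairs])   # index of the matched model point per moving point
```

    A full ICP pass would next estimate the rigid transform that best aligns the matched pairs and iterate until the correspondences stop changing.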

  5. Comparison of radon diffusion coefficients measured by transient-diffusion and steady-state laboratory methods

    International Nuclear Information System (INIS)

    Kalwarf, D.R.; Nielson, K.K.; Rich, D.C.; Rogers, V.C.

    1982-11-01

    A method was developed and used to determine radon diffusion coefficients in compacted soils by transient-diffusion measurements. A relative standard deviation of 12% was observed in repeated measurements on a dry soil by the transient-diffusion method, and a 40% uncertainty was determined for moistures exceeding 50% of saturation. Excellent agreement was obtained between values of the diffusion coefficient for radon in air as measured by the transient-diffusion method and those in the published literature. Good agreement was also obtained with diffusion coefficients measured by a steady-state method on the same soils. The agreement was best at low moistures, averaging less than ten percent difference, but differences of up to a factor of two were observed at high moistures. The comparison of the transient-diffusion and steady-state methods at low moistures provides an excellent verification of the theoretical validity and technical accuracy of these approaches, which are based on completely independent experimental conditions, measurement methods and mathematical interpretations.

  6. Pyrrolizidine alkaloids in honey: comparison of analytical methods.

    Science.gov (United States)

    Kempf, M; Wittig, M; Reinhard, A; von der Ohe, K; Blacquière, T; Raezke, K-P; Michel, R; Schreier, P; Beuerle, T

    2011-03-01

    ragwort pollen were detected (0–6.3%); in some cases extremely high PA values were detected (n = 31; ranging from 0 to 13019 µg/kg, average = 1261 or 76 µg/kg for GC-MS and LC-MS, respectively). Here the two methods gave significantly different quantification results: the GC-MS sum parameter gave higher values on average (differing by a factor of about 17). The main reason for the discrepancy is most likely the incomplete coverage of the J. vulgaris PA pattern; major J. vulgaris PAs such as the jacobine-type PAs or erucifoline/acetylerucifoline were not available as reference compounds for the LC-MS target approach. Based on the direct comparison, both methods are considered from various perspectives, and the respective strengths and weaknesses of each method are presented in detail.

  7. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.
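
    A toy illustration of the variance-based idea behind global sensitivity analysis: for a purely linear, additive model the first-order index of an input is its share of the output variance, and can be estimated from the squared correlation with the output. This is a generic Monte Carlo sketch, not FRAPCON-specific; cross terms from input interactions, which the abstract stresses, would require the more powerful methods it mentions.

```python
# For the linear model Y = 4*X1 + X2 with independent uniform inputs,
# the first-order sensitivity index of X1 is Var(4*X1)/Var(Y) = 16/17 ~ 0.94.
# Monte Carlo estimate via squared correlation (valid only for linear,
# additive models); toy example, not a fuel-performance code.

import random

random.seed(0)
n = 20000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [4 * a + b for a, b in zip(x1, x2)]

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    vu = sum((a - mu) ** 2 for a in u)
    vv = sum((b - mv) ** 2 for b in v)
    return cov / (vu * vv) ** 0.5

s1 = corr(x1, y) ** 2   # estimated first-order index of X1
print(round(s1, 2))
```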

  8. Transgingival Probing and Ultrasonographic Methods for Determination of Gingival Thickness- A Comparative Study

    Directory of Open Access Journals (Sweden)

    Rakhi Issrani

    2013-01-01

    Results: It was observed that the younger age group had significantly thicker gingiva than the older age group. The gingiva was found to be thinner in females than in males, and in the mandibular arch than in the maxilla. Within the limits of the present study, it was demonstrated that the thickness of the gingiva varies with the tooth site, i.e. midbuccally and in the interdental papillary region, and also with the morphology of the crown. Conclusions: In the present study, it was concluded that GTH varies according to site, age, gender, tooth and dental arch. In comparison to the TGP method, the USG method assesses GTH more accurately, rapidly and atraumatically.

  9. A comparison of radiological risk assessment methods for environmental restoration

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Peterson, J.M.

    1993-01-01

    Evaluation of risks to human health from exposure to ionizing radiation at radioactively contaminated sites is an integral part of the decision-making process for determining the need for remediation and selecting remedial actions that may be required. At sites regulated under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), a target risk range of 10^-4 to 10^-6 incremental cancer incidence over a lifetime is specified by the US Environmental Protection Agency (EPA) as generally acceptable, based on the reasonable maximum exposure to any individual under current and future land use scenarios. Two primary methods currently being used in conducting radiological risk assessments at CERCLA sites are compared in this analysis. Under the first method, the radiation dose equivalent (i.e., Sv or rem) to the receptors of interest over the appropriate period of exposure is estimated and multiplied by a risk factor (cancer risk/Sv). Alternatively, incremental cancer risk can be estimated by combining the EPA's cancer slope factors (previously termed potency factors) for radionuclides with estimates of radionuclide intake by ingestion and inhalation, as well as radionuclide concentrations in soil that contribute to external dose. The comparison of the two methods has demonstrated that the resulting estimates of lifetime incremental cancer risk may differ significantly, even when all other exposure assumptions are held constant, with the magnitude of the discrepancy depending upon the dominant radionuclides and exposure pathways for the site. The basis for these discrepancies, the advantages and disadvantages of each method, and the significance of the discrepant results for environmental restoration decisions are presented.
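
    The two estimation routes reduce to simple arithmetic, which makes the shape of the comparison easy to sketch. All numeric values below are hypothetical placeholders, not EPA dose coefficients or slope factors:

```python
# Sketch of the two CERCLA risk-estimation routes. Every number here is a
# hypothetical placeholder chosen only to land in a plausible range.

# Method 1: dose equivalent (Sv) x risk factor (cancer risk per Sv).
dose_sv = 0.002            # hypothetical lifetime dose equivalent, Sv
risk_per_sv = 0.05         # hypothetical risk factor, risk/Sv
risk_method1 = dose_sv * risk_per_sv

# Method 2: cancer slope factor (risk per pCi) x lifetime intake (pCi).
slope_factor = 1.0e-11     # hypothetical slope factor, risk/pCi
intake_pci = 5.0e6         # hypothetical lifetime intake, pCi
risk_method2 = slope_factor * intake_pci

# Both estimates fall in or near the CERCLA target range of 1e-6 to 1e-4,
# yet need not agree with each other - the discrepancy the abstract discusses.
print(risk_method1, risk_method2)
```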

  10. The neural correlates of internal and external comparisons: an fMRI study.

    Science.gov (United States)

    Wen, Xue; Xiang, Yanhui; Cant, Jonathan S; Wang, Tingting; Cupchik, Gerald; Huang, Ruiwang; Mo, Lei

    2017-01-01

    Many previous studies have suggested that various comparisons rely on the same cognitive and neural mechanisms. However, little attention has been paid to exploring the commonalities and differences between the internal comparison based on concepts or rules and the external comparison based on perception. In the present experiment, moral beauty comparison and facial beauty comparison were selected as the representatives of internal comparison and external comparison, respectively. Functional magnetic resonance imaging (fMRI) was used to record brain activity while participants compared the level of moral beauty of two scene drawings containing moral acts or the level of facial beauty of two face photos. In addition, a physical size comparison task with the same stimuli as the beauty comparison was included. We observed that both the internal moral beauty comparison and external facial beauty comparison obeyed a typical distance effect and this behavioral effect recruited a common frontoparietal network involved in comparisons of simple physical magnitudes such as size. In addition, compared to external facial beauty comparison, internal moral beauty comparison induced greater activity in more advanced and complex cortical regions, such as the bilateral middle temporal gyrus and middle occipital gyrus, but weaker activity in the putamen, a subcortical region. Our results provide novel neural evidence for the comparative process and suggest that different comparisons may rely on both common cognitive processes as well as distinct and specific cognitive components.

  11. Comparison of physical fitness tests in swimming

    OpenAIRE

    Dostálová, Sabina

    2015-01-01

    Title: Comparison of physical fitness tests in swimming. Objective: The aim of this thesis is to evaluate specific tests used in testing selected physical abilities in swimming. By specific tests we mean tests performed in the water. The selected tests are intended for swim coaches who train junior to senior age groups. Methods: The chosen method was a comparison of studies that examine the selected specific tests. We created partial conclusions for every test by summing up the results of differe...

  12. Project on Alternative Systems Study - PASS. Comparison of technology of KBS-3, MLH, VLH and VDH concepts by using an expert group

    International Nuclear Information System (INIS)

    Olsson, Lars; Sandstedt, H.

    1992-09-01

    This report constitutes a technical comparison and ranking of four repository concepts for final disposal of spent nuclear fuel that have been studied by SKB: KBS-3, Medium Long Holes (MLH), Very Long Holes (VLH) and Very Deep Holes (VDH). The technical comparison is part of the 'Project on Alternative Systems Study, PASS', which was initiated by SKB with the objective of presenting a ranking of the four concepts. In addition to this comparison of technology, separate rankings are made for long-term performance and safety, and for costs, before the results are merged into one verdict. The ranking regarding technology was carried out in accordance with the Analytical Hierarchy Process (AHP) method, with the aid of expert judgement in the form of a group of six experts. The AHP method implies that the criteria for comparison are ordered in a hierarchy and that the ranking is carried out by pairwise comparison of the criteria. In the evaluation process, a measure of the relative importance of each criterion is obtained. The result of the expert judgement exercise was that each expert individually ranked the four concepts in the following order, with the top-ranked alternative first: KBS-3, MLH, VLH and VDH. The common opinion among the experts was that the top ranking of KBS-3 is significant and that the major criteria used in the study could change substantially without changing the top ranking of KBS-3.
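
    The AHP step described, deriving priority weights from pairwise comparisons, can be sketched as a principal-eigenvector computation by power iteration. The 3x3 reciprocal matrix below is an illustrative example, not PASS data:

```python
# AHP in brief: a reciprocal pairwise-comparison matrix is reduced to
# priority weights via its principal eigenvector (power iteration).
# The matrix is an illustrative example, not the PASS expert judgements.

def ahp_weights(m, iters=100):
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # renormalize so the weights sum to 1
    return w

# Criterion 0 judged 3x as important as criterion 1 and 5x as criterion 2, etc.
m = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(m)
print([round(x, 3) for x in w])
```

    In a full AHP exercise one would also compute the consistency ratio of each matrix before trusting the weights.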

  13. Comparison of INSURE method with conventional mechanical ventilation after surfactant administration in preterm infants with respiratory distress syndrome: therapeutic challenge.

    Directory of Open Access Journals (Sweden)

    Fatemeh Sadat Nayeri

    2014-08-01

    Full Text Available Administration of endotracheal surfactant is potentially the main treatment for neonates suffering from RDS (respiratory distress syndrome) and is followed by mechanical ventilation. Late and severe complications may develop as a consequence of using mechanical ventilation. In this study, conventional methods for the treatment of RDS are compared with the INSURE method (INtubation, SURfactant administration and Extubation): surfactant administration, use of mechanical ventilation for a brief period and NCPAP (nasal continuous positive airway pressure). A randomized clinical trial was performed, including all newborn infants with diagnosed RDS and a gestational age of 35 weeks or less who were admitted to the NICU of Valiasr hospital. The patients were divided randomly into two groups, CMV (conventional mechanical ventilation) and INSURE. Surfactant administration and consequent long-term mechanical ventilation were used in the first group (CMV group). In the second group (INSURE group), surfactant was administered followed by a short-term period of mechanical ventilation; the infants were then extubated and NCPAP was applied. The comparison included duration of mechanical ventilation and oxygen therapy, IVH (intraventricular hemorrhage), PDA (patent ductus arteriosus), air-leak syndromes, BPD (bronchopulmonary dysplasia) and mortality rate. The need for mechanical ventilation on the 5th day of admission was decreased by 43% (P=0.005) in the INSURE group in comparison to the CMV group. A decline (P=0.01) in the incidence of IVH and PDA was also achieved. Pneumothorax, chronic pulmonary disease and mortality rates were not significantly different between the two groups (P=0.25, P=0.14, P=0.25, respectively). This study indicated that the INSURE method in the treatment of RDS decreases the need for mechanical ventilation and oxygen therapy in preterm neonates. Moreover, relevant complications such as IVH and PDA were observed to be reduced. Thus, it seems rational to

  14. Comparison of green method for chitin deacetylation

    Science.gov (United States)

    Anwar, Muslih; Anggraeni, Ayu Septi; Amin, M. Harisuddin Al

    2017-03-01

    Developing highly environmentally friendly and cost-effective approaches for chitosan production is of paramount importance for future technology. The deacetylation process is one of the most important steps determining the quality of chitosan. This research aimed to find the best method for deacetylation of chitin, considering several factors such as base concentration, temperature, time and reaction method. From the green chemistry point of view, the conventional refluxing method wastes energy relative to other methods such as maceration, grinding and sonication. The degree of deacetylation (DD) of chitosan obtained by sonication increased slightly from 73.14 to 73.28% as the time increased from 0.5 h to 1 h. Deacetylation of chitin with sodium hydroxide concentrations of 60, 70 and 80% gave 73.14, 76.36 and 77.88% DD, respectively. Varying the temperature over 40, 60 and 80 °C had only a slight effect, increasing the DD from 67.53 to 72.84 and 73.14%, respectively. With the maceration method, the DD of chitosan increased significantly from 60.19 to 74.27 and 81.20%, corresponding to NaOH concentrations of 60, 70 and 80%. The solid-phase grinding method for half an hour resulted in 79.49% DD. The application of the ultrasound grinding method not only enhanced deacetylation but also favoured the depolymerization process. Moreover, maceration for 7 days with 80% NaOH can be an alternative method.

  15. Estimating human exposure to perfluoroalkyl acids via solid food and drinks : Implementation and comparison of different dietary assessment methods

    NARCIS (Netherlands)

    Papadopoulou, Eleni; Poothong, Somrutai; Koekkoek, Jacco; Lucattini, Luisa; Padilla-Sánchez, Juan Antonio; Haugen, Margaretha; Herzke, Dorte; Valdersnes, Stig; Maage, Amund; Cousins, Ian T.; Leonards, Pim E.G.; Småstuen Haug, Line

    2017-01-01

    Background Diet is a major source of human exposure to hazardous environmental chemicals, including many perfluoroalkyl acids (PFAAs). Several assessment methods of dietary exposure to PFAAs have been used previously, but there is a lack of comparisons between methods. Aim To assess human exposure

  16. Comparison of sequencing based CNV discovery methods using monozygotic twin quartets.

    Directory of Open Access Journals (Sweden)

    Marc-André Legault

    Full Text Available The advent of high-throughput sequencing methods brings with it a number of technical challenges. Among these is the discovery of copy-number variations (CNVs) using whole-genome sequencing data. CNVs are genomic structural variations defined as a variation in the number of copies of a large genomic fragment, usually more than one kilobase. Here, we aim to compare different CNV calling methods in order to assess their ability to consistently identify CNVs, by comparing the calls in 9 quartets of identical twin pairs. The use of monozygotic twins provides a means of estimating the error rate of each algorithm by observing CNVs that are inconsistently called when considering the rules of Mendelian inheritance and the assumption of an identical genome between twins. The similarity between the calls from the different tools and the advantage of combining call sets were also considered. ERDS and CNVnator obtained the best performance when considering the inherited CNV rate, with means of 0.74 and 0.70, respectively. Venn diagrams were generated to show the agreement between the different algorithms, before and after filtering out familial inconsistencies. This filtering revealed a high number of false positives for CNVer and Breakdancer. A low overall agreement between the methods suggested a high complementarity of the different tools when calling CNVs. The breakpoint sensitivity analysis indicated that CNVnator and ERDS achieved better resolution of CNV borders than the other tools. The highest inherited CNV rate was achieved through the intersection of these two tools (81%). This study showed that ERDS and CNVnator provide good performance on whole-genome sequencing data with respect to CNV consistency across families, CNV breakpoint resolution and CNV call specificity. The intersection of the calls from the two tools would be valuable for CNV genotyping pipelines.
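
    The inherited-CNV-rate check described above can be sketched with set operations, treating calls as exact (chrom, start, end) tuples. Real pipelines match calls by reciprocal overlap rather than exact equality, so this conveys only the idea:

```python
# Sketch of the family-consistency metrics: a child's CNV call counts as
# "inherited" if it appears in at least one parent, and monozygotic twins
# should make identical calls. Calls are toy (chrom, start, end) tuples.

def inherited_rate(child_calls, mother_calls, father_calls):
    parental = set(mother_calls) | set(father_calls)
    if not child_calls:
        return 0.0
    hits = sum(1 for c in child_calls if c in parental)
    return hits / len(child_calls)

mother = {("1", 100, 5000), ("2", 900, 3000)}
father = {("3", 50, 800)}
twin_a = [("1", 100, 5000), ("3", 50, 800), ("7", 10, 90)]  # last call unexplained
twin_b = [("1", 100, 5000), ("3", 50, 800)]

print(inherited_rate(twin_a, mother, father))   # 2 of 3 calls inherited
concordant = set(twin_a) & set(twin_b)          # twin-concordant calls
print(len(concordant))
```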

  17. Comparison of residual strength-grounding damage index diagrams for tankers produced by the ALPS/HULL ISFEM and design formula method

    Directory of Open Access Journals (Sweden)

    Do Kyun Kim

    2013-03-01

    Full Text Available This study compares the Residual ultimate longitudinal strength – grounding Damage index (R-D) diagrams produced by two analysis methods, the ALPS/HULL Intelligent Supersize Finite Element Method (ISFEM) and the design formula (modified Paik and Mansour) method, both used to assess the safety of damaged ships. The comparison includes four types of double-hull oil tankers: Panamax, Aframax, Suezmax and VLCC. The R-D diagrams were calculated for a series of 50 grounding scenarios. The scenarios were efficiently sampled using the Latin Hypercube Sampling (LHS) technique and comprehensively analysed based on ship size. Finally, the two methods were compared by statistically analysing the differences between their grounding damage indices and ultimate longitudinal strength predictions. The findings provide a useful example of how to apply the ultimate longitudinal strength analysis method to grounded ships.
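
    Latin Hypercube Sampling, as used here for the 50 grounding scenarios, divides each input's range into n equal-probability strata, draws exactly one sample per stratum, and shuffles the strata across dimensions. A minimal 2-D sketch:

```python
# Latin Hypercube Sampling sketch: one uniform draw inside each of the
# n strata [i/n, (i+1)/n) per dimension, with strata shuffled so the
# dimensions are decoupled. Toy 2-D example on the unit square.

import random

def lhs(n, dims, rng):
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))   # n points, each with `dims` coordinates

rng = random.Random(42)
points = lhs(10, 2, rng)
# Each dimension hits every stratum exactly once:
strata = sorted(int(p[0] * 10) for p in points)
print(strata)
```

    Compared with plain Monte Carlo, the stratification guarantees that each scenario variable is covered evenly across its range even at small sample sizes.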

  18. Comparison of methods used for estimating pharmacist counseling behaviors.

    Science.gov (United States)

    Schommer, J C; Sullivan, D L; Wiederholt, J B

    1994-01-01

    To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.

  19. Comparison of attitudes related with family planning methods before and after effective family planning counseling.

    Directory of Open Access Journals (Sweden)

    Esra Esim Büyükbayrak

    2010-09-01

    Full Text Available Objective: To evaluate the effect of family planning counseling on the changeover of the family planning method, and to determine participants' level of knowledge of family planning methods and their attitude towards changing methods after counseling. Setting: Kartal Education and Research Hospital obstetrics and gynecology clinic, department of family planning. Patients: 500 consecutive women applying to the family planning department for any reason. Interventions: Effective family planning counseling was given to each participant, and then a questionnaire containing 14 questions was administered face to face. Main Outcome Measures: Attitude towards family planning counseling, comparison of the preferred family planning method before and after counseling, and sociodemographic parameters influencing method choice were studied. Results: 45.2% of the participants had not received family planning counseling before. Knowledge of family planning methods was sufficient in 25.2% of the participants and insufficient in 56.8%, while 18% reported that they had no idea. 57.8% of the participants changed their minds after family planning counseling, and 52.2% changed over from their previous method. 99.4% of the participants said that family planning counseling should be given to every woman. The preferred family planning method before and after counseling differed significantly (p<0.01). Educational level, income and age were found to be influential sociodemographic factors in method preference. Conclusions: Effective family planning counseling was found to have a favorable effect on attitudes towards and knowledge about family planning methods. Modern method usage increases as participants' educational level and income increase.

  20. Comparative study of telmisartan tablets prepared via the wet granulation method and pritor™ prepared using the spray-drying method.

    Science.gov (United States)

    Park, Junsung; Park, Hee Jun; Cho, Wonkyung; Cha, Kwang-Ho; Yeon, Wonki; Kim, Min-Soo; Kim, Jeong-Soo; Hwang, Sung-Joo

    2011-03-01

    The wet granulation method was successfully used to manufacture amorphous telmisartan tablets (CNU) for comparison with the spray-drying method used for Pritor™. Drug crystallinity in the tablets was characterized using differential scanning calorimetry and powder X-ray diffraction, and pharmaceutical properties of the tablets such as hardness, friability, water absorption, and in vitro dissolution at pH 1.2, 4.0, 6.8 and 7.5 were characterized. With regard to water absorption in particular, the CNU tablets showed better performance by maintaining their original structure and absorbing less water. Since both Pritor™ and CNU tablets had similar crystallinity, hardness, and friability, and an f(2) value > 50 in the in vitro dissolution study, the bioequivalence of the CNU tablets should be analyzed in a future in vivo study. Therefore, telmisartan tablets can be produced using a more economical and easier method than that used to produce Pritor™ tablets.
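
    The f(2) value cited above is the standard similarity factor for comparing two dissolution profiles, with f2 ≥ 50 taken as similarity. A minimal sketch of the formula, using made-up profile values rather than the telmisartan data:

    ```python
    import math

    def f2_similarity(ref, test):
        """Similarity factor f2 between two dissolution profiles (% dissolved
        at matched time points): f2 = 50*log10(100 / sqrt(1 + mean squared
        difference)). f2 >= 50 is the usual similarity criterion,
        corresponding to an average difference of about 10% or less."""
        if len(ref) != len(test) or not ref:
            raise ValueError("profiles must be non-empty and the same length")
        mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
        return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mse))

    # Illustrative profiles (% dissolved at four time points) -- invented numbers
    reference = [35, 55, 80, 92]
    candidate = [32, 52, 78, 90]
    print(round(f2_similarity(reference, candidate), 1))   # 78.1 -> similar
    ```

    Identical profiles give f2 = 100; profiles differing by 10% at every point give f2 ≈ 50, which is where the acceptance threshold comes from.
    
    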

  1. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  2. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  3. Comparison of Electronic Data Capture (EDC) with the Standard Data Capture Method for Clinical Trial Data

    Science.gov (United States)

    Walther, Brigitte; Hossin, Safayet; Townend, John; Abernethy, Neil; Parker, David; Jeffries, David

    2011-01-01

    Background Traditionally, clinical research studies rely on collecting data with case report forms, which are subsequently entered into a database to create electronic records. Although well established, this method is time-consuming and error-prone. This study compares four electronic data capture (EDC) methods with the conventional approach with respect to duration of data capture and accuracy. It was performed in a West African setting, where clinical trials involve data collection from urban, rural and often remote locations. Methodology/Principal Findings Three types of commonly available EDC tools were assessed in face-to-face interviews; netbook, PDA, and tablet PC. EDC performance during telephone interviews via mobile phone was evaluated as a fourth method. The Graeco Latin square study design allowed comparison of all four methods to standard paper-based recording followed by data double entry while controlling simultaneously for possible confounding factors such as interview order, interviewer and interviewee. Over a study period of three weeks the error rates decreased considerably for all EDC methods. In the last week of the study the data accuracy for the netbook (5.1%, CI95%: 3.5–7.2%) and the tablet PC (5.2%, CI95%: 3.7–7.4%) was not significantly different from the accuracy of the conventional paper-based method (3.6%, CI95%: 2.2–5.5%), but error rates for the PDA (7.9%, CI95%: 6.0–10.5%) and telephone (6.3%, CI95% 4.6–8.6%) remained significantly higher. While EDC-interviews take slightly longer, data become readily available after download, making EDC more time effective. Free text and date fields were associated with higher error rates than numerical, single select and skip fields. Conclusions EDC solutions have the potential to produce similar data accuracy compared to paper-based methods. Given the considerable reduction in the time from data collection to database lock, EDC holds the promise to reduce research-associated costs

  4. Comparison of electronic data capture (EDC) with the standard data capture method for clinical trial data.

    Directory of Open Access Journals (Sweden)

    Brigitte Walther

    Full Text Available BACKGROUND: Traditionally, clinical research studies rely on collecting data with case report forms, which are subsequently entered into a database to create electronic records. Although well established, this method is time-consuming and error-prone. This study compares four electronic data capture (EDC) methods with the conventional approach with respect to duration of data capture and accuracy. It was performed in a West African setting, where clinical trials involve data collection from urban, rural and often remote locations. METHODOLOGY/PRINCIPAL FINDINGS: Three types of commonly available EDC tools were assessed in face-to-face interviews: netbook, PDA, and tablet PC. EDC performance during telephone interviews via mobile phone was evaluated as a fourth method. The Graeco Latin square study design allowed comparison of all four methods to standard paper-based recording followed by data double entry while controlling simultaneously for possible confounding factors such as interview order, interviewer and interviewee. Over a study period of three weeks the error rates decreased considerably for all EDC methods. In the last week of the study the data accuracy for the netbook (5.1%, CI95%: 3.5-7.2%) and the tablet PC (5.2%, CI95%: 3.7-7.4%) was not significantly different from the accuracy of the conventional paper-based method (3.6%, CI95%: 2.2-5.5%), but error rates for the PDA (7.9%, CI95%: 6.0-10.5%) and telephone (6.3%, CI95%: 4.6-8.6%) remained significantly higher. While EDC interviews take slightly longer, data become readily available after download, making EDC more time effective. Free text and date fields were associated with higher error rates than numerical, single select and skip fields. CONCLUSIONS: EDC solutions have the potential to produce similar data accuracy compared to paper-based methods. Given the considerable reduction in the time from data collection to database lock, EDC holds the promise to reduce research-associated costs.
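
    The Graeco-Latin square design used in both versions of this study pairs every treatment with every blocking label exactly once, so method effects can be separated from row (session) and column (participant) effects. A sketch of the textbook finite-field construction for an odd prime order, using the five data-capture methods from the abstract; the interviewer labels and the layout are illustrative, not the trial's actual plan:

    ```python
    def graeco_latin_square(n, treatments, labels):
        """Order-n Graeco-Latin square for an odd prime n: superimpose two
        orthogonal Latin squares, (i + j) mod n and (i + 2j) mod n, so every
        (treatment, label) pair occurs exactly once in the n x n grid."""
        assert len(treatments) == len(labels) == n
        return [[(treatments[(i + j) % n], labels[(i + 2 * j) % n])
                 for j in range(n)] for i in range(n)]

    methods = ["paper", "netbook", "PDA", "tablet", "phone"]
    interviewers = ["A", "B", "C", "D", "E"]   # hypothetical blocking factor
    square = graeco_latin_square(5, methods, interviewers)

    # All 25 (method, interviewer) pairs are distinct -- the defining property
    pairs = {cell for row in square for cell in row}
    assert len(pairs) == 25
    ```

    Orthogonality holds because the map (i, j) → (i + j, i + 2j) mod 5 is invertible (its determinant, 1, is nonzero mod 5); the same construction works for any odd prime order with distinct nonzero multipliers.
    
    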

  5. A Comparison of Three Methods for Measuring Distortion in Optical Windows

    Science.gov (United States)

    Youngquist, Robert C.; Nurge, Mark A.; Skow, Miles

    2015-01-01

    It's important that imagery seen through large-area windows, such as those used on space vehicles, not be substantially distorted. Many approaches are described in the literature for measuring the distortion of an optical window, but most suffer from either poor resolution or processing difficulties. In this paper a new definition of distortion is presented, allowing accurate measurement using an optical interferometer. This new definition is shown to be equivalent to the definitions provided by the military and the standards organizations. In order to determine the advantages and disadvantages of this new approach, the distortion of an acrylic window is measured using three different methods: image comparison, moiré interferometry, and phase-shifting interferometry.

  6. Comparisons of coded aperture imaging using various apertures and decoding methods

    International Nuclear Information System (INIS)

    Chang, L.T.; Macdonald, B.; Perez-Mendez, V.

    1976-07-01

    The utility of coded aperture γ camera imaging of radioisotope distributions in Nuclear Medicine is in its ability to give depth information about a three dimensional source. We have calculated imaging with Fresnel zone plate and multiple pinhole apertures to produce coded shadows and reconstruction of these shadows using correlation, Fresnel diffraction, and Fourier transform deconvolution. Comparisons of the coded apertures and decoding methods are made by evaluating their point response functions both for in-focus and out-of-focus image planes. Background averages and standard deviations were calculated. In some cases, background subtraction was made using combinations of two complementary apertures. Results using deconvolution reconstruction for finite numbers of events are also given
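
    Correlation decoding, one of the reconstruction methods compared above, can be sketched in 1D: a point source casts a shifted copy of the aperture pattern, and cross-correlating the recorded shadow with the aperture recovers the source position. The random pinhole pattern, array size and source position below are illustrative, not the paper's apertures:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Aperture: random pinhole pattern, half the cells open (values 0/1)
    n = 101
    aperture = (rng.random(n) < 0.5).astype(float)

    # A point source casts a shifted copy of the aperture; an extended
    # source would give a superposition (circular shifts for simplicity)
    source = np.zeros(n)
    source[30] = 1.0
    shadow = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(aperture)))

    # Correlation decoding: cross-correlate the shadow with the aperture.
    # The aperture autocorrelation peaks sharply at zero lag, so the
    # decoded image peaks at the source position.
    decoded = np.real(np.fft.ifft(np.fft.fft(shadow) * np.conj(np.fft.fft(aperture))))
    assert int(np.argmax(decoded)) == 30   # point source recovered
    ```

    The sidelobes around the peak are the background the abstract evaluates; specially constructed patterns (e.g. uniformly redundant arrays) flatten them further than a random pattern does.
    
    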

  7. Comparison of biamperometric and voltammetric method for plutonium and uranium determination in FBTR fuel samples

    International Nuclear Information System (INIS)

    Jayachandran, Kavitha; Gupta, Ruma; Gamare, Jayashree; Kamat, J.V.; Aggarwal, S.K.

    2015-01-01

    Accurate and precise determination of Pu and U in reactor fuel materials is essential for the characterization of the fuel as well as for fissile material accounting. Biamperometric method has been in routine use for Pu and U determination in a variety of nuclear fuel materials in our laboratory for past 25 years. A new methodology based on differential pulse voltammetry at single walled carbon nanotube modified gold electrode has been developed by us for the simultaneous determination of Pu and U in fuel samples. In order to validate the status of the methodologies employed for Pu and U determination, comparison experiments were performed involving Pu and U determination in FBTR fuel samples. Results of these studies are reported in this paper. (author)

  8. ABCD Matrix Method a Case Study

    CERN Document Server

    Seidov, Zakir F; Yahalom, Asher

    2004-01-01

    In the Israeli Electrostatic Accelerator FEL, the distance between the accelerator's end and the wiggler's entrance is about 2.1 m, and the 1.4 MeV electron beam is transported through this space using four similar quadrupoles (FODO channel). The transfer matrix method (ABCD matrix method) was used to simulate the beam transport; a set of programs was written in several programming languages (MATHEMATICA, MATLAB, MATHCAD, MAPLE), and reasonable agreement is demonstrated between experimental results and simulations. Comparison of the ABCD matrix method with direct "numerical experiments" using the EGUN, ELOP, and GPT programs, with and without taking space-charge effects into account, showed the agreement to be good enough as well. Also considered is the inverse problem of finding the emittance of the electron beam at the S1 screen position (before the FODO channel) by using the spot image at the S2 screen position (after the FODO channel) as a function of quad currents. Spot and beam at both screens are described as tilted eel...
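
    The ABCD (ray transfer) matrix formalism itself is compact enough to sketch: each element maps the ray vector (x, x') by a 2×2 matrix, and a beamline is just the product of its element matrices. The drift lengths and focal lengths below are placeholder values, not the EA-FEL FODO parameters:

    ```python
    import numpy as np

    def drift(L):
        """Field-free drift of length L (metres)."""
        return np.array([[1.0, L], [0.0, 1.0]])

    def thin_quad(f):
        """Thin-lens quadrupole: focusing for f > 0, defocusing for f < 0."""
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # Illustrative FODO cell. Beam order: F quad, drift, D quad, drift;
    # matrices compose right to left, so the first element sits rightmost.
    fodo = drift(0.5) @ thin_quad(-0.4) @ drift(0.5) @ thin_quad(0.4)

    # A ray is (x, x'): transverse offset and slope. Transport is matrix
    # multiplication, and det(M) = 1 for any lossless line.
    ray_in = np.array([1e-3, 0.0])      # 1 mm offset, parallel to the axis
    ray_out = fodo @ ray_in
    print(ray_out, np.linalg.det(fodo))
    ```

    The same matrices propagate the beam sigma matrix (M Σ Mᵀ), which is how the inverse emittance problem at the S1/S2 screens is usually posed.
    
    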

  9. Comparison of the GUM and Monte Carlo methods on the flatness uncertainty estimation in coordinate measuring machine

    Directory of Open Access Journals (Sweden)

    Jalid Abdelilah

    2016-01-01

    Full Text Available In the engineering industry, control of manufactured parts is usually done on a coordinate measuring machine (CMM): a sensor mounted at the end of the machine probes a set of points on the surface to be inspected. Data processing is performed subsequently using software, and the result of this measurement process either confirms or rejects the conformity of the part. Measurement uncertainty is a crucial parameter for making the right decisions, and not taking this parameter into account can therefore sometimes lead to aberrant decisions. Determining the measurement uncertainty on a CMM is a complex task owing to the variety of influencing factors. Through this study, we aim to check whether the uncertainty propagation model developed according to the Guide to the Expression of Uncertainty in Measurement (GUM) approach is valid; we present here a comparison of the GUM and Monte Carlo methods. This comparison is made to estimate the flatness deviation of a surface belonging to an industrial part and the uncertainty associated with the measurement result.
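
    The two uncertainty-evaluation routes being compared can be illustrated on a toy measurand. The values and the simple product model below are invented; the paper's flatness model is more involved, but the contrast between the first-order GUM propagation and the Monte Carlo propagation of distributions is the same:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy measurand: area A = L * W from two length measurements.
    # Values and standard uncertainties are illustrative, not from the paper.
    L, u_L = 100.0, 0.05   # mm
    W, u_W = 40.0, 0.08    # mm

    # GUM (first-order law of propagation of uncertainty, uncorrelated inputs):
    # u_A^2 = (dA/dL * u_L)^2 + (dA/dW * u_W)^2
    u_A_gum = np.sqrt((W * u_L) ** 2 + (L * u_W) ** 2)

    # Monte Carlo (GUM Supplement 1): propagate the input distributions
    M = 200_000
    A = rng.normal(L, u_L, M) * rng.normal(W, u_W, M)
    u_A_mc = A.std(ddof=1)

    print(f"GUM: {u_A_gum:.3f} mm^2, MC: {u_A_mc:.3f} mm^2")
    ```

    For a nearly linear model like this the two agree closely; the Monte Carlo route only pulls away from the GUM estimate when the model is strongly nonlinear or the input distributions are far from Gaussian, which is what validation studies like this one probe.
    
    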

  10. Comparison of Invasive and Noninvasive Mechanical Ventilation for Patients with COPD:Randomised Prospective Study

    Directory of Open Access Journals (Sweden)

    Ivo Matic

    2008-01-01

    Full Text Available Acute respiratory failure due to chronic obstructive pulmonary disease (COPD) presents an increasing problem for both health and economics in the modern world. The goal of this study was to compare invasive and noninvasive mechanical ventilation for patients with COPD. A prospective, randomized trial was performed in a multidisciplinary intensive care unit. Of 614 patients requiring mechanical ventilation (MV) for longer than 24 h, after excluding those who didn't meet the inclusion criteria, 72 patients with COPD remained as the research sample. The MV procedure was performed using standard methods, applying two MV methods: invasive MV (IMV) and noninvasive MV (NIMV). Patients were randomized into two groups for MV application using closed, non-transparent envelopes. The comparison was based on patient characteristics; objective parameters 1 h, 4 h, 24 h, and 48 h after admission; and, finally, treatment outcome. In patients with COPD, NIMV had a statistically better outcome than IMV: MV duration NIMV:IMV 102:187 h, p < 0.001; time spent in the ICU 127:233 h, p < 0.001; need for intubation/reintubation 16 (42.1%):34 (100%)/4 (11.8%), p < 0.001; hospital pneumonia 2 (5.3%):18 (52.9%), p = 0.001. Applying strict application protocols, and based on comparison of objective parameters of pulmonary mechanics, biochemistry and, finally, treatment outcome, a clear advantage of the NIMV method was confirmed.

  11. Comparison of sEMG processing methods during whole-body vibration exercise.

    Science.gov (United States)

    Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S

    2015-12-01

    The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied to the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.05), the agreement between the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias, as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
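
    A minimal sketch of the spectral linear interpolation idea, on synthetic data rather than real sEMG: the magnitude spectrum around the vibration frequency and its harmonics is excised and refilled by a straight line between the band edges, keeping the phase. The 30 Hz vibration frequency, band width and signal parameters are assumptions for illustration:

    ```python
    import numpy as np

    def interpolate_harmonics(x, fs, f0, n_harm=4, half_width=1.0):
        """Remove vibration artifacts by spectral linear interpolation:
        replace the magnitude spectrum within +/- half_width Hz of f0 and
        its first n_harm harmonics by linear interpolation from the band
        edges, leaving the phase untouched."""
        X = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        mag, phase = np.abs(X), np.angle(X)
        for k in range(1, n_harm + 1):
            band = (freqs >= k * f0 - half_width) & (freqs <= k * f0 + half_width)
            mag[band] = np.interp(freqs[band], freqs[~band], mag[~band])
        return np.fft.irfft(mag * np.exp(1j * phase), n=len(x))

    # Synthetic "sEMG": broadband noise plus a 30 Hz vibration artifact
    fs, f0 = 1000.0, 30.0
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    emg = rng.standard_normal(t.size)
    contaminated = emg + 2.0 * np.sin(2 * np.pi * f0 * t)
    cleaned = interpolate_harmonics(contaminated, fs, f0)
    # RMS after cleaning falls back close to the artifact-free level
    print(np.std(contaminated), np.std(cleaned), np.std(emg))
    ```

    Unlike a band-stop filter, which zeroes the excised bins and so biases the RMS downward, interpolation refills them at the level of the neighbouring noise floor, which is why the abstract finds it needs no bias correction.
    
    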

  12. Comparison of methods to evaluate the fungal biomass in heating, ventilation, and air-conditioning (HVAC) dust.

    Science.gov (United States)

    Biyeyeme Bi Mve, Marie-Jeanne; Cloutier, Yves; Lacombe, Nancy; Lavoie, Jacques; Debia, Maximilien; Marchand, Geneviève

    2016-12-01

    Heating, ventilation, and air-conditioning (HVAC) systems contain dust that can be contaminated with fungal spores (molds), which may have harmful effects on the respiratory health of the occupants of a building. HVAC cleaning is often based on visual inspection of the quantity of dust, without taking the mold content into account. The purpose of this study is to propose a method to estimate fungal contamination of dust in HVAC systems. Comparisons of different analytical methods were carried out on dust deposited in a controlled-atmosphere exposure chamber. Sixty samples were analyzed using four methods: culture, direct microscopic spore count (DMSC), β-N-acetylhexosaminidase (NAHA) dosing and qPCR. For each method, the limit of detection, replicability, and repeatability were assessed. The Pearson correlation coefficients between the methods were also evaluated. Depending on the analytical method, mean spore concentrations per 100 cm² of dust ranged from 10,000 to 682,000. Limits of detection varied from 120 to 217,000 spores/100 cm². Replicability and repeatability were between 1 and 15%. Pearson correlation coefficients varied from -0.217 to 0.83. The 18S qPCR showed the best sensitivity and precision, as well as the best correlation with the culture method. PCR targets only molds, and a total count of fungal DNA is obtained. Among the methods, mold DNA amplification by qPCR is the method suggested for estimating the fungal content found in dust of HVAC systems.

  13. Experimental Study on the comparison of antibacterial and antioxidant effects between the Bee Venom and Sweet Bee Venom

    OpenAIRE

    Joong chul An; Ki Rok Kwon; Eun Hee Lee; Bae Chun Cha

    2006-01-01

    Objectives: This study was conducted to compare the antibacterial activities and free radical scavenging activity of Bee Venom and Sweet Bee Venom, in which the allergy-causing enzyme is removed. Methods: To evaluate the antibacterial activities of the test samples, gram-negative E. coli and gram-positive St. aureus were compared using the paper disc method. For comparison of the antioxidant effects, DPPH (1,1-diphenyl-2-picrylhydrazyl) free radical scavenging assay and Thiobarbituric Ac...

  14. Optimization and Comparison of ESI and APCI LC-MS/MS Methods: A Case Study of Irgarol 1051, Diuron, and their Degradation Products in Environmental Samples

    Science.gov (United States)

    Maragou, Niki C.; Thomaidis, Nikolaos S.; Koupparis, Michael A.

    2011-10-01

    A systematic and detailed optimization strategy for the development of atmospheric pressure ionization (API) LC-MS/MS methods for the determination of Irgarol 1051, Diuron, and their degradation products (M1, DCPMU, DCPU, and DCA) in water, sediment, and mussel is described. Experimental design was applied for the optimization of the ion sources parameters. Comparison of ESI and APCI was performed in positive- and negative-ion mode, and the effect of the mobile phase on ionization was studied for both techniques. Special attention was drawn to the ionization of DCA, which presents particular difficulty in API techniques. Satisfactory ionization of this small molecule is achieved only with ESI positive-ion mode using acetonitrile in the mobile phase; the instrumental detection limit is 0.11 ng/mL. Signal suppression was qualitatively estimated by using purified and non-purified samples. The sample preparation for sediments and mussels is direct and simple, comprising only solvent extraction. Mean recoveries ranged from 71% to 110%, and the corresponding (%) RSDs ranged between 4.1 and 14%. The method limits of detection ranged between 0.6 and 3.5 ng/g for sediment and mussel and from 1.3 to 1.8 ng/L for sea water. The method was applied to sea water, marine sediment, and mussels, which were obtained from marinas in Attiki, Greece. Ion ratio confirmation was used for the identification of the compounds.

  15. Comparison of PCR with Standard Method (MPN) for detection of bacterial contamination in drinking water

    Directory of Open Access Journals (Sweden)

    Fatemeh Dehghan

    2014-11-01

    Full Text Available Background: Detection of bacterial contamination in drinking water by the culture method is time- and cost-consuming, taking a few days depending on the degree of contamination; meanwhile, people continue to use the tap water during that time. Molecular methods are rapid and sensitive. In this study a rapid multiplex PCR method was used for the simultaneous analysis of both coliform bacteria and E. coli, and the probable detection of VBNC (viable but non-culturable) bacteria, in drinking water; the experiments were performed in the bacteriological lab of the Water and Wastewater Corporation in Markazi province. Material and Methods: Amplification of a fragment from each of the lacZ and uidA genes in a multiplex PCR was used for detection of coliforms. Eighty samples from the Arak drinking water system, comprising 36 samples from wells, 41 from the water distribution network and 3 from water storages, were examined by amplification of the lacZ and uidA genes in a multiplex PCR. In parallel, the MPN test was applied to all samples as the standard method for comparison of results. Standard bacteria, pure bacteria isolated from positive MPN cultures, and CRM were examined by both the PCR and MPN methods. Results: The results for most samples of the water network, water storages and wells were the same with both the MPN and PCR methods. The results for standard bacteria and for pure cultures of bacteria isolated from positive MPN and CRM confirmed the PCR method. Five samples were positive by PCR but negative by the MPN method. The duration of the PCR was decreased by about 105 min by changing the PCR program and electrophoresis factors. Conclusion: Multiplex PCR can detect coliform bacteria and E. coli simultaneously in drinking water.

  16. Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.

    Science.gov (United States)

    Gremba, Allison; Weinberg, Seth M

    2018-05-09

    We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) to capture digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods) suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
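
    The technical error of measurement used above as an agreement statistic follows the classic Dahlberg formula for paired repeat measurements. A sketch with invented digit-length values, not the study's data:

    ```python
    import math

    def technical_error_of_measurement(m1, m2):
        """TEM for paired repeat measurements, in the units of the input:
        sqrt(sum(d^2) / (2n)), where d are the pairwise differences.
        Lower values mean better intraobserver agreement."""
        if len(m1) != len(m2) or not m1:
            raise ValueError("need two equal-length, non-empty series")
        ss = sum((a - b) ** 2 for a, b in zip(m1, m2))
        return math.sqrt(ss / (2 * len(m1)))

    # Illustrative repeated digit-length measurements in mm (made up)
    trial1 = [70.1, 68.4, 72.0, 69.5, 71.2]
    trial2 = [70.4, 68.1, 71.8, 69.9, 71.0]
    tem = technical_error_of_measurement(trial1, trial2)
    print(round(tem, 3))   # 0.205 -> well under the 1 mm reported threshold
    ```

    Because TEM is in measurement units, it complements the unitless ICC: a method can rank subjects consistently (high ICC) while still carrying an absolute error that matters for a ratio of two short lengths.
    
    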

  17. Comparison of two uncertainty dressing methods: SAD VS DAD

    Science.gov (United States)

    Chardon, Jérémy; Mathevet, Thibault; Le-Lay, Matthieu; Gailhard, Joël

    2014-05-01

    Hydrological Ensemble Prediction Systems (HEPSs) allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts. An operational HEPS has been developed at EDF (the French electricity producer) since 2008 and has been used since 2010 on about a hundred watersheds in France. Depending on the hydro-meteorological situation, streamflow forecasts can be issued on a daily basis and are used to help dam management operations during floods or dam works within the river. Part of this HEPS is a streamflow ensemble post-processing step, where considerable human expertise is solicited. The aim of post-processing methods is to achieve better overall performance by dressing hydrological ensemble forecasts with hydrological model uncertainties. The present study compares two post-processing methods, both based on a logarithmic representation of the residuals distribution of the Rainfall-Runoff (RR) model under "perfect" forcing forecasts - i.e. forecasts with observed meteorological variables as inputs. The only difference between the two post-processing methods lies in the sampling of the perfect forcing forecasts for the estimation of the residuals statistics: (i) the first method, referred to here as the Statistical Analogy Dressing (SAD) model and used in the operational HEPS, estimates the statistics of the residuals beforehand on streamflow sub-samples by quantile class and lead time, since RR model residuals are not homoscedastic; (ii) an alternative method, referred to as the Dynamical Analogy Dressing (DAD) model, estimates the statistics of the residuals using the N most similar perfect forcing forecasts, the selection of these N forecasts being based on streamflow range and variation. On a set of 20 watersheds used for operational forecasts, both models were evaluated with perfect forcing forecasts and with ensemble forecasts. Results show that both approaches ensure a good post-processing of

  18. The health benefits of yoga and exercise: a review of comparison studies.

    Science.gov (United States)

    Ross, Alyson; Thomas, Sue

    2010-01-01

    Exercise is considered an acceptable method for improving and maintaining physical and emotional health. A growing body of evidence supports the belief that yoga benefits physical and mental health via down-regulation of the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic nervous system (SNS). The purpose of this article is to provide a scholarly review of the literature regarding research studies comparing the effects of yoga and exercise on a variety of health outcomes and health conditions. Using PubMed® and the key word "yoga," a comprehensive search of the research literature from core scientific and nursing journals yielded 81 studies that met inclusion criteria. These studies subsequently were classified as uncontrolled (n = 30), wait list controlled (n = 16), or comparison (n = 35). The most common comparison intervention (n = 10) involved exercise. These studies were included in this review. In the studies reviewed, yoga interventions appeared to be equal or superior to exercise in nearly every outcome measured except those involving physical fitness. The studies comparing the effects of yoga and exercise seem to indicate that, in both healthy and diseased populations, yoga may be as effective as or better than exercise at improving a variety of health-related outcome measures. Future clinical trials are needed to examine the distinctions between exercise and yoga, particularly how the two modalities may differ in their effects on the SNS/HPA axis. Additional studies using rigorous methodologies are needed to examine the health benefits of the various types of yoga.

  19. Comparison between two braking control methods integrating energy recovery for a two-wheel front driven electric vehicle

    International Nuclear Information System (INIS)

    Itani, Khaled; De Bernardinis, Alexandre; Khatir, Zoubir; Jammal, Ahmad

    2016-01-01

    Highlights: • Comparison between two braking methods for an EV maximizing the energy recovery. • Wheels slip ratio control based on robust sliding mode and ECE R13 control methods. • Regenerative braking control strategy. • Energy recovery of a HESS with respect to road surface type and road condition. - Abstract: This paper presents the comparison between two braking methods for a two-wheel front driven Electric Vehicle maximizing the energy recovery on the Hybrid Energy Storage System. The first method consists in controlling the wheels slip ratio while braking using a robust sliding mode controller. The second method will be based on ECE R13H constraints for an M1 passenger vehicle. The vehicle model used for simulation is a simplified five degrees of freedom model. It is driven by two 30 kW permanent magnet synchronous motor (PMSM) recovering energy during braking phases. Several simulation results for extreme braking conditions will be performed and compared on various road type surfaces using Matlab/Simulink®. For an initial speed of 80 km/h, simulation results demonstrate that the difference of energy recovery efficiency between the two control braking methods is beneficial to the ECE constraints control method and it can vary from 3.7% for high friction road type to 11.2% for medium friction road type. At low friction road type, the difference attains 6.6% due to different reasons treated in the paper. The stability deceleration is also discussed and detailed.

  20. Comparisons of peak-search and photopeak-integration methods in the computer analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Baedecker, P.A.

    1980-01-01

    Myriad methods have been devised for extracting quantitative information from gamma-ray spectra by means of a computer, and a critical evaluation of the relative merits of the various programs that have been written would represent a Herculean, if not an impossible, task. The results from the International Atomic Energy Agency (IAEA) intercomparison, which may represent the most straightforward approach to making such an evaluation, showed a wide range in the quality of the results - even among laboratories where similar methods were used. The most clear-cut way of differentiating between programs is by the method used to evaluate peak areas: by the iterative fitting of the spectral features to an often complex model, or by a simple summation procedure. Previous comparisons have shown that relatively simple algorithms can compete favorably with fitting procedures, although fitting holds the greatest promise for the detection and measurement of complex peaks. However, fitting algorithms, which are generally complex and time consuming, are often ruled out by practical limitations based on the type of computing equipment available, cost limitations, the number of spectra to be processed in a given time period, and the ultimate goal of the analysis. Comparisons of methods can be useful, however, in helping to illustrate the limitations of the various algorithms that have been devised. This paper presents a limited review of some of the more common peak-search and peak-integration methods, along with Peak-search procedures

  1. Numerical Study of Two-Dimensional Volterra Integral Equations by RDTM and Comparison with DTM

    Directory of Open Access Journals (Sweden)

    Reza Abazari

    2013-01-01

    Full Text Available The two-dimensional Volterra integral equations are solved using a more recent semianalytic method, the reduced differential transform method (the so-called RDTM), and compared with the differential transform method (DTM). The concepts of DTM and RDTM are briefly explained, and their application to the two-dimensional Volterra integral equations is studied. The results obtained by DTM and RDTM are compared with the exact solution. As an important result, it is shown that the RDTM results are more accurate than those obtained by DTM applied to the same Volterra integral equations. The numerical results reveal that the RDTM is very effective, convenient, and quite accurate for this kind of nonlinear integral equation. It is expected that the RDTM will prove widely applicable in the engineering sciences.

  2. Comparison is key.

    Science.gov (United States)

    Stone, Mark H; Stenner, A Jackson

    2014-01-01

    Several concepts from Georg Rasch's last papers are discussed. The key one is comparison because Rasch considered the method of comparison fundamental to science. From the role of comparison stems scientific inference made operational by a properly developed frame of reference producing specific objectivity. The exact specifications Rasch outlined for making comparisons are explicated from quotes, and the role of causality derived from making comparisons is also examined. Understanding causality has implications for what can and cannot be produced via Rasch measurement. His simple examples were instructive, but the implications are far reaching upon first establishing the key role of comparison.

  3. A Comparison Study on the Integrated Risk Estimation for Various Power Systems

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Ha, J. J.; Kim, S. H.; Jeong, J. T.; Min, K. R.; Kim, K. Y.

    2007-06-01

    The objective of this study is to establish a system for the comparative analysis of the environmental impacts, risks, health effects, and social acceptance of various electricity generation systems, together with a computational framework and the necessary databases. In this study, the second phase of the nuclear research and development program (2002-2004), the methodologies for the comparative analysis of the environmental impacts, risks, and health effects of various electricity generation systems were investigated and applied to reference power plants. The life cycle assessment (LCA) methodology was adopted as a comparative analysis tool for the environmental impacts and applied to fossil-fueled and nuclear power plants. The scope of the analysis considered in this study covers the construction, operation (fuel cycle), and demolition of each power generation system. In the risk analysis part, empirical and analytical methods were adopted and applied to fossil-fueled and nuclear power plants. In the empirical risk assessment part, we collected historical records of worldwide energy-related accidents with fatalities over the last 30 years. The risks for nuclear power plants, which have potential releases of radioactive materials, were estimated in a probabilistic way (PSA) by considering the occurrence of severe accidents, and compared with the risks of other electricity generation systems. The health effects (estimated as external cost) resulting from the operation of nuclear, coal, and hydro power systems were estimated and compared by using the program developed by the IAEA. Regarding a comprehensive comparison of the various power systems, the analytic hierarchy process (AHP) method is introduced to aggregate the diverse information under conflicting decision criteria. Social aspect is treated by a web

  4. A comparison of two methods of measuring static coefficient of friction at low normal forces: a pilot study.

    Science.gov (United States)

    Seo, Na Jin; Armstrong, Thomas J; Drinkaus, Philip

    2009-01-01

    This study compares two methods for estimating static friction coefficients for skin. In the first method, referred to as the 'tilt method', a hand supporting a flat object is tilted until the object slides. The friction coefficient is estimated as the tangent of the angle of the object at slip. The second method estimates the friction coefficient as the pull force required to begin moving a flat object over the surface of the hand, divided by object weight. Both methods were used to estimate friction coefficients for 12 subjects and three materials (cardboard, aluminium, rubber) against a flat hand and against fingertips. No differences in static friction coefficients were found between the two methods, except for rubber, where the friction coefficient was 11% greater for the tilt method. As with previous studies, the friction coefficients varied with contact force and contact area. Static friction coefficient data are needed for analysis and design of objects that are grasped or manipulated with the hand. The tilt method described in this study can easily be used by ergonomic practitioners to estimate static friction coefficients in the field in a timely manner.
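The two estimates described above reduce to one-line formulas: the tilt method takes the tangent of the tilt angle at slip, and the pull method divides pull force by object weight. A minimal sketch in Python, using made-up readings (the angle and forces are hypothetical, not the study's data):

```python
import math

def mu_tilt(theta_deg):
    # Tilt method: static friction coefficient = tan(tilt angle at slip).
    return math.tan(math.radians(theta_deg))

def mu_pull(pull_force_n, weight_n):
    # Pull method: pull force at incipient motion divided by object weight.
    return pull_force_n / weight_n

# Illustrative readings: slip at 35 degrees; 3.5 N pull on a 5 N object.
print(round(mu_tilt(35.0), 3))       # 0.7
print(round(mu_pull(3.5, 5.0), 3))   # 0.7
```

Both estimators target the same quantity, so consistent readings should give similar coefficients, as the study found for most materials.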

  5. Simulation study on unfolding methods for diagnostic X-rays and mixed gamma rays

    International Nuclear Information System (INIS)

    Hashimoto, Makoto; Ohtaka, Masahiko; Ara, Kuniaki; Kanno, Ikuo; Imamura, Ryo; Mikami, Kenta; Nomiya, Seiichiro; Onabe, Hideaki

    2009-01-01

    A photon detector operating in current mode that can sense X-ray energy distribution has been reported. This detector consists of a row of several segment detectors. The energy distribution is derived using an unfolding technique. In this paper, comparisons of the unfolding techniques among error reduction, spectrum surveillance, and neural network methods are discussed through simulation studies on the detection of diagnostic X-rays and gamma rays emitted by a mixture of 137Cs and 60Co. For diagnostic X-ray measurement, the spectrum surveillance and neural network methods appeared promising, while the error reduction method yielded poor results. However, in the case of measuring mixtures of gamma rays, the error reduction method was both sufficient and effective. (author)

  6. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further
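Several of the metrics named above (F1 score, Cohen's kappa, Matthews correlation coefficient) can all be computed from a binary confusion matrix. A minimal pure-Python sketch with illustrative labels (not the study's data):

```python
import math

def confusion(y_true, y_pred):
    # Counts for a binary confusion matrix: TP, TN, FP, FN.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def f1(tp, tn, fp, fn):
    # Harmonic mean of precision and recall.
    return 2 * tp / (2 * tp + fp + fn)

def cohens_kappa(tp, tn, fp, fn):
    # Agreement corrected for chance agreement.
    n = tp + tn + fp + fn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return (po - pe) / (1 - pe)

def mcc(tp, tn, fp, fn):
    # Matthews correlation coefficient; 0.0 when any marginal is empty.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion(y_true, y_pred)
print(f1(tp, tn, fp, fn), cohens_kappa(tp, tn, fp, fn), mcc(tp, tn, fp, fn))
# prints: 0.75 0.5 0.5
```

Kappa and MCC are chance-corrected, which is why studies like this one prefer them over raw accuracy for imbalanced screening data.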

  7. Comparative study of discretization methods of microarray data for inferring transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Ji Wei

    2010-10-01

    Full Text Available Abstract Background Microarray data discretization is a basic preprocessing step for many algorithms of gene regulatory network inference. Some common discretization methods in informatics are used to discretize microarray data. Selection of the discretization method is often arbitrary, and no systematic comparison of different discretization methods has been conducted in the context of gene regulatory network inference from time series gene expression data. Results In this study, we propose a new discretization method, "bikmeans", and compare its performance with four other widely used discretization methods using different datasets, modeling algorithms and numbers of intervals. Sensitivities, specificities and total accuracies were calculated and statistical analysis was carried out. The bikmeans method always gave high total accuracies. Conclusions Our results indicate that proper discretization methods can consistently improve gene regulatory network inference independent of network modeling algorithms and datasets. Our new method, bikmeans, resulted in significantly better total accuracies than the other methods.
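The abstract does not specify the bikmeans algorithm itself, but the preprocessing step being compared is easy to illustrate: continuous expression values are mapped to a small number of discrete intervals. A minimal sketch of one generic variant, equal-width discretization (an illustration only, not the paper's method):

```python
def equal_width_discretize(values, k):
    # Map continuous expression values to k equal-width intervals,
    # labelled 0..k-1. The top boundary value is clamped into bin k-1.
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against a constant series
    return [min(int((v - lo) / width), k - 1) for v in values]

expr = [0.1, 0.4, 0.35, 0.9, 0.55, 0.2]  # hypothetical expression series
print(equal_width_discretize(expr, 3))   # [0, 1, 0, 2, 1, 0]
```

Alternatives compared in such studies differ mainly in how the interval boundaries are chosen (equal width, equal frequency, clustering-based); the downstream inference algorithm then sees only the integer labels.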

  8. Suitability of voltage stability study methods for real-time assessment

    DEFF Research Database (Denmark)

    Perez, Angel; Jóhannsson, Hjörtur; Vancraeyveld, Pieter

    2013-01-01

    This paper analyzes the suitability of existing methods for long-term voltage stability assessment for real-time operation. An overview of the relevant methods is followed by a comparison that takes into account their accuracy, computational efficiency and characteristics when used for security assessment. The results enable an evaluation of the run time of each method with respect to the number of inputs. Furthermore, the results assist in identifying which of the methods is most suitable for real-time operation in future power systems with production based on fluctuating energy sources.

  9. Improvement of two calculation methods of the detectors activation and on the PWR 900 MWe vessel. Comparison with the experiment

    International Nuclear Information System (INIS)

    Kitsos, S.

    1992-11-01

    The conditions of vessel ageing (damage through irradiation), which largely determine the lifetime of a nuclear reactor, depend on the dose received. For the determination of this dose we use two calculation methods: an exact method using the TRIPOLI code, which solves the Boltzmann equation with the Monte-Carlo method, and a simplified method based on the point-kernel approach. The advantages of the second method, which is fast (results are easily reproduced) and deterministic (difference effects can be evaluated), compared to the first, which is slow and statistical, make its development necessary. The qualification of these methods is done by comparison with experiment, as follows: for the first method, we check the programming of the TRIPOLI code (representativity of the collisions) and its agreement with SN codes; for the second, we modify the method to use variable linear attenuation coefficients inside each medium, to better represent the effects of the spectrum and of reflection. As part of the checking of the basic physical data and their mode of representation, we present a study of the influence of the energy-group averaging of the cross sections and of the number of groups, as well as a study of the influence of the origin of the cross sections
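The simplified method named above is based on the point-kernel approach, whose textbook form attenuates an isotropic point source exponentially along the ray and spreads it over a sphere. A sketch of that standard formula (not the paper's implementation; the numbers are hypothetical):

```python
import math

def point_kernel_flux(source, mu, r, buildup=1.0):
    # Point-kernel estimate of the flux at distance r from an isotropic
    # point source of strength `source` behind an attenuating medium:
    #     phi = B * S * exp(-mu * r) / (4 * pi * r^2)
    # mu is the linear attenuation coefficient; B is a buildup factor
    # accounting for scattered radiation (1.0 = uncollided flux only).
    return buildup * source * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

# Hypothetical case: 1e10 photons/s source, mu = 0.2 /cm, 30 cm of material.
print(point_kernel_flux(1e10, mu=0.2, r=30.0))
```

The paper's refinement of letting mu vary within each medium corresponds to replacing the single `mu * r` term with an integral of mu along the ray.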

  10. Method and apparatus for biological sequence comparison

    Science.gov (United States)

    Marr, T.G.; Chang, W.I.

    1997-12-23

    A method and apparatus are disclosed for comparing biological sequences from a known source of sequences, with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level, and are long enough to be statistically significant. The invention device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provide an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence. 5 figs.

  11. Comparison of some effects of modification of a polylactide surface layer by chemical, plasma, and laser methods

    Energy Technology Data Exchange (ETDEWEB)

    Moraczewski, Krzysztof, E-mail: kmm@ukw.edu.pl [Department of Materials Engineering, Kazimierz Wielki University, Department of Materials Engineering, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Rytlewski, Piotr [Department of Materials Engineering, Kazimierz Wielki University, Department of Materials Engineering, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Malinowski, Rafał [Institute for Engineering of Polymer Materials and Dyes, ul. M. Skłodowskiej–Curie 55, 87-100 Toruń (Poland); Żenkiewicz, Marian [Department of Materials Engineering, Kazimierz Wielki University, Department of Materials Engineering, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland)

    2015-08-15

    Highlights: • We modified the polylactide surface layer with chemical, plasma or laser methods. • We tested selected properties and the surface structure of the modified samples. • We found that the plasma treatment appears to be the most beneficial. - Abstract: The article presents the results of studies and a comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in an air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with a fluence of 60 mJ/cm² was applied. Polylactide samples were examined by using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of the surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of the chemical composition of the PLA surface layer were performed as well. Based on these results it was found that the best modification results are obtained using the plasma method.

  12. A comparative study of three different gene expression analysis methods.

    Science.gov (United States)

    Choe, Jae Young; Han, Hyung Soo; Lee, Seon Duk; Lee, Hanna; Lee, Dong Eun; Ahn, Jae Yun; Ryoo, Hyun Wook; Seo, Kang Suk; Kim, Jong Kun

    2017-12-04

    TNF-α regulates immune cells and acts as an endogenous pyrogen. Reverse transcription polymerase chain reaction (RT-PCR) is one of the most commonly used methods for gene expression analysis. Among the alternatives to PCR, loop-mediated isothermal amplification (LAMP) shows good potential in terms of specificity and sensitivity. However, few studies have compared RT-PCR and LAMP for human gene expression analysis. Therefore, in the present study, we compared one-step RT-PCR, two-step RT-LAMP and one-step RT-LAMP for human gene expression analysis. We compared the three gene expression analysis methods using the human TNF-α gene as a biomarker from peripheral blood cells. Total RNA from the three selected febrile patients was subjected to the three different methods of gene expression analysis. In the comparison of the three methods, the detection limits of one-step RT-PCR and one-step RT-LAMP were the same, while that of two-step RT-LAMP was inferior. One-step RT-LAMP takes less time, and the experimental result is easy to determine. One-step RT-LAMP is a potentially useful and complementary tool that is fast and reasonably sensitive. In addition, one-step RT-LAMP could be useful in environments lacking specialized equipment or expertise.

  13. Comparison of Methods for the Simultaneous Analysis of Bioactive Compounds of Eurycoma longifolia Jack Using Different Analysis Methods

    International Nuclear Information System (INIS)

    Salmah Moosa; Sobri Hussein; Rusli Ibrahim; Maizatul Akmam Md Nasir

    2011-01-01

    Eurycoma longifolia Jack (Tongkat Ali; genus Eurycoma, family Simaroubaceae) is one of the most popular tropical herbal plants. The plant contains a series of quassinoids, which are mainly responsible for its bitter taste. Extracts of the plant, especially of the roots, are traditionally used for enhancing testosterone levels in men. The roots have also been used in indigenous traditional medicines for their unique anti-malarial, anti-pyretic, antiulcer, cytotoxic and aphrodisiac properties. As part of on-going research on the bioactive compounds of Eurycoma longifolia, an evaluation of an optimized analysis method and of the parameters that influence LC-MS analysis was carried out. Identification of the bioactive compounds was based on comparison of calculated retention times and mass spectral data with literature values. Examination of the Eurycoma longifolia samples showed some variation and differences depending on the LC-MS parameters. The combined method using methanol as the solvent, an injection volume of 1.0 μL, analysis in ultra-scan mode and acetic acid as the acidic modifier is the optimum method for LC-MS analysis of Eurycoma longifolia, because it detected the optimum mass of compounds with good resolution and complete separation within a short analysis time. (author)

  14. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
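The three kernel families compared above (linear, polynomial, RBF) are each just a function of two feature vectors; the SVM uses them as inner products in the implicit higher-dimensional space. A minimal sketch with arbitrary parameter choices (degree, offset, and gamma are illustrative, not the paper's settings):

```python
import math

def linear_kernel(x, y):
    # Plain dot product: no implicit mapping.
    return sum(a * b for a, b in zip(x, y))

def poly_kernel(x, y, degree=3, c=1.0):
    # Polynomial kernel: (x.y + c)^degree.
    return (linear_kernel(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian RBF kernel: exp(-gamma * ||x - y||^2).
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

x, y = [1.0, 2.0], [2.0, 0.0]
print(linear_kernel(x, y), poly_kernel(x, y), rbf_kernel(x, y))
# linear = 2.0, poly = 27.0, rbf = exp(-2.5) ≈ 0.082
```

The RBF kernel's advantage in this study is typical: it can separate classes that are not linearly separable in the raw polarimetric feature space, at the cost of tuning gamma.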

  15. A method comparison of photovoice and content analysis: research examining challenges and supports of family caregivers.

    Science.gov (United States)

    Faucher, Mary Ann; Garner, Shelby L

    2015-11-01

    The purpose of this manuscript is to compare the methods and thematic representations of the challenges and supports of family caregivers identified with photovoice methodology contrasted with content analysis, a more traditional qualitative approach. Results from a photovoice study utilizing a participatory action research framework were compared to an analysis of the audio-transcripts from that study utilizing content analysis methodology. Major similarities between the results are identified, with some notable differences. Content analysis provides a more in-depth and abstract elucidation of the nature of the challenges and supports of the family caregiver. The comparison provides evidence to support the trustworthiness of photovoice methodology, with limitations identified. The enhanced elaboration of themes and categories with content analysis may have some advantages relevant to the utilization of this knowledge by health care professionals. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Comments on the paper 'Optical properties of bovine muscle tissue in vitro; a comparison of methods'

    International Nuclear Information System (INIS)

    Marchesini, R.

    1999-01-01

    In reply to R. Marchesini's comments that optical values derived by himself and other authors given in the paper entitled 'Optical properties of bovine muscle tissue in vitro; a comparison of methods' were incorrectly cited, the author, J.R. Zijp, apologizes for this mistake and explains the reasons for this misinterpretation. Letter-to-the-editor

  17. A comparison of methods in estimating soil water erosion

    Directory of Open Access Journals (Sweden)

    Marisela Pando Moreno

    2012-02-01

    Full Text Available A comparison between direct field measurements and predictions of soil water erosion using two variants (FAO and R/2 index) of the Revised Universal Soil Loss Equation (RUSLE) was carried out in a microcatchment of 22.32 km² in Northeastern Mexico. Direct field measurements were based on a geomorphologic classification of the area, while environmental units were defined for applying the equation. Environmental units were later grouped within geomorphologic units to compare results. For the basin as a whole, erosion rates from the FAO index were statistically equal to those measured in the field, while values obtained from the R/2 index were statistically different from the rest and overestimated erosion. However, when comparing among geomorphologic units, erosion appeared overestimated in steep units and underestimated in flatter areas. The most remarkable differences in erosion rates between the direct and FAO methods were for those units where gullies have developed. In these cases, erosion was underestimated by the FAO index. Hence, it is suggested that a weighted factor for the presence of gullies should be developed and included in the RUSLE equation.

  18. Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)

    2009-03-15

    Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between various traits and scoring for each trait has been worked out by using the method of paired comparisons for the selection of CPP in J. curcas L. The most important ones are seed and oil yields. (author)
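The method of paired comparisons described above ranks traits by presenting them two at a time and scoring each trait by the number of pairwise contests it wins. A minimal sketch with hypothetical trait names and a stand-in preference function (the real method elicits preferences from expert judgments, not a fixed rule):

```python
from itertools import combinations

def paired_comparison_scores(traits, prefer):
    # Score each trait by how many of its pairwise comparisons it wins.
    # `prefer(a, b)` must return the preferred member of the pair.
    wins = {t: 0 for t in traits}
    for a, b in combinations(traits, 2):
        wins[prefer(a, b)] += 1
    return sorted(wins.items(), key=lambda kv: -kv[1])

# Hypothetical traits with a fixed priority standing in for expert judgment.
priority = ["seed yield", "oil yield", "branching", "height"]
prefer = lambda a, b: a if priority.index(a) < priority.index(b) else b
print(paired_comparison_scores(priority, prefer))
```

With n traits there are n*(n-1)/2 comparisons, so the resulting win counts give a complete relative ranking that can then weight each candidate plant's trait scores.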

  19. Comparison of seven methods for producing Affymetrix expression scores based on False Discovery Rates in disease profiling data

    Directory of Open Access Journals (Sweden)

    Gruber Stephen B

    2005-02-01

    Full Text Available Abstract Background A critical step in processing oligonucleotide microarray data is combining the information in multiple probes to produce a single number that best captures the expression level of a RNA transcript. Several systematic studies comparing multiple methods for array processing have used tightly controlled calibration data sets as the basis for comparison. Here we compare performances for seven processing methods using two data sets originally collected for disease profiling studies. An emphasis is placed on understanding sensitivity for detecting differentially expressed genes in terms of two key statistical determinants: test statistic variability for non-differentially expressed genes, and test statistic size for truly differentially expressed genes. Results In the two data sets considered here, up to seven-fold variation across the processing methods was found in the number of genes detected at a given false discovery rate (FDR. The best performing methods called up to 90% of the same genes differentially expressed, had less variable test statistics under randomization, and had a greater number of large test statistics in the experimental data. Poor performance of one method was directly tied to a tendency to produce highly variable test statistic values under randomization. Based on an overall measure of performance, two of the seven methods (Dchip and a trimmed mean approach are superior in the two data sets considered here. Two other methods (MAS5 and GCRMA-EB are inferior, while results for the other three methods are mixed. Conclusions Choice of processing method has a major impact on differential expression analysis of microarray data. Previously reported performance analyses using tightly controlled calibration data sets are not highly consistent with results reported here using data from human tissue samples. Performance of array processing methods in disease profiling and other realistic biological studies should be

  20. On dose distribution comparison

    International Nuclear Information System (INIS)

    Jiang, Steve B; Sharp, Greg C; Neicu, Toni; Berbeco, Ross I; Flampouri, Stella; Bortfeld, Thomas

    2006-01-01

    In radiotherapy practice, one often needs to compare two dose distributions. Especially with the wide clinical implementation of intensity-modulated radiation therapy, software tools for quantitative dose (or fluence) distribution comparison are required for patient-specific quality assurance. Dose distribution comparison is not a trivial task since it has to be performed in both dose and spatial domains in order to be clinically relevant. Each of the existing comparison methods has its own strengths and weaknesses and there is room for improvement. In this work, we developed a general framework for comparing dose distributions. Using a new concept called maximum allowed dose difference (MADD), the comparison in both dose and spatial domains can be performed entirely in the dose domain. Formulae for calculating MADD values for various comparison methods, such as composite analysis and gamma index, have been derived. For convenience in clinical practice, a new measure called normalized dose difference (NDD) has also been proposed, which is the dose difference at a point scaled by the ratio of MADD to the predetermined dose acceptance tolerance. Unlike the simple dose difference test, NDD works in both low and high dose gradient regions because it considers both dose and spatial acceptance tolerances through MADD. The new method has been applied to a test case and a clinical example. It was found that the new method combines the merits of the existing methods (accurate, simple, clinically intuitive and insensitive to dose grid size) and can easily be implemented into any dose/intensity comparison tool
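One reading of the NDD definition above is a dose difference rescaled so that the MADD-based acceptance test reduces to a single dose tolerance at every point, in both low- and high-gradient regions. A minimal sketch under that reading (MADD is taken as a precomputed per-point value here; the paper derives it from the chosen comparison criteria, and all numbers are illustrative):

```python
def ndd(d_eval, d_ref, madd, dose_tol):
    # Normalized dose difference: the raw dose difference scaled by the
    # ratio of the dose tolerance to MADD, so a point passes when
    # |NDD| <= dose_tol regardless of the local dose gradient.
    return (d_eval - d_ref) * dose_tol / madd

# Hypothetical point: 5% raw dose difference, MADD of 0.10 (gradient-widened),
# 3% dose tolerance.
print(round(ndd(1.05, 1.00, madd=0.10, dose_tol=0.03), 6))  # 0.015 -> passes
```

In a flat region MADD collapses to the dose tolerance itself and NDD equals the raw dose difference, which matches the claim that NDD behaves like a simple dose-difference test away from gradients.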

  1. Comparison of methods for the determination of reduced inorganic sulphur in acid sulphate soils

    International Nuclear Information System (INIS)

    Santomartino, S.L.

    1999-01-01

    Full text: The management of acid sulphate soils requires analytical methods that provide accurate data on the quantity of reduced inorganic sulphur within a soil, as it is this fraction that produces acid upon oxidation. This study uses sulphidic Coode Island Silt samples to compare common analytical methods, including POCAS (Peroxide Oxidation-Combined Acidity and Sulphate), which consists of TSA (Total Sulphidic Acidity), SPOS (Peroxide Oxidisable Sulphur) and TOS (Total Oxidisable Sulphur), and chromium-reducible sulphur. The determination of total sulphur by Leco analysis is strongly correlated with, but slightly less than, that analysed by XRF. Comparison of soil sulphide content by the chromium-reducible sulphur, TSA and TOS methods indicates that TOS values are substantially higher than those of both other methods. The problem with the TOS method lies in the sulphate extraction procedure. Hot distilled water and HCl are commonly used as extractants; however, hot distilled water fails to remove organic sulphur, thereby overestimating the sulphide content of the soil. Leco carbon analyses verify that a substantial proportion of organic matter exists within the samples. The HCl extraction process, which uses ion chromatography to analyse for sulphate, produces highly inaccurate results due to interference of the sulphate peak by the chloride peak during analysis. An alternative method involving HCl extraction and XRF analysis of the soil residue is currently being undertaken. The use of KCl to extract sulphate generally produces values similar to the hot distilled water method. The sulphidic content measured by TSA is strongly correlated with, but slightly higher than, that determined by the chromium-reducible sulphur method. This is attributed to the use of hydrogen peroxide in the TSA method, which oxidises organic matter to organic acids in addition to oxidising sulphides. These preliminary findings indicate that the chromium-reducible sulphur method is the most suitable

  2. A comparison of five methods of measuring mammographic density: a case-control study.

    Science.gov (United States)

    Astley, Susan M; Harkness, Elaine F; Sergeant, Jamie C; Warwick, Jane; Stavrinos, Paula; Warren, Ruth; Wilson, Mary; Beetles, Ursula; Gadde, Soujanya; Lim, Yit; Jain, Anil; Bundred, Sara; Barr, Nicola; Reece, Valerie; Brentnall, Adam R; Cuzick, Jack; Howell, Tony; Evans, D Gareth

    2018-02-05

    High mammographic density is associated with both risk of cancers being missed at mammography, and increased risk of developing breast cancer. Stratification of breast cancer prevention and screening requires mammographic density measures predictive of cancer. This study compares five mammographic density measures to determine the association with subsequent diagnosis of breast cancer and the presence of breast cancer at screening. Women participating in the "Predicting Risk Of Cancer At Screening" (PROCAS) study, a study of cancer risk, completed questionnaires to provide personal information to enable computation of the Tyrer-Cuzick risk score. Mammographic density was assessed by visual analogue scale (VAS), thresholding (Cumulus) and fully-automated methods (Densitas, Quantra, Volpara) in contralateral breasts of 366 women with unilateral breast cancer (cases) detected at screening on entry to the study (Cumulus 311/366) and in 338 women with cancer detected subsequently. Three controls per case were matched using age, body mass index category, hormone replacement therapy use and menopausal status. Odds ratios (OR) between the highest and lowest quintile, based on the density distribution in controls, for each density measure were estimated by conditional logistic regression, adjusting for classic risk factors. The strongest predictor of screen-detected cancer at study entry was VAS, OR 4.37 (95% CI 2.72-7.03) in the highest vs lowest quintile of percent density after adjustment for classical risk factors. Volpara, Densitas and Cumulus gave ORs for the highest vs lowest quintile of 2.42 (95% CI 1.56-3.78), 2.17 (95% CI 1.41-3.33) and 2.12 (95% CI 1.30-3.45), respectively. Quantra was not significantly associated with breast cancer (OR 1.02, 95% CI 0.67-1.54). Similar results were found for subsequent cancers, with ORs of 4.48 (95% CI 2.79-7.18), 2.87 (95% CI 1.77-4.64) and 2.34 (95% CI 1.50-3.68) in highest vs lowest quintiles of VAS, Volpara and Densitas
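
The quintile-based odds ratios described above can be illustrated with a toy calculation. The data, cut-points and unadjusted 2x2 odds ratio below are hypothetical; the study itself used matched controls and conditional logistic regression with adjustment for classic risk factors.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical percent-density scores: cases drawn from a shifted
# distribution so that density is associated with case status.
controls = rng.normal(30, 10, 1500)
cases = rng.normal(38, 10, 500)

# Quintile cut-points are defined on the control distribution,
# matching the study's design.
cuts = np.quantile(controls, [0.2, 0.4, 0.6, 0.8])
q_cases = np.digitize(cases, cuts)        # 0 = lowest, 4 = highest quintile
q_controls = np.digitize(controls, cuts)

# Unadjusted odds ratio, highest vs lowest quintile. (The paper's matched
# analysis uses conditional logistic regression with risk-factor
# adjustment; this 2x2 version only shows the mechanics.)
a = (q_cases == 4).sum()      # cases in top quintile
b = (q_controls == 4).sum()   # controls in top quintile
c = (q_cases == 0).sum()      # cases in bottom quintile
d = (q_controls == 0).sum()   # controls in bottom quintile
or_high_vs_low = (a / c) / (b / d)
```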

  3. Comparison of methods for the quantification of the different carbon fractions in atmospheric aerosol samples

    Science.gov (United States)

    Nunes, Teresa; Mirante, Fátima; Almeida, Elza; Pio, Casimiro

    2010-05-01

    To evaluate the possibility of continuing to use the historical data set for trend analysis, we performed an inter-comparison between our method and an adaptation of the EUSAAR-2 protocol, taking into account that this latter protocol will possibly be recommended for analysing carbonaceous aerosols at European sites. In this inter-comparison we tested different types of samples (PM2.5, PM2.5-10, PM10) with a large spectrum of carbon loadings, with and without pre-treatment acidification. For a reduced number of samples, five replicates of each were analysed by each method for statistical purposes. The inter-comparison study revealed that when the sample analyses were performed in similar room conditions, the two thermo-optical methods give similar results for TC, OC and EC, without significant differences at a 95% confidence level. The correlation between the methods, DAO and EUSAAR-2, is smaller for EC than for TC and OC, although showing a correlation coefficient over 0.95, with a slope close to one. For samples analysed in different periods, room temperature seems to have a significant effect on OC quantification. Sample pre-treatment with HCl fumigation tends to decrease TC quantification, mainly due to the more volatile organic fraction being released during the first heating step. For a set of 20 domestic biomass burning samples analysed by the DAO method we observed an average decrease in TC quantification of 3.7% relative to non-acidified samples, even though this decrease is accompanied by an average increase in the less volatile organic fraction. The indirect measurement of carbonate carbon, usually a minor carbon component of the carbonaceous aerosol, based on the difference between TC measured by TOM on acidified and non-acidified samples, is not a robust measurement, considering the biases affecting its quantification. The present study shows that the two thermo-optical temperature programs used for OC and EC quantification give similar results, and if in the

  4. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim

    2010-09-17

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.
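
The approximation spaces compared above are defined by index sets of multivariate polynomial degrees. A small sketch of three of them (TP, TD, HC), using the standard definitions rather than anything specific to this paper, shows how TD and HC prune the tensor-product set:

```python
import math
from itertools import product

def tp_indices(w, d):
    """Tensor Product space: max_i p_i <= w."""
    return {p for p in product(range(w + 1), repeat=d)}

def td_indices(w, d):
    """Total Degree space: sum_i p_i <= w."""
    return {p for p in product(range(w + 1), repeat=d) if sum(p) <= w}

def hc_indices(w, d):
    """Hyperbolic Cross space: prod_i (p_i + 1) <= w + 1."""
    return {p for p in product(range(w + 1), repeat=d)
            if math.prod(q + 1 for q in p) <= w + 1}

# In d = 2 dimensions at level w = 3 the spaces nest, HC ⊂ TD ⊂ TP,
# which is why TD/HC bases reach a given accuracy with fewer terms.
tp, td, hc = tp_indices(3, 2), td_indices(3, 2), hc_indices(3, 2)
```

The cardinality gap grows quickly with the dimension d, which is the motivation for comparing SG and SC on these reduced spaces.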

  5. Study the Three Extraction Methods for HBV DNA to Use in PCR

    Directory of Open Access Journals (Sweden)

    N. Sheikh

    2004-07-01

    Full Text Available Diagnosis of Hepatitis B is important because of its high prevalence. Recently, the PCR method has found greater interest among the different diagnostic methods. Several reports emphasise some false negative results in laboratories using PCR. The aim of this study was to compare three different procedures for HBV DNA extraction. A total of 30 serum samples were received from Shariati hospital. Sera were taken from patients with chronic hepatitis who were HBs antigen positive and HBe antigen negative. The sensitivity of the guanidinium hydrochloride method for extracting HBV DNA from serum was evaluated and compared with the phenol-chloroform and boiling methods. A diagnostic PCR kit obtained from Cynagene contained Taq polymerase, reaction mixture, dNTPs and reaction buffer. A 353 bp product was amplified using the amplification program provided in the PCR protocol. The comparison of results indicated that the procedure was successful for amplification of the designed products from Hepatitis B in sera. The numbers of positive results were 16, 19 and 23, and the numbers of negative results were 14, 11 and 7, for the boiling, phenol-chloroform and guanidinium hydrochloride extraction methods, respectively. PCR is the fastest and most accurate method to identify Hepatitis B, and guanidinium hydrochloride was the most successful of the extraction procedures studied in this survey.

  6. On the comparison of numerical methods for the integration of kinetic equations in atmospheric chemistry and transport models

    Science.gov (United States)

    Saylor, Rick D.; Ford, Gregory D.

    The integration of systems of ordinary differential equations (ODEs) that arise in atmospheric photochemistry is of significant concern to tropospheric and stratospheric chemistry modelers. As a consequence of the stiff nature of these ODE systems, their solution requires a large fraction of the total computational effort in three-dimensional chemical model simulations. Several integration techniques have been proposed and utilized over the years in an attempt to provide computationally efficient, yet accurate, solutions to chemical kinetics ODEs. In this work, we present a comparison of some of these techniques and argue that valid comparisons of ODE solvers must take into account the trade-off between solution accuracy and computational efficiency. Misleading comparison results can be obtained by neglecting the fact that any ODE solution method can be made faster or slower by manipulation of the appropriate error tolerances or time steps. Comparisons among ODE solution techniques should therefore attempt to identify which technique can provide the most accurate solution with the least computational effort over the entire range of behavior of each technique. We present here a procedure by which ODE solver comparisons can achieve this goal. Using this methodology, we compare a variety of integration techniques, including methods proposed by Hesstvedt et al. (1978, Int. J. Chem. Kinet. 10, 971-994), Gong and Cho (1993, Atmospheric Environment 27A, 2147-2160), Young and Boris (1977, J. Phys. Chem. 81, 2424-2427) and Hindmarsh (1983, In Scientific Computing (edited by Stepleman R. S. et al.), pp. 55-64. North-Holland, Amsterdam). We find that Gear-type solvers such as the Livermore Solver for Ordinary Differential Equations (LSODE) and the sparse-matrix version of LSODE (LSODES) provide the most accurate solution of our test problems with the least computational effort.
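
The accuracy-vs-effort trade-off for stiff systems can be seen in miniature by contrasting an explicit and an implicit one-step scheme on a generic stiff test equation. This is a hypothetical stand-in for a photochemical mechanism; the solvers compared in the paper, such as LSODE, use higher-order Gear formulas rather than backward Euler.

```python
import math

# Stiff linear test ODE: y' = -lam*(y - sin t) + cos t, y(0) = 0,
# whose exact solution is y(t) = sin t. Large lam makes it stiff.
lam = 1000.0

def f(t, y):
    return -lam * (y - math.sin(t)) + math.cos(t)

def forward_euler(h, t_end):
    t, y = 0.0, 0.0
    while t < t_end - 1e-12:
        y += h * f(t, y)                 # explicit step: unstable if h*lam > 2
        t += h
    return y

def backward_euler(h, t_end):
    t, y = 0.0, 0.0
    while t < t_end - 1e-12:
        t += h
        # Implicit step, solvable in closed form for this linear f:
        y = (y + h * (lam * math.sin(t) + math.cos(t))) / (1.0 + h * lam)
    return y

h = 0.01                                  # far above the explicit limit 2/lam
err_implicit = abs(backward_euler(h, 1.0) - math.sin(1.0))  # stays small
err_explicit = abs(forward_euler(h, 1.0) - math.sin(1.0))   # blows up
```

An explicit solver can only recover accuracy by shrinking h below 2/lam, i.e. by spending far more function evaluations, which is exactly the accuracy-per-work axis on which the paper ranks the Gear-type solvers first.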

  7. Correlation between average tissue depth data and quantitative accuracy of forensic craniofacial reconstructions measured by geometric surface comparison method.

    Science.gov (United States)

    Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi

    2015-05-01

    Accuracy is the most important factor supporting the reliability of a forensic facial reconstruction (FFR) when compared to the corresponding actual face. A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, attempts have been made to measure the degree of resemblance between a computer-generated FFR and the actual face by geometric surface comparison. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head CT scan of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study which applied the same methodology as this study except for the average facial soft tissue depth dataset. The three FFRs of this study, which applied the updated dataset, demonstrated smaller deviation errors between the facial surfaces of the FFR and corresponding subject than those from the previous study. The results suggest that appropriate average tissue depth data are important for increasing the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.

  8. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    ... integrated approach. Due to the high number of variables in data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different data sets of metabolomics data need to be related. Variable selection (or removal of irrelevant variables) ... Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior to ... The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when ...

  9. The use of EURACHEM guide for comparison of two 210Pb determination methods in solid environmental samples

    International Nuclear Information System (INIS)

    Al-Masri, M. S.; Hassan, M.; Amin, Y.

    2008-07-01

    Two techniques for the determination of 210Pb in solid environmental samples have been validated and compared according to the Eurachem guide on method validation. The first technique depends on determination of 210Po, which is in equilibrium with 210Pb, by plating it onto a rotating silver disc; alpha counting of 210Po is then performed using an alpha spectrometer. In the second, according to its decay scheme, 210Pb is measured directly by gamma spectrometry using the 46.5 keV line. Detection limits, reproducibility and recovery coefficient were the main validation parameters. In addition, measurement uncertainties were estimated and compared for the two techniques. The comparison results showed that the activity of 210Pb in the environmental samples determines which technique is appropriate. It was found that the Eurachem guide and comparison of statistical quality validation parameters can be a good tool for selection of the appropriate method for the application. (Authors)
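
As a rough illustration of the validation parameters named above (recovery, repeatability, detection limit), the sketch below uses invented replicate data and a Currie-style counting detection limit; it is not the Eurachem guide's full procedure.

```python
import numpy as np

def validation_summary(replicates, reference_value, blank_counts):
    """Toy figures of merit for one measurement technique.

    Illustrative formulas only (invented data, not the Eurachem
    guide's full procedure): recovery against a reference value,
    repeatability as relative standard deviation, and a Currie-style
    detection limit from blank counting statistics.
    """
    m = np.asarray(replicates, dtype=float)
    recovery = m.mean() / reference_value        # recovery coefficient
    rsd = m.std(ddof=1) / m.mean()               # repeatability (relative SD)
    ld = 2.71 + 4.65 * np.sqrt(blank_counts)     # detection limit, counts
    return recovery, rsd, ld

# Hypothetical replicate activities (Bq/kg) against a 50 Bq/kg reference.
rec, rsd, ld = validation_summary([49.2, 50.1, 48.8, 50.5], 50.0, 100)
```

Computing the same three figures for both techniques side by side is the kind of comparison the abstract describes.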

  10. COMPARISON OF METHODS FOR ALKALINE PHOSPHATASE AND PEROXIDASE DETECTION IN MILK

    Directory of Open Access Journals (Sweden)

    Felipe Nael Seixas

    2014-02-01

    Full Text Available This study evaluated the performance of strips for colorimetric detection of alkaline phosphatase and peroxidase in milk, comparing them with a reagent kit for alkaline phosphatase and the official methodology for peroxidase. The samples were analyzed at the Laboratory of Inspection of Products of Animal Origin, State University of Londrina. For the alkaline phosphatase comparison tests, four treatments were prepared by adding different percentages of raw milk (1%, 2%, 5% and 10%) to pasteurized milk, plus two control treatments. Thirty-eight samples were analyzed in triplicate for each treatment. To compare the performance of the peroxidase tests, 80 pasteurized milk samples were evaluated simultaneously by the official methodology and by colorimetric strips. The alkaline phosphatase results differed for the treatments with 1% and 2% raw milk, for which all the strips changed color while the reagent kit indicated the presence of phosphatase in just 2.63% and 5.26% of the cases, respectively. The colorimetric strips for alkaline phosphatase are thus more sensitive for the identification of small quantities than the reagent kit. The tests for peroxidase showed no difference in performance. The strips for the detection of peroxidase and alkaline phosphatase were effective and can replace traditional methods.

  11. Comparison of Pre-Analytical FFPE Sample Preparation Methods and Their Impact on Massively Parallel Sequencing in Routine Diagnostics

    Science.gov (United States)

    Heydt, Carina; Fassunke, Jana; Künstlinger, Helen; Ihle, Michaela Angelika; König, Katharina; Heukamp, Lukas Carl; Schildhaus, Hans-Ulrich; Odenthal, Margarete; Büttner, Reinhard; Merkelbach-Bruse, Sabine

    2014-01-01

    Over the last years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially in laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3–24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can be used for

  12. Comparison of pre-analytical FFPE sample preparation methods and their impact on massively parallel sequencing in routine diagnostics.

    Directory of Open Access Journals (Sweden)

    Carina Heydt

    Full Text Available Over the last years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially in laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3-24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. 
These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can

  13. Modified Right Heart Contrast Echocardiography Versus Traditional Method in Diagnosis of Right-to-Left Shunt: A Comparative Study

    OpenAIRE

    Wang, Yi; Zeng, Jie; Yin, Lixue; Zhang, Mei; Hou, Dailun

    2016-01-01

    BACKGROUND: The purpose of this study was to evaluate the reliability, effectiveness, and safety of modified right heart contrast transthoracic echocardiography (cTTE) in comparison with the traditional method. MATERIAL AND METHODS: We performed a modified right heart cTTE using saline mixed with a small sample of patient's own blood. Samples were agitated with varying intensity. This study protocol involved microscopic analysis and patient evaluation. 1. Microscopic analysis: After two contr...

  14. Manual versus automatic bladder wall thickness measurements: a method comparison study

    NARCIS (Netherlands)

    Oelke, M.; Mamoulakis, C.; Ubbink, D.T.; de la Rosette, J.J.; Wijkstra, H.

    2009-01-01

    Purpose To compare repeatability and agreement of conventional ultrasound bladder wall thickness (BWT) measurements with automatically obtained BWT measurements by the BVM 6500 device. Methods Adult patients with lower urinary tract symptoms, urinary incontinence, or postvoid residual urine were

  15. Formation of oiliness and sebum output--comparison of a lipid-absorbant and occlusive-tape method with photometry.

    Science.gov (United States)

    Serup, J

    1991-07-01

    Sebum output and the development of oiliness formation were studied in 24 acne-prone volunteers. Sebutape and photometric measurement by the Sebumeter were compared. Tapes were taken 1, 2 and 3 h after degreasing. Tapes were scored, and light transmission was measured by a special densitometric device. Sebumeter recordings were performed after 3 h. Densitometric evaluation of tapes was accurate with a high correlation to scoring. Sebutape and Sebumeter assessments correlated. However, in some individuals the tape method overestimated the sebum output. Right-left comparison indicated that the tape method was less reproducible, particularly after longer sampling periods. It is suggested that Sebutape, due to water occlusion and temperature insulation during the sampling period, interferes with sebum droplet formation and spreading. However, such systematic interference may be advantageous since sweating and heat are important clinical prerequisites in the formation of oiliness. Thus, it is suggested that the Sebutape is a specialized method for the determination of 'oiliness', the very last phase of sebum output, in which sebum droplets spread over the skin surface. The tape is not automatically comparable to other methods for determining sebum output from the follicular reservoir.

  16. A comparison of moving object detection methods for real-time moving object detection

    Science.gov (United States)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one which can work globally in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop, for real time object detection. There are several moving object detection methods noted in the literature, but few of them are suitable for real time moving object detection. Most of the methods which provide for real time movement are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods, namely the background subtraction technique, Gaussian mixture model, wavelet-based and optical flow-based methods. The work is based on evaluation of these four moving object detection methods using two (2) different sets of cameras and two (2) different scenes. The moving object detection methods have been implemented using MATLAB and results are compared based on completeness of detected objects, noise, light change sensitivity, processing time etc. After comparison, it is observed that the optical flow-based method took the least processing time and successfully detected the boundaries of moving objects, which also implies that it can be implemented for real-time moving object detection.
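
Of the four method families compared, background subtraction is the simplest to sketch. The running-average scheme below is a generic illustration with invented frame data; the original evaluation was implemented in MATLAB.

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=25):
    """Running-average background subtraction on grayscale frames.

    A generic sketch of one of the four compared method families;
    the frames and parameter values here are invented.
    """
    bg = None
    masks = []
    for f in frames:
        f = f.astype(np.float32)
        if bg is None:
            bg = f.copy()                     # bootstrap background model
        mask = np.abs(f - bg) > thresh        # large deviation = motion
        bg = (1 - alpha) * bg + alpha * f     # slowly adapt the background
        masks.append(mask)
    return masks

# Static 8x8 scene in which a bright square appears in the third frame.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
frames[2][2:5, 2:5] = 200
masks = detect_moving(frames)
```

The learning rate `alpha` and threshold `thresh` control the trade-off between noise and light-change sensitivity that the comparison measures.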

  17. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    Science.gov (United States)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical tests, explosive detection, food additive and environmental pollutant analysis. However, fluorescence interference poses a major problem for applications of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence suppression methods. In this paper, we compared the performance of baseline correction and SERDS methods, both experimentally and in simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. Using the SERDS method, the Raman signals, even when very weak compared to the fluorescence intensity and noise level, can be clearly extracted, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. This shows that baseline correction is more suitable for large bench-top Raman systems with better quality or signal-to-noise ratio, while the SERDS method is more suitable for noisy devices, especially portable Raman spectrometers.
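
The principle behind SERDS can be shown with a tiny simulation: a small excitation shift moves the narrow Raman line but, to first order, not the broad fluorescence, so subtracting the two spectra cancels the background. The spectra below are synthetic, with assumed peak positions and widths.

```python
import numpy as np

# Synthetic spectra: a weak, narrow Raman line on top of a strong,
# broad fluorescence background (all positions/widths are assumptions).
wn = np.linspace(0, 2000, 2000)            # Raman shift axis, cm^-1

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

fluor = 50.0 * gauss(wn, 1200, 900)        # broad fluorescence, 50x stronger

def raman(shift):
    return gauss(wn, 1000 + shift, 8)      # narrow line; moves with excitation

delta = 10.0                               # excitation shift, cm^-1
s1 = fluor + raman(0.0)                    # spectrum at excitation 1
s2 = fluor + raman(delta)                  # excitation 2: line shifts,
                                           # fluorescence (to 1st order) stays
serds = s1 - s2                            # background cancels in the difference
```

The difference spectrum is a derivative-like Raman signature, free of the fluorescence term (exactly here, approximately in practice), from which the Raman spectrum is reconstructed.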

  18. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR modeling representing a parametric approach. The SL technique was comprised of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.

  19. Comparison between three different LCIA methods for aquatic ecotoxicity and a product Environmental Risk Assessment – Insights from a Detergent Case Study within OMNIITOX

    DEFF Research Database (Denmark)

    Pant, Rana; Van Hoof, Geert; Feijtel, Tom

    2004-01-01

    ...) with results from an Environmental Risk Assessment (ERA). Material and Methods. The LCIA has been conducted with EDIP97 (chronic aquatic ecotoxicity) [1], USES-LCA (freshwater and marine water aquatic ecotoxicity, sometimes referred to as CML2001) [2, 3] and IMPACT 2002 (covering freshwater aquatic ecotoxicity ...). All methods used the same set of physico-chemical and toxicological effect data to enable a better comparison of the methodological differences. For the same reason, the system boundaries were kept the same in all cases, focusing on emissions into water at the disposal stage. Results and Discussion. Significant differences ... ecotoxicity is not satisfactory, unless explicit reasons for the differences are identifiable. This can hamper practical decision support, as LCA practitioners usually will not be in a position to choose the 'right' LCIA method for their specific case. This puts a challenge to the entire OMNIITOX project ...

  20. A new ART iterative method and a comparison of performance among various ART methods

    International Nuclear Information System (INIS)

    Tan, Yufeng; Sato, Shunsuke

    1993-01-01

    Many algebraic reconstruction technique (ART) image reconstruction algorithms, for instance the simultaneous iterative reconstruction technique (SIRT), the relaxation method and multiplicative ART (MART), have been proposed and their convergence properties have been studied. SIRT and the underrelaxed relaxation method converge to the least-squares solution, but their convergence speeds are very slow. The Kaczmarz method converges very quickly, but the reconstructed images contain a lot of noise. Comparative studies between these algorithms have been done by Gilbert and others, but are not adequate. In this paper, we (1) propose a new method, a modified Kaczmarz method, and prove its convergence property, and (2) study the performance of 7 algorithms, including the one proposed here, by computer simulation for 3 kinds of typical phantoms. The method proposed here does not give the least-squares solution, but the root mean square errors of its reconstructed images decrease very quickly after a few iterations. The result shows that the method proposed here gives a better reconstructed image. (author)
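
The classic Kaczmarz ART update, the starting point for the modified method above, projects the current estimate onto one ray-equation hyperplane at a time. Below is a minimal sketch on an invented 3x3 consistent system; `relax < 1` gives the underrelaxed variant mentioned in the abstract.

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, relax=1.0):
    """Classic Kaczmarz ART: sequentially project the estimate onto the
    hyperplane of each ray equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # Move x by the (relaxed) residual of equation i along a_i.
            x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Tiny invented consistent system standing in for projection data.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x_rec = kaczmarz(A, b)
```

On consistent data the iterates converge to the exact solution; on noisy projection data they cycle near it, which is the noise behaviour the abstract attributes to the Kaczmarz method.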

  1. A study on asymptomatic bacteriuria in pregnancy: prevalence, etiology and comparison of screening methods

    OpenAIRE

    Kheya Mukherjee; Saroj Golia; Vasudha CL; Babita; Debojyoti Bhattacharjee; Goutam Chakroborti

    2014-01-01

    Background: Asymptomatic bacteriuria is common in women, with a prevalence of 4-7% in pregnancy. The traditional reference test for bacteriuria is quantitative culture of urine, which is relatively expensive, time consuming and laborious. The aim of this study was to determine the prevalence of asymptomatic bacteriuria in pregnancy, to identify the pathogens and their antibiotic susceptibility patterns, and to devise a single or combined rapid screening method as an acceptable alternative to urine culture. ...

  2. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    DEFF Research Database (Denmark)

    Bohlin, Jon; Snipen, Lars; Cloeckaert, Axel

    2010-01-01

    BACKGROUND: Classification of bacteria within the genus Brucella has been difficult due in part to considerable genomic homogeneity between the different species and biovars, in spite of clear differences in phenotypes. Therefore, many different methods have been used to assess Brucella taxonomy....... In the current work, we examine 32 sequenced genomes from genus Brucella representing the six classical species, as well as more recently described species, using bioinformatical methods. Comparisons were made at the level of genomic DNA using oligonucleotide based methods (Markov chain based genomic signatures...... between the oligonucleotide based methods used. Whilst the Markov chain based genomic signatures grouped the different species in genus Brucella according to host preference, the codon and amino acid frequencies based methods reflected small differences between the Brucella species. Only minor differences...

  3. Comparison of point-of-care methods for preparation of platelet concentrate (platelet-rich plasma).

    Science.gov (United States)

    Weibrich, Gernot; Kleis, Wilfried K G; Streckbein, Philipp; Moergel, Maximilian; Hitzler, Walter E; Hafner, Gerd

    2012-01-01

    This study analyzed the concentrations of platelets and growth factors in platelet-rich plasma (PRP), which are likely to depend on the method used for its production. The cellular composition and growth factor content of platelet concentrates (platelet-rich plasma) produced by six different procedures were quantitatively analyzed and compared. Platelet and leukocyte counts were determined on an automatic cell counter, and analysis of growth factors was performed using enzyme-linked immunosorbent assay. The principal differences between the analyzed PRP production methods (the blood bank method, using an intermittent-flow centrifuge system for platelet apheresis, and the five point-of-care methods) and the resulting platelet concentrates were evaluated with regard to the resulting platelet, leukocyte, and growth factor levels. The platelet counts in both whole blood and PRP were generally higher in women than in men; no differences were observed with regard to age. Statistical analysis of platelet-derived growth factor AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) showed no differences with regard to age or gender. Platelet counts and TGF-β1 concentration correlated closely, as did platelet counts and PDGF-AB levels. There were only rare correlations between leukocyte counts and PDGF-AB levels, but comparison of leukocyte counts and PDGF-AB levels demonstrated certain parallel tendencies. TGF-β1 levels derive in substantial part from platelets, and the findings emphasize the role of leukocytes, in addition to that of platelets, as a source of growth factors in PRP. All methods of producing PRP showed high variability in platelet counts and growth factor levels. The highest growth factor levels were found in the PRP prepared using the Platelet Concentrate Collection System manufactured by Biomet 3i.

  4. A COMPARISON OF METHODS FOR DETERMINING THE MOLECULAR CONTENT OF MODEL GALAXIES

    International Nuclear Information System (INIS)

    Krumholz, Mark R.; Gnedin, Nickolay Y.

    2011-01-01

    Recent observations indicate that star formation occurs only in the molecular phase of a galaxy's interstellar medium. A realistic treatment of star formation in simulations and analytic models of galaxies therefore requires that one determine where the transition from atomic to molecular gas occurs. In this paper, we compare two methods for making this determination in cosmological simulations where the internal structures of molecular clouds are unresolved: a complex time-dependent chemistry network coupled to a radiative transfer calculation of the dissociating ultraviolet (UV) radiation field, and a simple time-independent analytic approximation. We show that these two methods produce excellent agreement at all metallicities ≳ 10^-2 of the Milky Way value across a very wide range of UV fields. At lower metallicities the agreement is worse, likely because time-dependent effects become important; however, there are no observational calibrations of molecular gas content at such low metallicities, so it is unclear if either method is accurate. The comparison suggests that, in many but not all applications, the analytic approximation provides a viable and nearly cost-free alternative to full time-dependent chemistry and radiative transfer.

  5. COMPARATIVE STUDIES OF THREE METHODS FOR MEASURING PEPSIN ACTIVITY

    Science.gov (United States)

    Loken, Merle K.; Terrill, Kathleen D.; Marvin, James F.; Mosser, Donn G.

    1958-01-01

    Comparison has been made of a simple method, originated by Absolon and modified in our laboratories, for assay of proteolytic activity using RISA (radioactive iodinated serum albumin, Abbott Laboratories) with the commonly used photometric methods of Anson and Kunitz. In this method, pepsin was incubated with an albumin substrate containing RISA, followed by precipitation of the undigested substrate with trichloroacetic acid and measurement of radioactive digestion products in the supernatant fluid. The I131–albumin bond was shown in the present studies to be altered only by proteolytic activity, and not by the incubation procedures at various values of pH. Any free iodine present originally in the RISA was removed by a single passage through a resin column (Amberlite IRA-400-Cl). Pepsin was shown to be most stable in solution at a pH of 5.5. Activity of pepsin was shown to be maximal when it was incubated with albumin at a pH of 2.5. Pepsin activity was shown to be altered in the presence of various electrolytes. Pepsin activity measured by the RISA and Anson methods as a function of concentration or of time of incubation indicated that these two methods are in good agreement and are equally sensitive. Consistently smaller standard errors were obtained by the RISA method of pepsin assay than with either of the other methods. PMID:13587910

  6. A Comparison of Mindray BC-6800, Sysmex XN-2000, and Beckman Coulter LH750 Automated Hematology Analyzers: A Pediatric Study.

    Science.gov (United States)

    Ciepiela, Olga; Kotuła, Iwona; Kierat, Szymon; Sieczkowska, Sandra; Podsiadłowska, Anna; Jenczelewska, Anna; Księżarczyk, Karolina; Demkow, Urszula

    2016-11-01

    Modern automated laboratory hematology analyzers allow the measurement of over 30 different hematological parameters useful in the diagnostic and clinical interpretation of patient symptoms. They use different methods to measure the same parameters; thus, a comparison of complete blood counts made by the Mindray BC-6800, Sysmex XN-2000 and Beckman Coulter LH750 was performed. A comparison of results obtained by automated analysis of 807 anticoagulated blood samples from children and 125 manual microscopic differentiations was performed. This comparative study included white blood cell count, red blood cell count and erythrocyte indices, as well as platelet count. The present study showed a poor level of agreement in white blood cell enumeration and differentiation among the three automated hematology analyzers under comparison. A very good agreement was found when comparing manual blood smear differentiation with automated granulocyte, monocyte, and lymphocyte differentiation. Red blood cell evaluation showed better agreement between the studied analyzers than white blood cell evaluation. To conclude, the studied instruments did not ensure satisfactory interchangeability and do not allow substitution of one analyzer by another. © 2016 Wiley Periodicals, Inc.

  7. Comparison between Two Radiological Methods for Assessment of Tooth Root Resorption: An In Vitro Study

    Directory of Open Access Journals (Sweden)

    Sabina Saccomanno

    2018-01-01

    Purpose. This study aims to verify the validity of the radiographic image and the most effective radiological techniques for the diagnosis of root resorption, in order to prevent, treat, and reduce it, and to verify whether radiological images can be helpful in medical and legal contexts. Methods. 19 dental elements without root resorption, extracted from several patients, were examined: endooral and panoramic radiographs were performed, with both traditional and digital methods. The root of each tooth was then dipped in 3-4 mm of 10% nitric acid for 24 hours to simulate root resorption and afterwards submitted again to radiological examination and measurement using the same criteria and methods. Results. For teeth with root resorption, the real measurements and the values obtained with endooral techniques and digital sensors are almost the same, while the values obtained from panoramic radiographs are more distorted than the real ones. Conclusions. Panoramic radiographs are not useful for the diagnosis of root resorption. The endooral examination is, in medical and legal fields, the most valid and objective instrument for detecting root resorption. Although the literature suggests that CBCT is a reliable tool for detecting root resorption defects, the increased radiation dosage and expense and the limited availability of CBCT in most clinical settings accentuate the relevance of the outcome of this study.

  8. Comparison of different methods for determining the size of a focal spot of microfocus X-ray tubes

    International Nuclear Information System (INIS)

    Salamon, M.; Hanke, R.; Krueger, P.; Sukowski, F.; Uhlmann, N.; Voland, V.

    2008-01-01

    The standard EN 12543-5 describes a method for determining the focal spot size of microfocus X-ray tubes down to a minimum spot size of 5 μm. The wide application of X-ray tubes with even smaller focal spot sizes in computed tomography and radioscopy requires the evaluation of existing methods for focal spot sizes below 5 μm. In addition, new methods and conditions for determining submicron focal spot sizes have to be developed. For the evaluation and extension of the present methods to smaller focal spot sizes, different procedures were analyzed and applied in comparison with the existing EN 12543-5, and the results are presented

  9. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived so as to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method, because the size of the complete system coefficient matrix is 4 × 4 and, during the assembly of this matrix, no matrix inversion steps are required. The validity of this method is tested by comparing the results of the current method with the literature. The validity of the exact stepped analysis is then checked using experimental and three-dimensional finite element, FE(3D), methods. The experimental results for stepped beams with a single step and with two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method FE(3D). The comparison between the NTM and finite element results shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape, and decreases when a beam step location coincides with a straight portion of the mode shape.
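
    The chain-multiplication idea behind transfer matrix methods can be illustrated on a deliberately simpler problem: axial vibration of a stepped rod, whose state vector [u, N] needs only a 2×2 matrix per segment, rather than the paper's 4×4 Timoshenko formulation. All material and geometric values below are invented for the demonstration.

```python
import math

# Minimal transfer-matrix sketch (NOT the paper's NTM): axial vibration
# of a stepped rod. Each segment maps the state [u, N] across its length;
# chaining segments is a 2x2 matrix product. Fixed-free boundary
# conditions reduce to a root of one entry of the overall matrix.

def segment_matrix(omega, L, E, A, rho):
    beta = omega * math.sqrt(rho / E)
    c, s = math.cos(beta * L), math.sin(beta * L)
    return [[c, s / (beta * E * A)],
            [-beta * E * A * s, c]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_fn(omega, segments):
    # Fixed at x=0 (u=0), free at the tip (N=0)  ->  T[1][1](omega) = 0
    T = [[1.0, 0.0], [0.0, 1.0]]
    for (L, E, A, rho) in segments:
        T = matmul(segment_matrix(omega, L, E, A, rho), T)
    return T[1][1]

def first_frequency(segments, lo=1.0, hi=20000.0, tol=1e-9):
    # Scan for the first sign change, then bisect on it.
    f_lo = char_fn(lo, segments)
    w, step = lo, (hi - lo) / 20000
    while w < hi:
        w2 = w + step
        if f_lo * char_fn(w2, segments) < 0:
            a, b = w, w2
            while b - a > tol:
                m = 0.5 * (a + b)
                if char_fn(a, segments) * char_fn(m, segments) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        w, f_lo = w2, char_fn(w2, segments)
    return None

# Two identical segments behave like one uniform rod, so the first
# fixed-free frequency should equal (pi/2) * sqrt(E/rho) / L_total.
segs = [(0.5, 210e9, 1e-4, 7800.0)] * 2
print(first_frequency(segs))
```

    The paper's 4×4 formulation follows the same pattern with a state of deflection, slope, moment and shear; the "normalized" fundamental solutions keep the product well conditioned without inverting any matrices.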

  10. Comparison of five actigraphy scoring methods with bipolar disorder.

    Science.gov (United States)

    Boudebesse, Carole; Leboyer, Marion; Begley, Amy; Wood, Annette; Miewald, Jean; Hall, Martica; Frank, Ellen; Kupfer, David; Germain, Anne

    2013-01-01

    The goal of this study was to compare 5 actigraphy scoring methods in a sample of 18 remitted patients with bipolar disorder. Actigraphy records were processed using five different scoring methods relying on the sleep diary; the event-marker; the software-provided automatic algorithm; the automatic algorithm supplemented by the event-marker; visual inspection (VI) only. The algorithm and the VI methods differed from the other methods for many actigraphy parameters of interest. Particularly, the algorithm method yielded longer sleep duration, and the VI method yielded shorter sleep latency compared to the other methods. The present findings provide guidance for the selection of signal processing method based on sleep parameters of interest, time-cue sources and availability, and related scoring time costs for the study.

  11. Comparisons of Energy Management Methods for a Parallel Plug-In Hybrid Electric Vehicle between the Convex Optimization and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2018-01-01

    This paper proposes a comparison study of energy management methods for a parallel plug-in hybrid electric vehicle (PHEV). Based on a detailed analysis of the vehicle driveline, quadratic convex functions are presented to describe the nonlinear relationship between engine fuel rate and battery charging power at different vehicle speeds and driveline power demands. The engine-on power threshold is estimated by the simulated annealing (SA) algorithm, and the battery power command is obtained by convex optimization with the target of improving fuel economy, compared with the dynamic programming (DP) based method and the charge-depleting/charge-sustaining (CD/CS) method. In addition, the proposed control methods are evaluated at different initial battery state of charge (SOC) values to extend the application. Simulation results validate that the proposed strategy based on convex optimization reduces fuel consumption and markedly lowers the computational burden.
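
    The appeal of a quadratic convex fuel-rate model is that the per-time-step subproblem has a closed-form solution. The sketch below minimizes an invented quadratic f(Pb) = a·Pb² + b·Pb + c over battery-power bounds; the coefficients and limits are illustrative assumptions, not the paper's identified model.

```python
# Illustrative convex subproblem at one operating point: minimize a
# quadratic fuel-rate model over the feasible battery-power interval.
# Coefficients below are invented, not the paper's identified model.

def best_battery_power(a, b, c, p_min, p_max):
    """Minimizer of a convex quadratic (a > 0) on [p_min, p_max]."""
    p_star = -b / (2.0 * a)                  # unconstrained minimizer
    p_star = max(p_min, min(p_max, p_star))  # clip to the feasible box
    return p_star, a * p_star**2 + b * p_star + c

p, f = best_battery_power(a=0.002, b=-0.5, c=80.0, p_min=-30.0, p_max=60.0)
print(p, f)  # p = 60.0 (upper bound active), fuel rate = 57.2
```

    Because each step reduces to this clipped closed form, convex methods avoid the state-space discretization that makes dynamic programming expensive, which matches the computational advantage the abstract reports.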

  12. Study of comparison between neutron activation analysis and the other analytical methods

    International Nuclear Information System (INIS)

    Nagatsuka, Sumiko

    1986-01-01

    The neutron activation analysis (NAA) is compared with other analytical methods based on various data. It is concluded that NAA is most frequently used for the analysis of NBS environmental standard samples. NAA is suitable for the analysis of trace elements contained in environmental samples, since non-destructive quantitative determination can be carried out simultaneously for different elements. NAA and XRF are the only methods which can be used for analyzing oil samples, which also indicates the usefulness of non-destructive techniques. In any standard sample, NAA can achieve a high accuracy for more than 90% of the elements contained. On the other hand, the accuracy of the other analytical methods examined varies depending on the type of sample. Regarding precision, NAA shows, for any standard sample, the smallest proportion of elements determined with a coefficient of variation (C.V.) of less than 10% relative to the total number of elements contained. However, the total number of elements which can be determined by NAA with a C.V. of less than 10% is greater than that by any of the other four methods examined. It should also be noted that there are some elements which can be determined only by NAA. (Nogami, K.)

  13. Comparison of various quantization methods of segmental ventricular wall motion in ischemic heart disease

    International Nuclear Information System (INIS)

    Probst, P.; Moore, R.; Kim, S.W.; Zollikofer, C.; Amplatz, K.

    1981-01-01

    Numerous methods of measuring regional myocardial wall motion are in use, and a critical comparison is needed to assess their strengths, weaknesses, accuracy, and precision. This paper reports the evaluation of five methods using computer-assisted interactive graphics. Fifty cines were selected: 16 from normal subjects and 34 from patients with proven cardiovascular diseases. Tracings were made of the opacified left ventricle in end systole and end diastole and digitized. All fifty cines were analyzed by five methods using computer-implemented graphic techniques. The results included a display of the silhouettes, which were translated and rotated according to the various methods. In addition, the percent contraction for eleven myocardial regions was tabulated and displayed. The sixteen cines from normal subjects were used to derive a range of 'house' normal values for regional contraction patterns, against which the measurements from the 34 abnormal patients were compared. The five methods were evaluated by comparing the results from the computer-aided analysis with the visual assessment of two experienced radiologists. One method was found whose results agreed with the radiologists' visual impression in every case. This computer-aided method was quantitative and reproducible; consequently, it can give information which supplements the visual impression. (orig.) [de]

  14. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as which methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them, or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and the four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills, as well as money and time, are scarce. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use it. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.
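
    A matrix-form characterisation like the one described lends itself to a simple programmatic lookup. The sketch below is a hypothetical miniature of such a selection matrix; the three methods and their attribute values are invented examples, not the paper's actual twenty-eight-method characterisation.

```python
# Hypothetical miniature of a matrix-style method selection tool.
# Entries are invented examples for illustration only.

methods = {
    "discrete-event simulation": {"stage": "operation",
                                  "data_need": "high"},
    "system dynamics":           {"stage": "strategy",
                                  "data_need": "medium"},
    "soft systems methodology":  {"stage": "problem structuring",
                                  "data_need": "low"},
}

def select(stage, max_data="high"):
    """Return methods matching a life-cycle stage within a data budget."""
    order = ["low", "medium", "high"]
    return sorted(name for name, traits in methods.items()
                  if traits["stage"] == stage
                  and order.index(traits["data_need"]) <= order.index(max_data))

print(select("strategy"))  # ['system dynamics']
```

    A real tool would carry more axes (output type, level of insight, time, money, knowledge), but the filtering logic stays the same.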

  15. Bioanalysis works in the IAA AMS facility: Comparison of AMS analytical method with LSC method in human mass balance study

    International Nuclear Information System (INIS)

    Miyaoka, Teiji; Isono, Yoshimi; Setani, Kaoru; Sakai, Kumiko; Yamada, Ichimaro; Sato, Yoshiaki; Gunji, Shinobu; Matsui, Takao

    2007-01-01

    Institute of Accelerator Analysis Ltd. (IAA) is the first Contract Research Organization in Japan providing Accelerator Mass Spectrometry (AMS) analysis services for carbon dating and bioanalysis work. The 3 MV AMS machines are maintained by validated analysis methods using multiple control compounds, and it has been confirmed that these AMS systems have sufficient reliability and sensitivity for each objective. The graphitization of samples for bioanalysis is carried out on our own purification lines, including automatic measurement of the total carbon content of the sample. In this paper, we present the use of AMS analysis in human mass balance and metabolism profiling studies with the IAA 3 MV AMS, comparing results obtained from the same samples with liquid scintillation counting (LSC). Human samples such as plasma, urine and feces were obtained from four healthy volunteers orally administered a 14C-labeled drug, Y-700, a novel xanthine oxidase inhibitor, with a radioactivity of about 3 MBq (85 μCi). For AMS measurement, these samples were diluted 100-10,000-fold with pure water or blank samples. The results indicated that the AMS method had a good correlation with the LSC method (e.g. plasma: r = 0.998, urine: r = 0.997, feces: r = 0.997), and that the drug recovery in the excreta exceeded 92%. The metabolite profiles of plasma, urine and feces obtained with HPLC-AMS corresponded to radio-HPLC results measured at a much higher radioactivity level. These results revealed that AMS analysis at IAA is useful for measuring 14C concentrations in bioanalysis studies at very low radioactivity levels

  16. Instrumental and statistical methods for the comparison of class evidence

    Science.gov (United States)

    Liszewski, Elisa Anne

    Trace evidence is a major field within forensic science. Association of trace evidence samples can be problematic due to sample heterogeneity and a lack of quantitative criteria for comparing spectra or chromatograms. The aim of this study is to evaluate different types of instrumentation for their ability to discriminate among samples of various types of trace evidence. Chemometric analysis, including techniques such as Agglomerative Hierarchical Clustering, Principal Components Analysis, and Discriminant Analysis, was employed to evaluate instrumental data. First, automotive clear coats were analyzed by using microspectrophotometry to collect UV absorption data. In total, 71 samples were analyzed with classification accuracy of 91.61%. An external validation was performed, resulting in a prediction accuracy of 81.11%. Next, fiber dyes were analyzed using UV-Visible microspectrophotometry. While several physical characteristics of cotton fiber can be identified and compared, fiber color is considered to be an excellent source of variation, and thus was examined in this study. Twelve dyes were employed, some being visually indistinguishable. Several different analyses and comparisons were done, including an inter-laboratory comparison and external validations. Lastly, common plastic samples and other polymers were analyzed using pyrolysis-gas chromatography/mass spectrometry, and their pyrolysis products were then analyzed using multivariate statistics. The classification accuracy varied dependent upon the number of classes chosen, but the plastics were grouped based on composition. The polymers were used as an external validation and misclassifications occurred with chlorinated samples all being placed into the category containing PVC.
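
    The agglomerative hierarchical clustering step mentioned above can be sketched in pure Python. This is a generic single-linkage implementation on invented toy "spectra", not the study's actual chemometrics pipeline (which also used PCA and discriminant analysis on real microspectrophotometry data).

```python
# Minimal single-linkage agglomerative clustering on toy two-point
# "spectra". A sketch of the clustering idea only; real chemometric
# work operates on full spectra after preprocessing.

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def single_linkage(spectra, k):
    """Merge the closest pair of clusters until k clusters remain."""
    clusters = [[i] for i in range(len(spectra))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(spectra[a], spectra[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Two visually similar groups of toy absorbance profiles:
spectra = [[1.0, 0.2], [1.1, 0.25], [0.1, 0.9], [0.12, 1.0]]
print(single_linkage(spectra, 2))  # [[0, 1], [2, 3]]
```

    Cutting the resulting hierarchy at a chosen number of clusters is what lets an examiner ask whether two evidence samples fall into the same group.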

  17. Comparison of floods non-stationarity detection methods: an Austrian case study

    Science.gov (United States)

    Salinas, Jose Luis; Viglione, Alberto; Blöschl, Günter

    2016-04-01

    Non-stationarities in flood regimes have a huge impact on any mid- and long-term flood management strategy. In particular, the estimation of design floods is very sensitive to any kind of flood non-stationarity, as design floods are linked to a return period, a concept that can be ill-defined in a non-stationary context. It is therefore crucial, when analyzing existing flood time series, to detect and, where possible, attribute flood non-stationarities to changing hydroclimatic and land-use processes. This work presents the preliminary results of applying different non-stationarity detection methods to annual peak discharge time series from more than 400 gauging stations in Austria. The kinds of non-stationarity analyzed include trends (linear and non-linear), breakpoints, clustering beyond stochastic randomness, and the detection of flood-rich/flood-poor periods. Austria presents a large variety of landscapes, elevations and climates that allow us to interpret the spatial patterns obtained with the non-stationarity detection methods in terms of the dominant flood generation mechanisms.
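
    One common ingredient of trend detection in annual-peak series is the Mann-Kendall S statistic, sketched below. This is a generic illustration (the S statistic only, without the variance and significance step), and not necessarily one of the exact tests applied in this particular study.

```python
# Mann-Kendall S statistic for monotonic trend in a time series:
# count concordant minus discordant pairs. S > 0 suggests an upward
# trend, S < 0 a downward one; significance testing is omitted here.

def mann_kendall_s(series):
    s, n = 0, len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of the pair
    return s

print(mann_kendall_s([10, 12, 11, 15, 18, 17, 21]))  # clearly positive
```

    Because S depends only on the ordering of values, not their magnitudes, it is robust to the skewed distributions typical of flood peaks.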

  18. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate its feasibility and effectiveness, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.

  19. A Method for the Comparison of Item Selection Rules in Computerized Adaptive Testing

    Science.gov (United States)

    Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose

    2010-01-01

    In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…

  20. A comparison of two methods for assessing awareness of antitobacco television advertisements.

    Science.gov (United States)

    Luxenberg, Michael G; Greenseid, Lija O; Depue, Jacob; Mowery, Andrea; Dreher, Marietta; Larsen, Lindsay S; Schillo, Barbara

    2016-05-01

    This study uses an online survey panel to compare two approaches for assessing ad awareness: the first uses a screenshot of a television ad, and the second shows participants a full-length video of the ad. We randomly assigned 1034 Minnesota respondents to view a screenshot or a streaming video from two antitobacco ads. The study used one ad from ClearWay Minnesota's We All Pay the Price campaign and one from the Centers for Disease Control Tips campaign. The key measure used to assess ad awareness was aided ad recall. Multivariate analyses of recall with cessation behaviour and attitudinal beliefs assessed the validity of these approaches. The respondents who saw the video reported significantly higher recall than those who saw the screenshot. Associations of recall with cessation behaviour and attitudinal beliefs were stronger, and in the anticipated direction, using the screenshot method. Over 20% of the respondents assigned to the video group could not see the ad; respondents under 45 years old, those with incomes greater than $35,000, and women were reportedly less able to access the video. The methodology used to assess recall matters: campaigns may exaggerate the successes or failures of their media campaigns depending on the approach they employ and how they compare it to other media campaign evaluations. When incorporating streaming video, researchers should consider accessibility and report possible response bias. Researchers should fully define the measures they use, specify any viewing accessibility issues, and make ad comparisons only when using comparable methods. Published by the BMJ Publishing Group Limited.

  1. Comparison of three methods of calculating strain in the mouse ulna in exogenous loading studies.

    Science.gov (United States)

    Norman, Stephanie C; Wagner, David W; Beaupre, Gary S; Castillo, Alesha B

    2015-01-02

    Axial compression of mouse limbs is commonly used to induce bone formation in a controlled, non-invasive manner. Determination of the peak strains caused by loading is central to interpreting the results. Load-strain calibration is typically performed using uniaxial strain gauges attached to the diaphyseal, periosteal surface in a small number of sacrificed animals. Strain is measured as the limb is loaded over a range of physiological loads known to be anabolic to bone. The load-strain relationship determined in this subgroup is then extrapolated to a larger group of experimental mice. This method of strain calculation requires the challenging process of strain-gauging very small bones, which is subject to variability in the placement of the strain gauge. We previously developed a method to estimate animal-specific periosteal strain during axial ulnar loading using an image-based computational approach that does not require strain gauges. The purpose of this study was to compare the relationship between load-induced bone formation rates and periosteal strain at the ulnar midshaft using three different methods to estimate strain: (A) nominal strain values based solely on load-strain calibration; (B) strains calculated from load-strain calibration but scaled for differences in midshaft cross-sectional geometry among animals; and (C) an alternative image-based computational method for calculating strains based on beam theory and animal-specific bone geometry. Our results show that the alternative method (C) provides comparable correlation between strain and bone formation rates in the mouse ulna relative to the strain gauge-dependent methods (A and B), while avoiding the need to use strain gauges. Published by Elsevier Ltd.
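
    The beam-theory idea behind an approach like method (C), superposing axial and bending strain at the surface of a cross-section, can be sketched as follows. All numerical values are invented for illustration and are not the study's measured geometry or loads.

```python
# Beam-theory surface strain under eccentric axial load: the axial
# term F/(A*E) plus the bending term M*c/(I*E) with M = F*e.
# Geometry and load values are invented for illustration.

def periosteal_strain(F, e, A, I, c, E):
    """F axial load (N), e load offset from the centroid (m),
    A cross-sectional area (m^2), I second moment of area (m^4),
    c centroid-to-surface distance (m), E elastic modulus (Pa).
    Returns the total surface strain (dimensionless)."""
    axial = F / (A * E)
    bending = (F * e) * c / (I * E)
    return axial + bending

eps = periosteal_strain(F=2.0, e=0.2e-3, A=0.5e-6,
                        I=0.02e-12, c=0.3e-3, E=20e9)
print(eps * 1e6)  # surface strain in microstrain
```

    Because A, I and c come straight from the imaged cross-section, an estimate of this kind can be made animal-specific without attaching a gauge.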

  2. Application of synthesis methods to two-dimensional fast reactor transient study

    International Nuclear Information System (INIS)

    Izutsu, Sadayuki; Hirakawa, Naohiro

    1978-01-01

    Space-time synthesis and time synthesis codes were developed and applied to the space-dependent kinetics benchmark problem of a two-dimensional fast reactor model, and it was found that both methods are accurate and economical for fast reactor kinetics studies. A comparison between space-time synthesis and time synthesis was made. In addition, for space-time synthesis, the influence of the number of trial functions on the error and on the computing time, as well as the effect of degeneration of the expansion coefficients, was investigated. The matrix factorization method is applied to the inversion of the matrix equation derived from the synthesis equation, and it is shown that, with this scheme, the space-dependent kinetics problem of a fast reactor can be solved efficiently by space-time synthesis. (auth.)

  3. Isolating DNA from sexual assault cases: a comparison of standard methods with a nuclease-based approach

    Science.gov (United States)

    2012-01-01

    Background Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. Large amounts of the victim’s epithelial cells contaminate the sperm present on swabs, however, and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim’s DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim’s fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim’s fraction, and then digest the residual victim’s DNA with a nuclease. Methods The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and evidence collected from sexual assault cases. Results For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern of Erase providing superior profiles. Conclusions In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods. PMID:23211019

  4. Comparison of the Sensitivity of Three Methods for the Early Diagnosis of Sporotrichosis in Cats.

    Science.gov (United States)

    Silva, J N; Miranda, L H M; Menezes, R C; Gremião, I D F; Oliveira, R V C; Vieira, S M M; Conceição-Silva, F; Ferreiro, L; Pereira, S A

    2018-04-01

    Sporotrichosis is caused by species of fungi within the Sporothrix schenckii complex that infect man and animals. In Rio de Janeiro, Brazil, an epidemic has been observed since 1998, with most of the cases being related to transmission from infected cats. Although the definitive diagnosis of feline sporotrichosis is made by fungal culture, cytopathological and histopathological examinations are used routinely, because the long culture period may delay treatment onset. However, alternative methods are desirable in cases of low fungal burden. Immunohistochemistry (IHC) has been described as a sensitive method for diagnosing human and canine sporotrichosis, but there are no reports of its application to cats. The aim of this study was to analyse the sensitivity of cytopathological examination (Quick Panoptic method), histopathology (Grocott silver stain) and anti-Sporothrix IHC by blinded comparisons, using fungal culture as the reference standard. Samples were collected from 184 cats with sporotrichosis that exhibited skin ulcers. The sensitivities of Grocott silver stain, cytopathological examination and IHC were 91.3%, 87.0% and 88.6%, respectively. Grocott silver stain showed the best performance. IHC showed high sensitivity, as did cytopathological examination and these may be considered as alternative methodologies. When the three methods were combined, the diagnosis was established in 180 (97.8%) out of 184 cases. Taken together, these findings indicate the need to implement these methods as routine tools for the early diagnosis of sporotrichosis in cats, notably when fungal culture is not available. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Comparisons of NDT Methods to Inspect Cork and Cork filled Epoxy Bands

    Science.gov (United States)

    Lingbloom, Mike

    2007-01-01

    Sheet cork and cork filled epoxy provide external insulation for the Reusable Solid Rocket Motor (RSRM) on the Nation's Space Transportation System (STS). Interest in the reliability of the external insulation bonds has increased since the Columbia incident. A non-destructive test (NDT) method that will provide the best inspection for these bonds has been under evaluation. Electronic shearography has been selected as the primary NDT method for inspection of these bond lines in the RSRM production flow. ATK Launch Systems Group has purchased an electronic shearography system that includes a vacuum chamber used for evaluation of test parts and custom vacuum windows for inspection of full-scale motors. Although electronic shearography has been selected as the primary method for inspection of the external bonds, other existing technologies continue to be investigated. The NASA/Marshall Space Flight Center (MSFC) NDT department has inspected several samples with various inspection systems in its laboratory for comparison with electronic shearography. The systems evaluated were X-ray backscatter, terahertz imaging, and microwave imaging. The samples tested have some programmed flaws as well as some flaws that occurred naturally during the sample making process. These samples provide sufficient flaw variation for the evaluation of the different inspection systems. This paper will describe and compare the basic functionality, test method and test results, including dissection, for each inspection technology.

  6. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2014-01-01

    In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables, increases operation...

  7. Integration of new biological and physical retrospective dosimetry methods into EU emergency response plans - joint RENEB and EURADOS inter-laboratory comparisons.

    Science.gov (United States)

    Ainsbury, Elizabeth; Badie, Christophe; Barnard, Stephen; Manning, Grainne; Moquet, Jayne; Abend, Michael; Antunes, Ana Catarina; Barrios, Lleonard; Bassinet, Celine; Beinke, Christina; Bortolin, Emanuela; Bossin, Lily; Bricknell, Clare; Brzoska, Kamil; Buraczewska, Iwona; Castaño, Carlos Huertas; Čemusová, Zina; Christiansson, Maria; Cordero, Santiago Mateos; Cosler, Guillaume; Monaca, Sara Della; Desangles, François; Discher, Michael; Dominguez, Inmaculada; Doucha-Senf, Sven; Eakins, Jon; Fattibene, Paola; Filippi, Silvia; Frenzel, Monika; Georgieva, Dimka; Gregoire, Eric; Guogyte, Kamile; Hadjidekova, Valeria; Hadjiiska, Ljubomira; Hristova, Rositsa; Karakosta, Maria; Kis, Enikő; Kriehuber, Ralf; Lee, Jungil; Lloyd, David; Lumniczky, Katalin; Lyng, Fiona; Macaeva, Ellina; Majewski, Matthaeus; Vanda Martins, S; McKeever, Stephen W S; Meade, Aidan; Medipally, Dinesh; Meschini, Roberta; M'kacher, Radhia; Gil, Octávia Monteiro; Montero, Alegria; Moreno, Mercedes; Noditi, Mihaela; Oestreicher, Ursula; Oskamp, Dominik; Palitti, Fabrizio; Palma, Valentina; Pantelias, Gabriel; Pateux, Jerome; Patrono, Clarice; Pepe, Gaetano; Port, Matthias; Prieto, María Jesús; Quattrini, Maria Cristina; Quintens, Roel; Ricoul, Michelle; Roy, Laurence; Sabatier, Laure; Sebastià, Natividad; Sholom, Sergey; Sommer, Sylwester; Staynova, Albena; Strunz, Sonja; Terzoudi, Georgia; Testa, Antonella; Trompier, Francois; Valente, Marco; Hoey, Olivier Van; Veronese, Ivan; Wojcik, Andrzej; Woda, Clemens

    2017-01-01

    RENEB, 'Realising the European Network of Biodosimetry and Physical Retrospective Dosimetry,' is a network for research and emergency response mutual assistance in biodosimetry within the EU. Within this extremely active network, a number of new dosimetry methods have recently been proposed or developed. There is a requirement to test and/or validate these candidate techniques, and inter-comparison exercises are a well-established method for such validation. The authors present details of inter-comparisons of four such new methods: dicentric chromosome analysis including telomere and centromere staining; the gene expression assay carried out in whole blood; Raman spectroscopy on blood lymphocytes; and detection of radiation-induced thermoluminescent signals in glass screens taken from mobile phones. In general, the results show good agreement between the laboratories and methods within the expected levels of uncertainty, and thus demonstrate considerable potential in each of the candidate techniques. Further work is required before the new methods can be included within the suite of reliable dosimetry methods available to RENEB partners and others in routine and emergency response scenarios.

  8. Classification-based comparison of pre-processing methods for interpretation of mass spectrometry generated clinical datasets

    Directory of Open Access Journals (Sweden)

    Hoefsloot Huub CJ

    2009-05-01

    Abstract Background Mass spectrometry is increasingly being used to discover proteins or protein profiles associated with disease. Experimental design of mass-spectrometry studies has come under close scrutiny and the importance of strict protocols for sample collection is now understood. However, the question of how best to process the large quantities of data generated is still unanswered. Main challenges for the analysis are the choice of proper pre-processing and classification methods. While these two issues have been investigated in isolation, we propose to use the classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Results Two in-house generated clinical SELDI-TOF MS datasets are used in this study as an example of high throughput mass-spectrometry data. We perform a systematic comparison of two commonly used pre-processing methods as implemented in Ciphergen ProteinChip Software and in the Cromwell package. With respect to reproducibility, Ciphergen and Cromwell pre-processing are largely comparable. We find that the overlap between peaks detected by either Ciphergen ProteinChip Software or Cromwell is large. This is especially the case for the more stringent peak detection settings. Moreover, similarity of the estimated intensities between matched peaks is high. We evaluate the pre-processing methods using five different classification methods. Classification is done in a double cross-validation protocol using repeated random sampling to obtain an unbiased estimate of classification accuracy. No pre-processing method significantly outperforms the other for all peak detection settings evaluated. Conclusion We use classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Both pre-processing methods lead to similar classification results on an ovarian cancer and a Gaucher disease dataset. However, the settings for pre
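    The double (nested) cross-validation protocol with repeated random sampling mentioned above can be sketched as follows. The "classifier" here is a trivial nearest-centroid rule and the tuned hyperparameter (number of features kept, standing in for peak detection settings) is a hypothetical stand-in, as is the synthetic data; none of this reproduces the paper's actual pipeline.

    ```python
    # Sketch of double cross-validation: an inner loop tunes a hyperparameter
    # on the outer training set only, the outer loop estimates accuracy on
    # held-out samples, and the whole procedure is repeated with random splits.
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic two-class data: 40 samples per class, 10 "peak" features
    X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(1, 1, (40, 10))])
    y = np.array([0] * 40 + [1] * 40)

    def nearest_centroid_acc(Xtr, ytr, Xte, yte, n_feat):
        """Accuracy of a nearest-centroid rule using the first n_feat features."""
        Xtr, Xte = Xtr[:, :n_feat], Xte[:, :n_feat]
        c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
        pred = (np.linalg.norm(Xte - c1, axis=1) <
                np.linalg.norm(Xte - c0, axis=1)).astype(int)
        return (pred == yte).mean()

    accs = []
    for rep in range(10):                       # repeated random sampling
        idx = rng.permutation(len(y))
        outer_te, outer_tr = idx[:20], idx[20:]
        # Inner loop: pick n_feat using only the outer training set
        best = max(range(1, 11), key=lambda k: nearest_centroid_acc(
            X[outer_tr[:40]], y[outer_tr[:40]],
            X[outer_tr[40:]], y[outer_tr[40:]], k))
        # Outer loop: evaluate the tuned model on untouched test samples
        accs.append(nearest_centroid_acc(X[outer_tr], y[outer_tr],
                                         X[outer_te], y[outer_te], best))
    print(f"mean accuracy {np.mean(accs):.2f}")
    ```

    The key point is that the outer test samples never influence the hyperparameter choice, which is what makes the accuracy estimate unbiased.
    
    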

  9. Aerial Measuring System (AMS)/ Israel Atomic Energy Commission (IAEC) Joint Comparison Study

    International Nuclear Information System (INIS)

    Halevy, I.; Dadon, S.; Sheinfeld, M.; Broide, A.; Rofe, S.; Yaar, I.; Wasiolek, P.

    2014-01-01

    In support of the U.S. Department of Energy (DOE) International Emergency Management and Cooperation (IEMC/NA-46) Program, a comparison study of the U.S. and Israeli Aerial Measuring Systems (AMS) was proposed and accepted. The study, organized by the DOE/National Nuclear Security Administration (NNSA) Remote Sensing Laboratory (RSL), involved the DOE/NNSA Aerial Measuring System Project, based at the RSL and operated under a contractor agreement by National Security Technologies, LLC (NSTec), and the Israel Atomic Energy Commission (IAEC) Aerial Measuring System. The operational comparison was conducted at RSL-Nellis in Las Vegas, Nevada, during the week of June 24–27, 2013. The Israeli system, Air RAM 2000 (Figure 1, bottom), was shipped to RSL-Nellis and mounted together with the DOE Spectral Advanced Radiological Computer System, Model A (SPARCS-A, Figure 1, top), on a U.S. DOE Bell-412 helicopter for a series of aerial comparison measurements at local test ranges, including the Desert Rock Airport and Area 3 at the Nevada National Security Site (NNSS). A four-person Israeli team from the IAEC Nuclear Research Center – Negev (NRCN) supported the activity. The main objective of this joint comparison was to use the DOE/RSL Bell-412 helicopter platform to carry out a comparison study of the measuring techniques and radiation acquisition systems utilized for emergency response by the IAEC and NNSA AMS.

  10. Comparison of caries detection methods using varying numbers of intra-oral digital photographs with visual examination for epidemiology in children

    Science.gov (United States)

    2013-01-01

    Background This was a method comparison study. The aim of the study was to compare caries information obtained from a full-mouth visual examination using the method developed by the British Association for the Study of Community Dentistry (BASCD) for epidemiological surveys with caries data obtained from eight, six and four intra-oral digital photographs of index teeth in two groups of children aged 5 years and 10/11 years. Methods Five trained and calibrated examiners visually examined the whole mouth of 240 5-year-olds and 250 10-/11-year-olds using the BASCD method. The children also had intra-oral digital photographs taken of index teeth. The same five examiners assessed the intra-oral digital photographs (in groups of 8, 6 and 4 intra-oral photographs) for caries using the BASCD criteria; dmft/DMFT scores were used to compute the weighted kappa statistic as a measure of intra-examiner reliability and intra-class correlation coefficients as a measure of inter-examiner reliability for each method. A method comparison analysis was performed to determine the 95% limits of agreement for all five examiners, comparing the visual examination method with the photographic assessment method using 8, 6 and 4 intra-oral photographs. Results The intra-rater reliability for the visual examinations ranged from 0.81 to 0.94 in the 5-year-olds and 0.90 to 0.97 in the 10-/11-year-olds. For the photographic assessments in the 5-year-olds, the values were 0.86 to 0.94 for 8 intra-oral photographs, 0.85 to 0.98 for 6 intra-oral photographs and 0.80 to 0.96 for 4 intra-oral photographs; in the 10-/11-year-olds they were 0.84 to 1.00 for 8 intra-oral photographs, 0.82 to 1.00 for 6 intra-oral photographs and 0.72 to 0.98 for 4 intra-oral photographs. The 95% limits of agreement were −1.997 to 1.967, −2.375 to 2.735 and −2.250 to 2.921 respectively for the 5-year-olds and −2.614 to 2.027, −2.179 to 3.887 and −2.594 to 2.163 respectively for the 10-/11-year-olds. Conclusions The photographic
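    The 95% limits-of-agreement calculation used in method comparison analyses of this kind (Bland and Altman's approach) is just the mean paired difference plus or minus 1.96 standard deviations of the differences. A small sketch; the dmft scores below are synthetic illustration, not study data.

    ```python
    # Sketch of Bland-Altman 95% limits of agreement between two methods
    # scoring the same subjects (e.g. visual vs photographic caries counts).
    # The paired scores are invented for illustration.
    import numpy as np

    visual = np.array([0, 2, 1, 4, 3, 0, 5, 2, 1, 3], dtype=float)
    photo  = np.array([0, 1, 1, 5, 3, 1, 4, 2, 2, 3], dtype=float)

    d = photo - visual                      # per-subject differences
    bias = d.mean()                         # systematic offset between methods
    half_width = 1.96 * d.std(ddof=1)       # 1.96 * sample SD of differences
    loa_low, loa_high = bias - half_width, bias + half_width
    print(f"bias {bias:+.2f}, 95% limits of agreement "
          f"[{loa_low:.2f}, {loa_high:.2f}]")
    ```

    About 95% of future paired differences are expected to fall between the two limits, which is why narrow limits indicate that the photographic method can substitute for the visual one.
    
    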

  11. A Comparison Study on the Assessment of Ride Comfort for LRT Passengers

    Science.gov (United States)

    Tengku Munawir, Tengku Imran; Abqari Abu Samah, Ahmad; Afiq Akmal Rosle, Muhammad; Azlis-Sani, Jalil; Hasnan, Khalid; Sabri, S. M.; Ismail, S. M.; Yunos, Muhammad Nur Annuar Mohd; Yen Bin, Teo

    2017-08-01

    Ride comfort in railway transportation is complex, depending on various dynamic performance criteria as well as on subjective observation by train passengers. Vibration discomfort arising from factors such as vehicle condition, track condition and operating condition can lead to poor ride comfort, yet there is no universally applicable standard for analysing it; relevant factors include local conditions, vehicle condition and track condition. In this work, the level of ride comfort experienced by passengers of the former Adtranz-Walker light rapid transit (LRT) vehicles on the Ampang line was analysed. A comparison was made between two methods, BS EN 12299 (2009) and the Sperling ride index equation. The BS EN 12299 standard was used to measure and evaluate the ride comfort of seated (Nvd) and standing (Nva) train passengers on three different routes. Sperling's ride comfort equation was then used for validation and comparison of the obtained data. The results indicate a higher level of vibration in the vertical axis, which dominates the overall result. The standing position shows higher vibration exposure on all three tested routes. Comparison of the ride comfort assessment of passengers in the sitting and standing positions for both methods indicates that all the track sections exceed the "pronounced but not unpleasant (medium)" limit range. Nevertheless, the seating position at track section AU did not exceed the limit and remained in the comfortable zone. The highest discomfort levels reached for the seating position were 3.34 m/s2 for Nva and 2.63 m/s2 respectively, at route C uptrack, from Chan Sow Lin station to Sri Petaling station. Meanwhile, the highest discomfort levels for standing were 3.80 m/s2 for Nvd and 2.88 m/s2 for Wz respectively, at the uptrack section from Sri Petaling station to Chan Sow Lin station. Thus, the highest

  12. Is lower IQ in children with epilepsy due to lower parental IQ? A controlled comparison study

    Science.gov (United States)

    Walker, Natalie M; Jackson, Daren C; Dabbs, Kevin; Jones, Jana E; Hsu, David A; Stafstrom, Carl E; Sheth, Raj D; Koehn, Monica A; Seidenberg, Michael; Hermann, Bruce P

    2012-01-01

    Aim The aim of this study was to determine the relationship between parent and child full-scale IQ (FSIQ) in children with epilepsy and in typically developing comparison children, and to examine parent–child IQ differences by epilepsy characteristics. Method The study participants were 97 children (50 males, 47 females; age range 8–18y; mean age 12y 3mo, SD 3y 1mo) with recent-onset epilepsy, including idiopathic generalized (n=43) and idiopathic localization-related epilepsies (n=54); 69 healthy comparison children (38 females, 31 males; age range 8–18y; mean age 12y 8mo, SD 3y 2mo); and one biological parent per child. All participants were administered the Wechsler Abbreviated Intelligence Scale. FSIQ was compared between children with epilepsy and typically developing children; FSIQ was compared between the parents of typically developing children and the parents of participants with epilepsy; and parent–child FSIQ differences were compared between the groups. Results FSIQ was lower in children with epilepsy than in comparison children. The FSIQ of the parents of children with epilepsy did not differ from the FSIQ of the parents of typically developing children. Children with epilepsy had significantly lower FSIQ than their parents, and this parent–child difference was greater in the epilepsy group than in the comparison group (p=0.043). Epilepsy characteristics were not related to the parent–child IQ difference. Interpretation The parent–child IQ difference appears to be a marker of epilepsy impact independent of familial IQ, epilepsy syndrome, and clinical seizure features. This marker is evident early in the course of the idiopathic epilepsies and can be tracked over time. PMID:23216381

  13. Qualitative performance comparison of reactivity estimation between the extended Kalman filter technique and the inverse point kinetic method

    International Nuclear Information System (INIS)

    Shimazu, Y.; Rooijen, W.F.G. van

    2014-01-01

    Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, the selection of which is described in this paper. • The EKF algorithm is preferred if the signal-to-noise ratio is low (low-flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality, with good noise filtering and accuracy. The Inverse Point Kinetic (IPK) method has also been widely used for reactivity estimation. The important parameters for the EKF estimation are the process noise covariance and the measurement noise covariance, but their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, and the selection of this parameter is quite easy. Guidance is therefore needed on which method should be selected and how the required parameters should be chosen. From this point of view, a qualitative performance comparison is carried out.
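    For orientation, the IPK idea can be sketched with a single delayed-neutron group: the precursor balance is integrated forward from the measured flux, and the point-kinetics equation is then solved algebraically for reactivity at each time step. All parameters and the flux trace below are illustrative, not values from the paper.

    ```python
    # One-delayed-group sketch of Inverse Point Kinetics (IPK):
    #   dn/dt = ((rho - beta)/Lambda) * n + lambda * C
    #   dC/dt = (beta/Lambda) * n - lambda * C
    # Given a measured n(t), integrate C(t) and solve for rho(t):
    #   rho = beta + Lambda*(dn/dt)/n - Lambda*lambda*C/n
    import numpy as np

    beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)
    dt = 0.01
    t = np.arange(0, 10, dt)
    n = np.exp(0.05 * t)                  # synthetic "measured" flux on a stable period

    C = beta / (Lam * lam) * n[0]         # precursor concentration at equilibrium for n(0)
    rho = []
    for k in range(1, len(t)):
        C += dt * (beta / Lam * n[k] - lam * C)   # forward-Euler precursor update
        dndt = (n[k] - n[k - 1]) / dt             # numerical flux derivative
        rho.append(beta + Lam * dndt / n[k] - lam * Lam * C / n[k])

    print(f"estimated reactivity at t=10 s: {rho[-1] / beta:.3f} dollars")
    ```

    In a real implementation the noisy measured flux would first pass through the first-order delay filter whose time constant is the single IPK tuning parameter discussed in the abstract.
    
    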

  14. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamma-ray attenuation by the patient's tissues. A method of attenuation correction, which uses a single posteriorly located scintillation camera and correction factors derived from a lateral image of the stomach, was compared with a two-camera geometric mean method, in phantom studies and in five volunteer subjects. A meal of 100 g of ground beef containing 99mTc-labelled chicken liver, and 150 ml of water, was used in the in vivo studies. In all subjects the geometric mean data showed that solid food emptied in two phases: an initial lag period, followed by a linear emptying phase. Using the geometric mean data as a standard, the anterior camera overestimated the 50% emptying time (T/sub 50/) by an average of 15% (range 5-18) and the posterior camera underestimated this parameter by 15% (4-22). The posterior data, corrected for attenuation using the lateral image method, underestimated the T/sub 50/ by 2% (-7 to +7). The difference in the distances of the proximal and distal stomach from the posterior detector was large in all subjects (mean 5.7 cm, range 3.9-7.4).
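    The geometric mean method works because the anterior counts fall off as e^(-mu*d) and the posterior counts as e^(-mu*(D-d)), so their geometric mean, N*e^(-mu*D/2), is independent of the source depth d. A small sketch with hypothetical numbers demonstrates this depth independence:

    ```python
    # Sketch of the two-camera geometric-mean attenuation correction:
    # sqrt(anterior * posterior) cancels the unknown source depth.
    # mu, D and N are illustrative values, not from the study.
    import math

    mu, D, N = 0.12, 20.0, 5000.0        # attenuation coeff (1/cm), body thickness (cm), true counts
    for d in (5.0, 10.0, 15.0):          # source depth below the anterior surface
        A = N * math.exp(-mu * d)                # anterior camera counts
        P = N * math.exp(-mu * (D - d))          # posterior camera counts
        gm = math.sqrt(A * P)                    # = N * exp(-mu*D/2) for every depth
        print(f"depth {d:4.1f} cm: geometric mean = {gm:.1f} counts")
    ```

    A single posterior camera, by contrast, sees counts that vary with d, which is why the lateral-image correction factors described above are needed when only one detector is used.
    
    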

  15. Comparative study of two methods for determining the diffusible hydrogen content in welds

    International Nuclear Information System (INIS)

    Celio de Abreu, L.; Modenesi, P.J.; Villani-Marques, P.

    1994-01-01

    This work presents a comparative study of the methods for measurement of the amount of diffusible hydrogen in welds: glycerin, mercury and gaseous chromatography. The effect of the variables collecting temperatures and times were analyzed. Basic electrodes type AWS E 9018-M were humidified and dried at different times and temperatures in order to obtain a large variation in the diffusible hydrogen contents. The results showed that the collecting time can be reduced when the collecting temperature is raised, the mercury and chromatography methods present similar results, higher than those obtained by the glycerin method, the use of liquid nitrogen in the preparation of the specimens for test is unessential. The chromatography method presents the lower dispersion and is the method that can have the collecting time more reduced by the raising of the collecting temperature. The use of equations for comparison between results obtained by the various methods encountered in the literature is also discussed. (Author) 16 refs

  16. Comparison between different uncertainty propagation methods in multivariate analysis: An application in the bivariate case

    International Nuclear Information System (INIS)

    Mullor, R.; Sanchez, A.; Martorell, S.; Martinez-Alzamora, N.

    2011-01-01

    Safety-related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. A considerable number of studies published in the last decade in the field of R+C-based optimization consider uncertainties. They have demonstrated that including uncertainties in the optimization gives the decision maker insight into how uncertain the R+C results are and how this uncertainty matters, as it can change the outcome of the decision-making process. Several methods of uncertainty propagation, based on the theory of tolerance regions, have been proposed in the literature, depending on the particular characteristics of the output variables and their relations. In this context, this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of the results of these applications and the conclusions obtained are presented.
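    As a minimal illustration of the non-parametric flavour of such analyses, uncertainty in an input (here a failure rate) can be propagated by Monte Carlo sampling through simple R+C models and percentile bounds read off the resulting output distributions. The model forms, distributions and numbers below are invented for illustration and are not the paper's models.

    ```python
    # Sketch of non-parametric uncertainty propagation by Monte Carlo:
    # sample an uncertain failure rate, push it through hypothetical
    # reliability and cost models for a test interval T, then take
    # distribution-free percentile bounds on the outputs.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 720.0                                    # test interval in hours (decision variable)
    lam = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=10_000)  # failure rate /h

    unavail = lam * T / 2.0                      # standby unavailability ~ lambda*T/2
    cost = 50.0 * 8760.0 / T + 1e6 * unavail     # testing cost + risk cost (arbitrary units)

    for name, out in (("unavailability", unavail), ("cost", cost)):
        lo, hi = np.percentile(out, [5, 95])
        print(f"{name}: 90% interval [{lo:.3g}, {hi:.3g}]")
    ```

    Repeating this for a grid of T values would turn the point-valued (R, C) trade-off into an interval-valued one, which is the kind of information the abstract argues a decision maker needs.
    
    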

  17. Comparison between different uncertainty propagation methods in multivariate analysis: An application in the bivariate case

    Energy Technology Data Exchange (ETDEWEB)

    Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)

    2011-06-15

    Safety-related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. A considerable number of studies published in the last decade in the field of R+C-based optimization consider uncertainties. They have demonstrated that including uncertainties in the optimization gives the decision maker insight into how uncertain the R+C results are and how this uncertainty matters, as it can change the outcome of the decision-making process. Several methods of uncertainty propagation, based on the theory of tolerance regions, have been proposed in the literature, depending on the particular characteristics of the output variables and their relations. In this context, this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of the results of these applications and the conclusions obtained are presented.

  18. International Comparisons of Income Poverty and Extreme Income Poverty

    OpenAIRE

    Blackburn, McKinley L.

    1993-01-01

    Uses LIS data to study the sensitivity of cross-national income poverty comparisons to the method by which poverty is measured. Examined are the differences between absolute and relative poverty comparisons, as well as the consequences of lowering the real value of the poverty line to examine extreme poverty.
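    The absolute-vs-relative distinction above can be illustrated in a few lines: an absolute line is fixed in real income, a relative line is a fraction of the median and so moves with the distribution, and an "extreme" line is a lower cutoff obtained by reducing the real value of the line. Incomes and thresholds below are synthetic, not LIS data.

    ```python
    # Sketch of absolute vs relative vs extreme poverty headcounts on a
    # synthetic log-normal income distribution. All thresholds illustrative.
    import numpy as np

    rng = np.random.default_rng(7)
    income = rng.lognormal(mean=10.0, sigma=0.7, size=100_000)

    absolute_line = 10_000.0                 # fixed real-income threshold
    relative_line = 0.5 * np.median(income)  # moves with the distribution
    extreme_line = 0.5 * absolute_line       # lowered real value -> extreme poverty

    for name, line in (("absolute", absolute_line),
                       ("relative", relative_line),
                       ("extreme", extreme_line)):
        rate = (income < line).mean()        # headcount ratio
        print(f"{name:8s} line {line:8.0f}: headcount {rate:.1%}")
    ```

    Because the relative line tracks the median, cross-national rankings can differ sharply between the two definitions even when the underlying distributions are fixed, which is the sensitivity the paper studies.
    
    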

  19. Evaluation of head and neck cancer with 18F-FDG PET: a comparison with conventional methods

    International Nuclear Information System (INIS)

    Kresnik, E.; Mikosch, P.; Gallowitsch, H.J.; Heinisch, M.; Unterweger, O.; Kumnig, G.; Gomez, I.; Lind, P.; Kogler, D.; Wieser, S.; Gruenbacher, G.; Raunik, W.

    2001-01-01

    The aim of this study was to evaluate the usefulness of 18F-FDG PET in the diagnosis and staging of primary and recurrent malignant head and neck tumours in comparison with conventional imaging methods [including ultrasonography, radiography, computed tomography (CT) and magnetic resonance imaging (MRI)], physical examination, panendoscopy and biopsies in clinical routine. A total of 54 patients (13 female, 41 male, age 61.3±12 years) were investigated retrospectively. Three groups were formed. In group I, 18F-FDG PET was performed in 15 patients to detect unknown primary cancers. In group II, 24 studies were obtained for preoperative staging of proven head and neck cancer. In group III, 18F-FDG PET was used in 15 patients to monitor tumour recurrence after radiotherapy and/or chemotherapy. In all patients, imaging was performed 70 min after the intravenous administration of 180 MBq 18F-FDG. In 11 of the 15 patients in group I, the primary cancer could be found with 18F-FDG, yielding a detection rate of 73.3%. In 4 of the 15 patients, CT findings were also suggestive of the primary cancer but were nonetheless equivocal; in these patients, 18F-FDG PET showed increased 18F-FDG uptake by the primary tumour, which was confirmed by histology. One patient had recurrence of breast carcinoma that could not be detected with 18F-FDG PET, but was detected by CT. In three cases, the primary cancer could not be found with any imaging method. Among the 24 patients in group II investigated for staging purposes, 18F-FDG PET detected a total of 13 local and three distant lymph node metastases, whereas the conventional imaging methods detected only nine local and one distant lymph node metastasis. The results of 18F-FDG PET led to upstaging in 5/24 (20.8%) patients. The conventional imaging methods were false positive in 5/24 (20.8%); there was one false positive result using 18F-FDG PET. Among the 15 patients of group III with suspected recurrence after radiotherapy

  20. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-10-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview is presented of RTOD/E capabilities, together with the results of a study comparing the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least-squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistency of the results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.
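    The batch-vs-sequential comparison can be illustrated on a toy problem: when estimating a constant from noisy measurements, a batch least-squares fit (here simply the sample mean) and a sequential Kalman-style update converge to essentially the same estimate, mirroring the small steady-state differences reported above. The data, noise level and prior are synthetic.

    ```python
    # Toy sketch: batch least squares vs a sequential (Kalman-style)
    # estimator of a constant state, run on the same measurement set.
    import numpy as np

    rng = np.random.default_rng(3)
    truth = 7000.0                               # constant to estimate (e.g. km)
    z = truth + rng.normal(0.0, 0.04, 200)       # noisy measurements, 40 m sigma

    batch = z.mean()                             # batch least-squares solution

    x, P, R = 0.0, 1e6, 0.04 ** 2                # sequential: state, variance, meas. noise
    for zk in z:
        K = P / (P + R)                          # Kalman gain (static state, no dynamics)
        x = x + K * (zk - x)                     # measurement update
        P = (1 - K) * P                          # variance update

    print(f"batch-sequential difference: {abs(batch - x) * 1000:.4f} m")
    ```

    With a diffuse prior (large initial P), the sequential estimate reproduces the batch solution almost exactly; real orbit determination adds dynamics between updates, which is where the two approaches genuinely diverge.
    
    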

  2. TL glow ratios at different temperature intervals of integration in thermoluminescence method. Comparison of Japanese standard (MHLW notified) method with CEN standard methods

    International Nuclear Information System (INIS)

    Todoriki, Setsuko; Saito, Kimie; Tsujimoto, Yuka

    2008-01-01

    The effect of the temperature interval used to integrate TL intensities on the TL glow ratio was examined by comparing the notified method of the Ministry of Health, Labour and Welfare (MHLW method) with EN 1788. Two kinds of un-irradiated geological standard rock and three kinds of spices (black pepper, turmeric and oregano) irradiated at 0.3 kGy or 1.0 kGy were subjected to TL analysis. Although the TL glow ratio exceeded 0.1 in the andesite when calculated by the MHLW notified method (integration interval 70-490°C), the maximum of the first glow was observed at 300°C or higher; this was attributed to the influence of natural radioactivity and could be distinguished from food irradiation. When the integration interval was set to 166-227°C according to EN 1788, the TL glow ratios became markedly smaller than 0.1, and the evaluation of the un-irradiated samples became clearer. For the spices, the TL glow ratios by the MHLW notified method fell below 0.1 in un-irradiated samples and exceeded 0.1 in irradiated ones. Moreover, the Glow 1 maximum temperatures of the irradiated samples were observed in the range of 168-196°C, while those of un-irradiated samples were 258°C or higher. Therefore, all samples were correctly judged by the criteria of the MHLW method. However, with the integration temperature range defined by EN 1788, the TL glow ratios of un-irradiated samples became markedly smaller than with the MHLW method, and the discrimination of irradiated from un-irradiated samples became clearer. (author)
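
    The effect of the integration window on the glow ratio can be sketched with synthetic glow curves. The peak positions, widths and amplitudes below are hypothetical, chosen only to mimic an un-irradiated mineral whose natural-radioactivity signal peaks near 300°C; the two window limits are the ones quoted above.

```python
import math

def gaussian(t, mu, sigma, a):
    # A single TL peak modelled as a Gaussian in temperature.
    return a * math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def glow_ratio(glow1, glow2, t_lo, t_hi, step=1.0):
    # TL glow ratio: integral of Glow 1 divided by integral of Glow 2
    # over the chosen temperature window (simple rectangle sum).
    n = int((t_hi - t_lo) / step)
    g1 = sum(glow1(t_lo + i * step) for i in range(n)) * step
    g2 = sum(glow2(t_lo + i * step) for i in range(n)) * step
    return g1 / g2

# Hypothetical curves for an UN-irradiated mineral: Glow 1 has only a
# high-temperature peak (~310 °C) from natural radioactivity, while the
# normalising Glow 2 (after a calibration dose) peaks near 190 °C.
glow1 = lambda t: gaussian(t, 310, 25, 1.0)
glow2 = lambda t: gaussian(t, 190, 25, 5.0)

wide = glow_ratio(glow1, glow2, 70, 490)     # MHLW-style window
narrow = glow_ratio(glow1, glow2, 166, 227)  # EN 1788-style window
print(wide > 0.1, narrow < 0.1)
```

    The wide window integrates the natural-radioactivity peak and pushes the ratio above 0.1, while the narrow EN 1788-style window excludes it, matching the behaviour reported for the andesite sample.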

  3. A study of drying and cleaning methods used in preparation for fluorescent penetrant inspection - Part II

    International Nuclear Information System (INIS)

    Brasche, L.; Lopez, R.; Larson, B.

    2003-01-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aerospace components such as critical rotating parts of gas turbine engines. Successful use of FPI begins with a clean and dry part, followed by a carefully controlled and applied FPI process and a conscientious inspection by well-trained personnel. A variety of cleaning methods are in use for titanium and nickel parts, with selection based on the soils or contamination to be removed. Cleaning methods may be chemical or mechanical, and sixteen different types were studied as part of this program. Several options also exist for drying parts prior to FPI. Samples were generated and exposed to a range of conditions to study the effect of both drying and cleaning methods on the flaw response of FPI. Low cycle fatigue (LCF) cracks were generated in approximately 40 nickel and 40 titanium samples for evaluation of the various cleaning methods. Baseline measurements were made for each sample using a photometer to measure sample brightness and a UVA videomicroscope to capture digital images of the FPI indications. Samples were then exposed to various contaminants, cleaned and inspected, and brightness measurements and digital images were again taken for comparison with the baseline data. A comparison of oven drying to flash drying in preparation for FPI has been completed and is reported in Part I; a comparison of the effectiveness of the various cleaning methods for the contaminants is presented in Part II. The cleaning and drying studies were completed in cooperation with Delta Airlines using cleaning, drying and FPI processes typical of engine overhaul, and the work was completed as part of the Engine Titanium Consortium with investigators from Honeywell, General Electric, Pratt and Whitney, and Rolls-Royce.

  4. A study of drying and cleaning methods used in preparation for fluorescent penetrant inspection - Part I

    International Nuclear Information System (INIS)

    Brasche, L.; Lopez, R.; Larson, B.

    2003-01-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aerospace components such as critical rotating parts of gas turbine engines. Successful use of FPI begins with a clean and dry part, followed by a carefully controlled and applied FPI process and a conscientious inspection by well-trained personnel. A variety of cleaning methods are in use for titanium and nickel parts, with selection based on the soils or contamination to be removed. Cleaning methods may be chemical or mechanical, and sixteen different types were studied as part of this program. Several options also exist for drying parts prior to FPI. Samples were generated and exposed to a range of conditions to study the effect of both drying and cleaning methods on the flaw response of FPI. Low cycle fatigue (LCF) cracks were generated in approximately 40 nickel and 40 titanium samples for evaluation of the various cleaning methods. Baseline measurements were made for each sample using a photometer to measure sample brightness and a UVA videomicroscope to capture digital images of the FPI indications. Samples were then exposed to various contaminants, cleaned and inspected, and brightness measurements and digital images were again taken for comparison with the baseline data. A comparison of oven drying to flash drying in preparation for FPI has been completed and is reported in Part I; a comparison of the effectiveness of the various cleaning methods for the contaminants is presented in Part II. The cleaning and drying studies were completed in cooperation with Delta Airlines using cleaning, drying and FPI processes typical of engine overhaul, and the work was completed as part of the Engine Titanium Consortium with investigators from Honeywell, General Electric, Pratt and Whitney, and Rolls-Royce.

  5. Comparison of a Label-Free Quantitative Proteomic Method Based on Peptide Ion Current Area to the Isotope Coded Affinity Tag Method

    Directory of Open Access Journals (Sweden)

    Young Ah Goo

    2008-01-01

    Full Text Available Recently, several research groups have published methods for determining proteomic expression profiles by mass spectrometry without the use of exogenously added stable isotopes or stable isotope dilution theory. These so-called label-free methods have the advantage of allowing data on each sample to be acquired independently of all other samples, to which they can later be compared in silico for the purpose of measuring changes in protein expression between various biological states. We developed label-free software based on direct measurement of the peptide ion current area (PICA) and compared it to two other methods: a simpler label-free method known as spectral counting, and the isotope coded affinity tag (ICAT) method. Data analysis by these methods of a standard mixture containing proteins of known but varying concentrations showed that they performed similarly, with a mean squared error of 0.09. Additionally, complex bacterial protein mixtures spiked with known concentrations of standard proteins were analyzed using the PICA label-free method. These results indicated that the PICA method detected all levels of spiked standard proteins at the 90% confidence level in this complex biological sample. This finding confirms that label-free methods based on direct measurement of the area under a single ion current trace perform as well as the standard ICAT method. Given that label-free methods provide ease of experimental design well beyond pair-wise comparison, methods such as our PICA method are well suited for proteomic expression profiling of the large numbers of samples needed in clinical analysis.
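
    The two label-free measures discussed above can be sketched in a few lines. The traces and identification lists below are invented; the functions show only the arithmetic behind PICA (area under an extracted ion current trace) and spectral counting, not the authors' software.

```python
def ion_current_area(trace):
    # Peptide ion current area (PICA): trapezoidal integral of the
    # extracted ion chromatogram, given as (retention_time, intensity).
    area = 0.0
    for (t0, i0), (t1, i1) in zip(trace, trace[1:]):
        area += 0.5 * (i0 + i1) * (t1 - t0)
    return area

def spectral_count(identifications, protein):
    # Spectral counting: number of MS/MS spectra matched to the protein.
    return sum(1 for p in identifications if p == protein)

# Hypothetical traces for the same peptide in two samples; sample B is
# loaded at twice the concentration of sample A.
trace_a = [(0, 0), (1, 10), (2, 40), (3, 10), (4, 0)]
trace_b = [(0, 0), (1, 20), (2, 80), (3, 20), (4, 0)]

ratio = ion_current_area(trace_b) / ion_current_area(trace_a)
print(ratio)  # → 2.0
```

    On these clean synthetic traces the area ratio recovers the two-fold loading difference exactly; real data need peak detection and noise handling first.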

  6. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    Science.gov (United States)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and executive challenges, underground excavation in urban areas is always accompanied by destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the three models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predictions with the actual instrumentation data was used to quantify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
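
    The model-uncertainty comparison reduces to computing relative errors against the instrumentation value. In the sketch below the observed settlement (1.58 cm) is an assumption, back-calculated so that the FDM model's quoted 3.8% error is approximately reproduced, since the instrumentation value itself is not given here; the three predictions are the ones quoted above.

```python
def relative_error(predicted, observed):
    # Relative error of a model prediction against instrumentation data,
    # expressed as a percentage.
    return abs(predicted - observed) / observed * 100.0

observed = 1.58  # cm, ASSUMED observed maximum settlement (illustrative)
models = {"empirical (Peck)": 1.86,
          "analytical (Loganathan-Poulos)": 2.02,
          "numerical (FDM)": 1.52}
errors = {name: relative_error(v, observed) for name, v in models.items()}
best = min(errors, key=errors.get)
print(best)  # → numerical (FDM)
```

    With this assumed observation the computed errors come out near the paper's 3.8% (numerical) and 27.8% (analytical) figures, and the numerical model is ranked best.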

  7. Age modelling for Pleistocene lake sediments: A comparison of methods from the Andean Fúquene Basin (Colombia) case study

    NARCIS (Netherlands)

    Groot, M.H.M.; van der Plicht, J.; Hooghiemstra, H.; Lourens, L.J.; Rowe, H.D.

    2014-01-01

    Challenges and pitfalls for developing age models for long lacustrine sedimentary records are discussed and a comparison is made between radiocarbon dating, visual curve matching, and frequency analysis in the depth domain in combination with cyclostratigraphy. A core section of the high resolution

  9. COMPARISON BETWEEN MIXED INTEGER PROGRAMMING WITH HEURISTIC METHOD FOR JOB SHOP SCHEDULING WITH SEPARABLE SEQUENCE-DEPENDENT SETUPS

    Directory of Open Access Journals (Sweden)

    I Gede Agus Widyadana

    2001-01-01

    Full Text Available The choice of an appropriate tool for solving an industrial problem should consider not only whether the tool achieves an optimal solution but also its computation time. One industrial problem for which it is still difficult to satisfy both criteria is scheduling. This paper discusses a comparison between mixed integer programming, which yields an optimal solution, and a heuristic method for solving the job shop scheduling problem with separable sequence-dependent setups. Test problems were generated, and the results show that the heuristic method still cannot guarantee an optimal solution.
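
    The optimality-versus-speed trade-off can be sketched on a single-machine analogue with sequence-dependent setups. The data below are invented, and exhaustive enumeration stands in for the MIP's optimality guarantee (a real MIP formulation would use a solver); the greedy rule is one simple heuristic of the kind being compared.

```python
from itertools import permutations

# Hypothetical processing times and separable sequence-dependent setup
# matrix for four jobs on one machine (setup[i][j] = setup time when
# job j immediately follows job i).
proc = [4, 3, 5, 2]
setup = [[0, 2, 3, 1],
         [2, 0, 1, 3],
         [3, 1, 0, 2],
         [1, 3, 2, 0]]

def makespan(seq):
    total = proc[seq[0]]
    for a, b in zip(seq, seq[1:]):
        total += setup[a][b] + proc[b]
    return total

def optimal():
    # Exhaustive enumeration: the optimality a MIP solver guarantees,
    # at a cost that grows factorially with the number of jobs.
    return min(permutations(range(len(proc))), key=makespan)

def greedy():
    # Heuristic: start with job 0, then repeatedly pick the job with the
    # smallest setup after the current one; fast but not always optimal.
    seq, rest = [0], set(range(1, len(proc)))
    while rest:
        nxt = min(rest, key=lambda j: setup[seq[-1]][j])
        seq.append(nxt)
        rest.remove(nxt)
    return seq

opt, heur = optimal(), greedy()
print(makespan(list(opt)) <= makespan(heur))  # → True
```

    On this tiny instance the greedy sequence happens to reach the optimal makespan; on larger instances the gap between the two is exactly what the paper's comparison measures.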

  10. Monitoring of urinary calcium and phosphorus excretion in preterm infants: comparison of 2 methods.

    Science.gov (United States)

    Staub, Eveline; Wiedmer, Nicolas; Staub, Lukas P; Nelle, Mathias; von Vigier, Rodo O

    2014-04-01

    Premature babies require supplementation with calcium (Ca) and phosphorus (P) to prevent metabolic bone disease of prematurity. To guide mineral supplementation, two methods of monitoring the urinary excretion of Ca and P are used: urinary Ca or P concentration, and Ca/creatinine (Crea) or P/Crea ratios. We compared these two methods with regard to their agreement on the need for mineral supplementation. Retrospective chart review of 230 premature babies with birth weight patient characteristics to predict discordant results. A total of 24.8% of cases did not agree on the indication for Ca supplementation, and 8.8% for P. Total daily Ca intake was the only patient characteristic associated with discordant results. With the intention to supplement the respective mineral, agreement between urinary mineral concentration and the mineral/Crea ratio was moderate for Ca and good for P. The results do not allow either method to be identified as superior for deciding which babies require Ca and/or P supplements.
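
    The discordance analysis amounts to classifying each sample under both monitoring criteria and counting disagreements. The cutoffs and sample values below are purely illustrative, not the clinical thresholds used in the study:

```python
def needs_ca_by_concentration(ca_mmol_l, cutoff=1.0):
    # Criterion 1: a low urinary calcium concentration suggests
    # supplementation is needed (cutoff is illustrative, not clinical).
    return ca_mmol_l < cutoff

def needs_ca_by_ratio(ca_mmol_l, crea_mmol_l, cutoff=0.5):
    # Criterion 2: a low calcium/creatinine ratio (cutoff illustrative).
    return ca_mmol_l / crea_mmol_l < cutoff

# Hypothetical (Ca, Crea) pairs in mmol/L.
samples = [(0.8, 2.0), (1.5, 2.0), (0.9, 1.2), (1.2, 3.0)]
discordant = sum(
    needs_ca_by_concentration(ca) != needs_ca_by_ratio(ca, cr)
    for ca, cr in samples)
frac = discordant / len(samples)
print(frac)  # → 0.5
```

    The fraction of discordant classifications is the quantity the study reports (24.8% for Ca, 8.8% for P); urine concentration drives one criterion while creatinine normalisation drives the other, which is why dilute or concentrated urines produce disagreement.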

  11. Comparison of modern and traditional methods of soil sorption complex measurement: the basis of long-term studies and modelling

    Directory of Open Access Journals (Sweden)

    Kučera Aleš

    2014-03-01

    Full Text Available This paper presents the correlations between two different analytical methods for assessing soil nutrient contents. Soil nutrient contents measured using flame atomic absorption spectrometry (FAAS) with barium chloride extraction were compared with those of the now-unused Gedroiz method, which uses ammonium chloride extraction (calcium determined by titration; magnesium, potassium and sodium by weighing). Natural forest soils from the Ukrainian Carpathians at the localities of Javorník and Pop Ivan were used. Despite the risk of analytical errors during the complicated procedure, the results showed a high level of correlation between the nutrient content measurements across the whole soil profile. This allows concentration values reported in older studies to be linearly recalculated onto the scale of the modern method. In this way, soil chemical changes over time can be studied using soil samples that were analysed in the past with labour-intensive and time-consuming methods carrying a higher risk of analytical error.
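
    The linear recalculation between the two analytical methods is an ordinary least-squares fit on paired measurements. The paired values below are invented for illustration; only the fitting arithmetic is the point, not the paper's actual regression coefficients.

```python
def linear_fit(x, y):
    # Ordinary least-squares fit y = a*x + b, used to map historical
    # (old-method) concentrations onto the modern method's scale.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical paired measurements of the same samples: Gedroiz
# (ammonium chloride) vs FAAS (barium chloride), in cmol+/kg.
old = [2.0, 4.0, 6.0, 8.0]
new = [2.3, 4.1, 5.9, 7.7]
a, b = linear_fit(old, new)
recalculated = a * 5.0 + b  # an archived old-method value of 5.0
print(round(recalculated, 2))  # → 5.0
```

    Once the slope and intercept are fixed from paired samples, any archived concentration can be converted to the modern scale with the same line, which is the recalculation the paper proposes.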

  12. Comparison of two methods of therapy level calibration at 60Co gamma beams

    International Nuclear Information System (INIS)

    Bjerke, H.; Jaervinen, H.; Grimbergen, T.W.M.; Grindborg, J.E.; Chauvenet, B.; Czap, L.; Ennow, K.; Moretti, C.; Rocha, P.

    1998-01-01

    The accuracy and traceability of the calibration of radiotherapy dosimeters is of great concern to those involved in the delivery of radiotherapy. It has been proposed that calibration should be carried out directly in terms of absorbed dose to water, instead of using the conventional and widely applied quantity of air kerma. In this study, the faithfulness in disseminating standards of both air kerma and absorbed dose to water was evaluated, through comparison of both types of calibration for three types of commonly used radiotherapy dosimeters at 60Co gamma beams at a few secondary and primary standard dosimetry laboratories (SSDLs and PSDLs). A supplementary aim was to demonstrate the impact which the change in the method of calibration would have on clinical dose measurements at the reference point. Within the estimated uncertainties, both the air kerma and absorbed dose to water calibration factors obtained at different laboratories were regarded as consistent. As might be expected, between the SSDLs traceable to the same PSDL the observed differences were smaller (less than 0.5%) than between PSDLs or SSDLs traceable to different PSDLs (up to 1.5%). This can mainly be attributed to the reported differences between the primary standards. The calibration factors obtained by the two methods differed by up to about 1.5% depending on the primary standards involved and on the parameters of calculation used for 60Co gamma radiation. It is concluded that this discrepancy should be settled before the new method of calibration at 60Co gamma beams in terms of absorbed dose to water is taken into routine use. (author)

  14. Study of shielding analysis methods for casks of spent fuel and radioactive waste

    International Nuclear Information System (INIS)

    Saito, Ai

    2017-01-01

    Casks are used for the storage or transport of spent fuel and radioactive waste. Because high shielding performance is required, it is very important to confirm the validity of shielding analysis methods in order to evaluate cask shielding abilities appropriately. For this purpose, the following studies were carried out. 1) A series of parameter surveys for several codes to evaluate the differences in their results. 2) Calculations using the MCNP code are effective and theoretically more accurate; however, setting reasonable variance reduction parameters is indispensable. Therefore, the effectiveness of the ADVANTG code, which automatically produces reasonable variance reduction parameters, was evaluated by comparison with the conventional method. As a result, the validity of the shielding analysis methods for casks was confirmed. The results will be taken into consideration in our future shielding analyses. (author)

  15. Robustness of phase retrieval methods in x-ray phase contrast imaging: A comparison

    International Nuclear Information System (INIS)

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2011-01-01

    Purpose: The robustness of phase retrieval methods is of critical importance for limiting and reducing the radiation doses involved in x-ray phase contrast imaging. This work compares the robustness of two phase retrieval methods by analyzing the phase maps retrieved from experimental images of a phantom. Methods: Two phase retrieval methods were compared. One is based on the transport of intensity equation (TIE) for phase contrast projections; the TIE-based method is the most commonly used method for phase retrieval in the literature. The other is the recently developed attenuation-partition based (AP-based) phase retrieval method. The authors applied these two methods to experimental projection images of an air-bubble wrap phantom to retrieve the phase map of the bubble wrap, and the retrieved phase maps obtained with the two methods were compared. Results: In the wrap's phase map retrieved using the TIE-based method, no bubble is recognizable; hence, this method failed completely for phase retrieval from these bubble wrap images. Even with the help of Tikhonov regularization, the bubbles are still hardly visible and remain buried in the cluttered background of the retrieved phase map, and the retrieved phase values with this method are grossly erroneous. In contrast, in the wrap's phase map retrieved using the AP-based method, the bubbles are clearly recovered, and the retrieved phase values are reasonably close to the estimate based on the thickness-based measurement. The authors traced these stark performance differences to the different techniques the two methods employ to deal with the singularity problem involved in phase retrieval. Conclusions: This comparison shows that the conventional TIE-based phase retrieval method, whether or not Tikhonov regularization is used, is unstable against the noise in the wrap's projection images, while the AP-based phase retrieval method is shown in these

  16. The Comparison Study of Quadratic Infinite Beam Program on Optimization Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) between Threshold and Exponential Scatter Method with CERR® in the Case of Lung Cancer

    International Nuclear Information System (INIS)

    Hardiyanti, Y; Haekal, M; Waris, A; Haryanto, F

    2016-01-01

    This research compares a quadratic optimization program for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) with the Computational Environment for Radiotherapy Research (CERR) software. The treatment plans used 9 and 13 beams, with an energy of 6 MV and a Source Skin Distance (SSD) of 100 cm from the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) algorithm from CERR, and CERR was used for the comparison between the Gauss primary threshold method and the Gauss primary exponential method. In the lung cancer case, threshold values of 0.01 and 0.004 were used. The output dose distributions were analysed in the form of DVHs from CERR. When the dose calculation used the exponential method with 9 beams, the maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin. When the dose calculation used the threshold method with 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin. (paper)

  17. Comparison of VOC measurements in Nashville, TN, during the southern oxidants study (SOS) 1999

    International Nuclear Information System (INIS)

    Grabmer, W.; Wisthaler, A.; Hansel, A.; Stroud, C.; Roberts, J.M.; Fehsenfeld, F.C.

    2002-01-01

    Full text: During the Southern Oxidants Study (SOS) 1999 Nashville campaign, ambient air samples were analyzed at Cornelia Fort Airport (CFA) for organic compounds by two independent methods: 1) a gas chromatographic (GC) system operated by NOAA's Aeronomy Laboratory, which performed immediate analysis of collected samples, and 2) an in situ proton transfer reaction mass spectrometer (PTR-MS) operated by the University of Innsbruck. The sampling protocols of the two methods were quite different. The GC system sequentially collected and analyzed air samples for VOCs every 60 minutes. The in situ PTR-MS system measured more than 20 VOCs on a time-shared basis, for 5 to 15 seconds each, once every 5 minutes. The PTR-MS system is not able to distinguish between isobaric species; therefore, the acetone and propanal (and MVK and MACR) values measured by NOAA's GC were summed prior to comparison with the respective PTR-MS values. For all species mentioned above, the different measurement methods show good agreement. (author)
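
    The summing of isobaric species before comparison can be sketched directly; the concentrations and group labels below are invented, and m/z 59 is used only as the familiar example of the protonated acetone/propanal mass.

```python
def merge_isobars(gc, isobar_groups):
    # PTR-MS cannot separate isobaric species, so GC values for species
    # sharing a (protonated) nominal mass are summed before comparison.
    merged = dict(gc)
    for name, members in isobar_groups.items():
        merged[name] = sum(merged.pop(m) for m in members)
    return merged

# Hypothetical GC mixing ratios in ppbv.
gc = {"acetone": 1.2, "propanal": 0.3, "isoprene": 0.8}
merged = merge_isobars(gc, {"m/z 59": ["acetone", "propanal"]})
print(merged["m/z 59"])  # → 1.5
```

    The merged GC value is then compared species-for-species against the PTR-MS channel at the same mass, which is how the agreement above was assessed.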

  19. Accommodating error analysis in comparison and clustering of molecular fingerprints.

    Science.gov (United States)

    Salamon, H; Segal, M R; Ponce de Leon, A; Small, P M

    1998-01-01

    Molecular epidemiologic studies of infectious diseases rely on pathogen genotype comparisons, which usually yield patterns comprising sets of DNA fragments (DNA fingerprints). We use a highly developed genotyping system, IS6110-based restriction fragment length polymorphism analysis of Mycobacterium tuberculosis, to develop a computational method that automates comparison of large numbers of fingerprints. Because error in fragment length measurements is proportional to fragment length and is positively correlated for fragments within a lane, an align-and-count method that compensates for relative scaling of lanes reliably counts matching fragments between lanes. Results of a two-step method we developed to cluster identical fingerprints agree closely with 5 years of computer-assisted visual matching among 1,335 M. tuberculosis fingerprints. Fully documented and validated methods of automated comparison and clustering will greatly expand the scope of molecular epidemiology.
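
    An align-and-count comparison with length-proportional error can be sketched as follows; the fragment lengths, tolerance and scaling factor are hypothetical, not the parameters of the published method.

```python
def count_matches(lane_a, lane_b, rel_tol=0.02, scale=1.0):
    # Align-and-count sketch: fragments match when their lengths agree
    # within a tolerance proportional to fragment length, after applying
    # a relative scaling factor between lanes (each fragment in lane B
    # is used at most once).
    matched, used = 0, set()
    for fa in lane_a:
        for i, fb in enumerate(lane_b):
            if i in used:
                continue
            if abs(fa - scale * fb) <= rel_tol * fa:
                matched += 1
                used.add(i)
                break
    return matched

# Hypothetical fragment lengths (bp); lane B ran ~1% short overall.
lane_a = [1000, 1850, 2600, 4100]
lane_b = [990, 1835, 2570, 3300]
print(count_matches(lane_a, lane_b, scale=1.01))  # → 3
```

    The proportional tolerance captures the observation that measurement error grows with fragment length, and the `scale` parameter captures the lane-to-lane correlation that the align step compensates for.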

  20. Comparison of Methods of Teaching Children Proper Lifting ...

    African Journals Online (AJOL)

    Objective: This study was designed to determine the effects of three teaching methods on children\\'s ability to demonstrate and recall their mastery of proper lifting techniques. Method: Ninety-three primary five and six public school children who had no knowledge of proper lifting technique were assigned into three equal ...