WorldWideScience

Sample records for methods a comparison study

  1. a comparison of methods in a behaviour study of the south african ...

    African Journals Online (AJOL)

A COMPARISON OF METHODS IN A BEHAVIOUR STUDY OF THE ... Three methods are outlined in this paper and the results obtained from each method were ... There was definitely no aggressive response towards the sky-pointing mate.

  2. A simple statistical method for catch comparison studies

    DEFF Research Database (Denmark)

    Holst, René; Revill, Andrew

    2009-01-01

For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve ... with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns. Crown Copyright (C...

  3. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used: The Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors produced, using the Mean Squared Error (MSE), the Mean Absolute Percentage Error (MAPE), and the Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better long-term forecasts with limited data sources, but cannot produce a better prediction for a time series whose values move within a narrow range from one point to the next, as in the Exchange Rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the Exchange Rate series, with its narrow range from one point to the next, but cannot produce a better prediction over a longer forecasting period.
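The three accuracy measures used in this comparison are easy to state directly. A minimal sketch, with a toy series rather than the palm oil, exchange rate or rubber data:

```python
def forecast_errors(actual, predicted):
    """Mean Squared Error, Mean Absolute Percentage Error (in %) and
    Mean Absolute Deviation for paired actual/forecast values."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errors) / n
    mape = sum(abs(e / a) for e, a in zip(errors, actual)) / n * 100
    mad = sum(abs(e) for e in errors) / n
    return mse, mape, mad

# Toy series (hypothetical prices), not the paper's data
actual = [100.0, 102.0, 101.0, 105.0]
predicted = [98.0, 103.0, 102.0, 104.0]
mse, mape, mad = forecast_errors(actual, predicted)
print(mse, mape, mad)
```

Whichever model yields the smaller MSE, MAPE and MAD on held-out points is judged the better forecaster, which is exactly the comparison the study performs.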

  4. Study and comparison of different methods control in light water critical facility

    International Nuclear Information System (INIS)

    Michaiel, M.L.; Mahmoud, M.S.

    1980-01-01

The control of nuclear reactors may be studied using several control methods, such as control by rod absorbers, by inserting or removing fuel rods (moderator cavities), or by changing the reflector thickness. Every method has its advantages; the comparison between these different methods and their effect on the reactivity of a reactor is the purpose of this work. A computer program was written by the authors to calculate the critical radius and the worth for each of the three preceding control methods.

  5. Soybean allergen detection methods--a comparison study

    DEFF Research Database (Denmark)

    Pedersen, M. Højgaard; Holzhauser, T.; Bisson, C.

    2008-01-01

Soybean-containing products are widely consumed, so reliable methods for the detection of soy in foods are needed in order to perform appropriate risk assessment studies and adequately protect soy-allergic patients. Six methods were compared using eight food products with a declared content of soy...

  6. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.

  7. Gene Expression Profiles for Predicting Metastasis in Breast Cancer: A Cross-Study Comparison of Classification Methods

    Directory of Open Access Journals (Sweden)

    Mark Burton

    2012-01-01

Machine learning has increasingly been used with microarray gene expression data for the development of classifiers using a variety of methods. However, method comparisons on cross-study datasets are very scarce. This study compares the performance of seven classification methods, and the effect of voting, for predicting metastasis outcome in breast cancer patients in three situations: within the same dataset, or across datasets on similar or dissimilar microarray platforms. Combining the classification results from seven classifiers into one voting decision performed significantly better during internal validation, as well as during external validation on similar microarray platforms, than the underlying classification methods. When validating between different microarray platforms, random forest, another voting-based method, proved to be the best-performing method. We conclude that voting-based classifiers provide an advantage with respect to classifying metastasis outcome in breast cancer patients.

  8. Evaluation of analytical reconstruction with a new gap-filling method in comparison to iterative reconstruction in [11C]-raclopride PET studies

    International Nuclear Information System (INIS)

    Tuna, U.; Johansson, J.; Ruotsalainen, U.

    2014-01-01

The aim of the study was (1) to evaluate reconstruction strategies for dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired on the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions of ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered-subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired on the HRRT PET scanner were used. Besides visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), the areas under the TACs and the binding potential (BPND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP-after-gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with

  9. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  10. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    Science.gov (United States)

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with stability selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blind to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of estimated individual networks inherent to existing methods such as the Fisher Z test, and the issue of JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, all three are applied to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.

  11. A comparison of interface tracking methods

    International Nuclear Information System (INIS)

    Kothe, D.B.; Rider, W.J.

    1995-01-01

In this paper we provide a direct comparison of several important algorithms designed to track fluid interfaces, and in the process we propose improved criteria by which these methods are to be judged. We compare and contrast the behavior of the following interface tracking methods: high-order monotone capturing schemes, level set methods, volume-of-fluid (VOF) methods, and particle-based (particle-in-cell, or PIC) methods. We compare these methods by first applying a set of standard test problems, then by applying a new set of enhanced problems designed to expose the limitations and weaknesses of each method. We find that the properties of these methods are not adequately assessed until they are tested with flows having spatial and temporal vorticity gradients. Our results indicate that the particle-based methods are easily the most accurate of those tested. Their practical use, however, is often hampered by their memory and CPU requirements. Particle-based methods employing particles only along interfaces also have difficulty dealing with gross topology changes. Full PIC methods, on the other hand, do not in general have topology restrictions. Following the particle-based methods are VOF volume tracking methods, which are reasonably accurate, physically based, robust, low in cost, and relatively easy to implement. Recent enhancements to VOF methods using multidimensional interface reconstruction and improved advection provide excellent results on a wide range of test problems.

  12. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  13. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  14. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
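The core difficulty described in the three records above can be shown in a few lines: when the dimension exceeds the sample size, the sample covariance matrix is singular and its determinant collapses to zero, so a regularized estimator is needed. The linear shrinkage toward the identity used below is one simple member of the family of estimators such studies compare, not one of the paper's specific eight methods:

```python
def sample_cov(X):
    """Sample covariance (MLE, divisor n) of a list of observation rows."""
    n, p = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    return [[sum((X[k][i] - mean[i]) * (X[k][j] - mean[j])
                 for k in range(n)) / n
             for j in range(p)] for i in range(p)]

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[piv][i]) < 1e-12:
            return 0.0          # singular (to working precision)
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

def shrink(S, lam):
    """Linear shrinkage toward the identity: (1 - lam) * S + lam * I."""
    p = len(S)
    return [[(1 - lam) * S[i][j] + (lam if i == j else 0.0)
             for j in range(p)] for i in range(p)]

# p = 3 variables but only n = 2 observations: the sample covariance
# is rank-deficient, so its determinant is 0; shrinkage restores
# positive definiteness and hence a usable (log-)determinant.
X = [[1.0, 2.0, 3.0], [2.0, 4.0, 1.0]]
S = sample_cov(X)
print(det(S))                 # → 0.0
print(det(shrink(S, 0.1)) > 0)
```

In practice one works with the log-determinant for numerical stability; the choice of shrinkage intensity is exactly the kind of tuning the paper's guidelines (by sample size, dimension and correlation) address.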

  15. Comparison of gas dehydration methods based on energy ...

    African Journals Online (AJOL)

Comparison of gas dehydration methods based on energy consumption. ... This study compares three conventional methods of natural gas (Associated Natural Gas) dehydration to carry out ...

  16. Comparison of n-γ discrimination by zero-crossing and digital charge comparison methods

    International Nuclear Information System (INIS)

    Wolski, D.; Moszynski, M.; Ludziejewski, T.; Johnson, A.; Klamra, W.; Skeppstedt, Oe.

    1995-01-01

A comparative study of n-γ discrimination by the digital charge comparison and zero-crossing methods was carried out for a 130 mm diameter, 130 mm high BC501A liquid scintillator coupled to a 130 mm diameter XP4512B photomultiplier. The high quality of the tested detector was reflected in a photoelectron yield of 2300±100 phe/MeV and excellent n-γ discrimination properties, with energy discrimination thresholds corresponding to very low neutron (or electron) energies. The superiority of the zero-crossing method was demonstrated for the n-γ discrimination method alone, as well as for simultaneous separation by the pulse shape discrimination and time-of-flight methods, down to about 30 keV recoil electron energy. The digital charge comparison method fails over a large dynamic range of energy, and its separation is only weakly improved by the time-of-flight method at low energies. (orig.)

  17. Structured Feedback Training for Time-Out: Efficacy and Efficiency in Comparison to a Didactic Method.

    Science.gov (United States)

    Jensen, Scott A; Blumberg, Sean; Browning, Megan

    2017-09-01

Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video-modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video-modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though the findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post-training. Findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.

  18. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  19. Comparison of two methods of quantitation in human studies of biodistribution and radiation dosimetry

    International Nuclear Information System (INIS)

    Smith, T.

    1992-01-01

A simple method of quantitating organ radioactivity content for dosimetry purposes, based on relationships between the organ count rate and the initial whole-body count rate, has been compared with a more rigorous method of absolute quantitation using a transmission scanning technique. Comparisons were made on the basis of organ uptake (% administered activity) and resultant organ radiation doses (mGy MBq^-1) in 6 normal male volunteers given a 99mTc-labelled myocardial perfusion imaging agent intravenously at rest and following exercise. In these studies, estimates of individual organ uptakes by the simple method were in error by between +24 and -16% compared with the more accurate method. However, the errors in organ dose values were somewhat smaller, and the effective dose was correct to within 3%. (Author)
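The "simple method" in this record amounts to a count-rate ratio. A minimal sketch with hypothetical count rates (attenuation and background are ignored here, which is consistent with the sizeable per-organ errors the abstract reports):

```python
def organ_uptake_percent(organ_cps, initial_whole_body_cps):
    """Simple-method estimate: organ activity as a percentage of the
    administered activity, from the ratio of the organ count rate to
    the initial whole-body count rate (no attenuation correction)."""
    return 100.0 * organ_cps / initial_whole_body_cps

# Hypothetical count rates, not the study's data
print(organ_uptake_percent(1500.0, 50000.0))  # → 3.0
```

The more rigorous comparator in the study replaces this ratio with absolute quantitation via transmission scanning; the abstract's point is that the cheap ratio, while off by up to ~24% per organ, still yields an effective dose within 3%.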

  20. Improved methods for the mathematically controlled comparison of biochemical systems

    Directory of Open Access Journals (Sweden)

    Schwacke John H

    2004-06-01

The method of mathematically controlled comparison provides a structured approach for the comparison of alternative biochemical pathways with respect to selected functional effectiveness measures. Under this approach, alternative implementations of a biochemical pathway are modeled mathematically, forced to be equivalent through the application of selected constraints, and compared with respect to selected functional effectiveness measures. While the method has been applied successfully in a variety of studies, we offer recommendations for improvements to the method that (1) relax the requirement to define constraints sufficient to remove all degrees of freedom in forming the equivalent alternative, (2) facilitate generalization of the results, thus avoiding the need to condition those findings on the selected constraints, and (3) provide additional insights into the effect of selected constraints on the functional effectiveness measures. We present improvements to the method and related statistical models, apply the method to a previously conducted comparison of network regulation in the immune system, and compare our results to those previously reported.

  1. ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON

    Directory of Open Access Journals (Sweden)

    Liliana liliana

    2007-01-01

In computer graphics applications, a method often used to produce realistic images is ray tracing. Ray tracing models not only local illumination but also global illumination: local illumination accounts only for ambient, diffuse and specular effects, while global illumination also accounts for mirroring and transparency; local illumination considers light from the lamp(s) only, whereas global illumination also considers light from other objects. The objects usually modeled are primitive objects and mesh objects. The advantage of mesh modeling is its varied, interesting and lifelike shapes; a mesh contains many primitive objects such as triangles or, more rarely, squares. A problem in mesh object modeling is the long rendering time: every ray must be checked against a large number of the mesh's triangles, and with the rays spawned by other objects the number of rays to trace grows further, increasing the rendering time. To solve this problem, this research develops new methods to speed up the rendering of mesh objects: angle comparison and distance comparison. These methods reduce the number of ray checks, since rays predicted not to intersect the mesh are not tested against it. With angle comparison, a small comparison angle makes rendering fast; the disadvantage is that if the triangles are large, some triangles will be corrupted. With a bigger comparison angle, mesh corruption can be avoided, but the rendering time becomes longer than without the comparison. With distance comparison, the rendering time is less than without comparison, and no triangles are corrupted.
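The angle-comparison idea can be sketched as a culling test: a triangle is handed to the full (more expensive) ray-triangle intersection routine only if the direction toward it lies within a threshold angle of the ray direction. The centroid-based test and the 5-degree threshold below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def maybe_hits(ray_origin, ray_dir, centroid, max_angle=5.0):
    """Angle-comparison culling: keep the triangle for the full
    intersection test only if its centroid direction is within
    max_angle of the ray.  A small max_angle speeds rendering but can
    wrongly cull large triangles, as the abstract notes."""
    to_tri = [c - o for c, o in zip(centroid, ray_origin)]
    return angle_deg(ray_dir, to_tri) <= max_angle

ray_o, ray_d = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(maybe_hits(ray_o, ray_d, (0.0, 0.1, 5.0)))   # near the ray axis
print(maybe_hits(ray_o, ray_d, (4.0, 0.0, 1.0)))   # far off-axis
```

The corruption trade-off is visible in this sketch: a large triangle can intersect the ray even when its centroid lies outside the cone, which is why widening the angle (at a cost in speed) avoids artifacts.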

  2. Aggregation Methods in International Comparisons

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2001-01-01

This paper reviews the progress that has been made over the past decade in understanding the nature of the various multilateral international comparison methods. Fifteen methods are discussed and subjected to a system of ten tests. In addition, attention is paid to recently developed

  3. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

IGARSS 2015, Milan, Italy, 26-31 July 2015. B.P. Salmon, W. Kleynhans, C.P. Schwegmann and J.C. Olivier; School of Engineering and ICT, University of Tasmania, Australia; ...
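For reference, the basic quantities read off a confusion matrix cm[true][predicted] can be computed directly; the two-class matrix below is hypothetical, not from the paper:

```python
def confusion_metrics(cm):
    """Overall accuracy and per-class recall (producer's accuracy)
    from a square confusion matrix indexed as cm[true][predicted]."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    accuracy = correct / total
    recall = [cm[i][i] / sum(cm[i]) for i in range(len(cm))]
    return accuracy, recall

# Hypothetical two-class classification result
cm = [[40, 10],
      [5, 45]]
acc, rec = confusion_metrics(cm)
print(acc, rec)
```

A "proper comparison among methods", in the sense of the title, means deriving such metrics from each method's confusion matrix on the same data before ranking the methods.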

  4. A comparison of two instructional methods for drawing Lewis Structures

    Science.gov (United States)

    Terhune, Kari

Two instructional methods for teaching Lewis structures were compared: the Direct Octet Rule Method (DORM) and the Commonly Accepted Method (CAM). The DORM gives the number of bonds and the number of nonbonding electrons immediately, while the CAM involves moving electron pairs from nonbonding to bonding positions, if necessary. The research question was as follows: will high school chemistry students draw more accurate Lewis structures using the DORM or the CAM? Students in Regular Chemistry 1 (N = 23), Honors Chemistry 1 (N = 51) and Chemistry 2 (N = 15) at an urban high school were the study participants. An identical pretest and posttest were given before and after instruction. Students received two days of instruction with either the DORM (N = 45), the treatment method, or the CAM (N = 44), the control. After the posttest, 15 students were interviewed using a semistructured interview process. The pretest/posttest consisted of 23 numerical-response questions and 2 to 6 free-response questions graded using a rubric. A two-way ANOVA showed a significant interaction effect between the groups and the methods, F(1, 70) = 10.960, p = 0.001. Post hoc comparisons using the Bonferroni pairwise comparison showed that Regular Chemistry 1 students demonstrated larger gain scores when they had been taught the CAM (mean difference = 3.275, SE = 1.324), while Honors Chemistry 1 students performed better with the DORM, perhaps due to better math skills, enhanced working memory, and better metacognitive skills. Regular Chemistry 1 students performed better with the CAM, perhaps because it is more visual. Teachers may want to use the CAM or a direct-pairing method to introduce the topic, and use the DORM in advanced classes when a correct structure is needed quickly.

  5. Comparison of microstickies measurement methods. Part I, sample preparation and measurement methods

    Science.gov (United States)

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R.A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

Recently, we completed a project on the comparison of macrostickies measurement methods. Based on the success of that project, we decided to embark on this new project on the comparison of microstickies measurement methods. When we started this project, there were some concerns and doubts, principally due to the lack of an accepted definition of microstickies. However, we...

  6. Experience from the comparison of two PSA-studies

    International Nuclear Information System (INIS)

    Holmberg, J.; Pulkkinen, U.

    2001-03-01

Two probabilistic safety assessments (PSAs) made for nearly identical reactor units (Forsmark 3 and Oskarshamn 3) have been compared. Two different analysis teams made the PSAs, and the analyses turned out quite different. The goal of the study is to identify, clarify and explain differences between PSA studies. The purpose is to understand limitations and uncertainties in PSA, to explain the reasons for differences between PSA studies, and to give recommendations both for the comparison of PSA studies and for improving PSA methodology. The reviews were made by reading the PSA documentation, using the computer models and interviewing persons involved in the projects. The method and findings were discussed within the project group. Both the PSA projects and various parts of the PSA models were reviewed. A major finding was that the two projects had different purposes and thus different resources, scopes and even methods. The study shows that comparison of PSA results from different plants is normally not meaningful: it takes very deep knowledge of the PSA studies to compare the results, and one usually has to ensure that the compared studies have the same scope and are based on the same analysis methods. Harmonisation of PSA methodology is recommended in the presentation of results, the presentation of methods, the scope with its main limitations and assumptions, and the definitions of end states, initiating events and common cause failures. This would facilitate the comparison of studies. Methods for validating PSA for different application areas should be developed. The developed PSA review standards can be applied for a general validation of a study. The real feasibility of PSA can ultimately be evaluated only through practical applications. The PSA documentation and models can be developed to facilitate communication between PSA experts and users. In any application, consultation with the PSA expert is nevertheless needed. Many

  7. A comparison of non-invasive versus invasive methods of ...

    African Journals Online (AJOL)

    Puneet Khanna

for Hb estimation from the laboratory [total haemoglobin mass (tHb)] and arterial blood gas (ABG) machine (aHb), using ... A comparison of non-invasive versus invasive methods of haemoglobin estimation in patients undergoing intracranial surgery ... making decisions for blood transfusions based on these results.

  8. A Simplified Version of the Fuzzy Decision Method and its Comparison with the Paraconsistent Decision Method

    Science.gov (United States)

    de Carvalho, Fábio Romeu; Abe, Jair Minoro

    2010-11-01

Two recent non-classical logics have been used for decision making: fuzzy logic and the paraconsistent annotated evidential logic Et. In this paper we present a simplified version of the fuzzy decision method and its comparison with the paraconsistent one. Paraconsistent annotated evidential logic Et, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications, such as information technology, robotics, artificial intelligence, production engineering and decision making. Intuitively, an Et logic formula is of the type p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) found in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian unit square (CUS). This set, equipped with an order relation similar to that of the real numbers, forms a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It attempts to systematize the study of knowledge, seeking mainly to study fuzzy knowledge (you do not know what it means) and to distinguish it from imprecise knowledge (you know what it means, but not its exact value). This logic is similar to the paraconsistent annotated one in that it attributes a numeric value to each proposition, but only one value rather than two, so we can say it is a one-valued logic. This number translates the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X, characterized by a function f(x); for each element x ∈ X, we have y = f(x) ∈ [0, 1]. The number y is called the degree of pertinence of x in A. Decision-making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study of a simplified
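A small sketch of the Et annotations described above. The derived degree of certainty (a - b) and degree of contradiction (a + b - 1) are commonly used in the Et decision literature but are an assumption here, since the abstract itself only introduces the (a, b) pairs:

```python
def degrees(a, b):
    """For an annotation (a, b) in the unit square -- a = degree of
    favorable evidence, b = degree of contrary evidence -- return the
    degree of certainty a - b and the degree of contradiction
    a + b - 1 (both in [-1, 1])."""
    return a - b, a + b - 1

print(degrees(1.0, 0.0))  # full belief: maximal certainty, no contradiction
print(degrees(1.0, 1.0))  # inconsistent: both evidences maximal
```

A fuzzy annotation, by contrast, would carry the single membership value y = f(x) from the abstract, which is why the paper calls fuzzy logic one-valued and Et two-valued.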

  9. Comparison of sampling methods for the assessment of indoor microbial exposure

    DEFF Research Database (Denmark)

    Frankel, M; Timm, Michael; Hansen, E W

    2012-01-01

    revealed. This study thus facilitates comparison between methods and may therefore be used as a frame of reference when studying the literature or when conducting further studies on indoor microbial exposure. Results also imply that the relatively simple EDC method for the collection of settled dust may...

  10. A comparison of methods for teaching receptive language to toddlers with autism.

    Science.gov (United States)

    Vedora, Joseph; Grandelski, Katrina

    2015-01-01

    The use of a simple-conditional discrimination training procedure, in which stimuli are initially taught in isolation with no other comparison stimuli, is common in early intensive behavioral intervention programs. Researchers have suggested that this procedure may encourage the development of faulty stimulus control during training. The current study replicated previous work that compared the simple-conditional and the conditional-only methods to teach receptive labeling of pictures to young children with autism spectrum disorder. Both methods were effective, but the conditional-only method required fewer sessions to mastery. © Society for the Experimental Analysis of Behavior.

  11. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  12. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
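
    The "design with the true value" reviewed in this abstract can be illustrated with a small sketch: replicate measurements of a phantom whose true value is known, from which each algorithm's bias and repeatability are estimated. All numbers are invented, and the repeatability coefficient convention (RC = 2.77 × within-case SD) is an assumption for illustration, not taken from the paper.

```python
# Hypothetical phantom study: estimate bias and repeatability
# coefficient (RC) for a QIB algorithm from replicated measurements.
import statistics

def bias_and_rc(replicates, truth):
    """replicates: list of replicate-measurement lists, one per case."""
    all_errors = [m - truth for case in replicates for m in case]
    bias = statistics.mean(all_errors)
    # pooled within-case variance across cases
    within_var = statistics.mean(
        [statistics.variance(case) for case in replicates]
    )
    rc = 2.77 * within_var ** 0.5
    return bias, rc

# two fictitious algorithms measuring a 10.0 mm phantom lesion
alg_a = [[10.1, 10.3, 9.9], [10.2, 10.0, 10.4]]
alg_b = [[9.0, 9.8, 10.6], [10.9, 9.2, 10.3]]
bias_a, rc_a = bias_and_rc(alg_a, truth=10.0)
bias_b, rc_b = bias_and_rc(alg_b, truth=10.0)
```

    A smaller RC means two replicate measurements of the same case are expected to agree more closely, which is one of the precision criteria such comparisons report.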

  13. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data

    DEFF Research Database (Denmark)

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-01-01

    ). There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold...... LCA and SNOB LCA). METHODS: The performance of these three methods was compared: (i) quantitatively using the number of subgroups detected, the classification probability of individuals into subgroups, the reproducibility of results, and (ii) qualitatively using subjective judgments about each program...... classify individuals into those subgroups. CONCLUSIONS: Our subjective judgement was that Latent Gold offered the best balance of sensitivity to subgroups, ease of use and presentation of results with these datasets but we recognise that different clustering methods may suit other types of data...

  14. Comparison of Nested-PCR technique and culture method in ...

    African Journals Online (AJOL)

    USER

    2010-04-05

    Apr 5, 2010 ... Full Length Research Paper. Comparison of ... The aim of the present study was to evaluate the diagnostic value of nested PCR in genitourinary ... method. Based on obtained results, the positivity rate of urine samples in this study was 5.0% by using culture and PCR methods and 2.5% for acid fast staining.

  15. A comparison of three time-domain anomaly detection methods

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance - the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).
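
    As an illustration of the third method above, here is a minimal sketch of Wald's sequential probability ratio test applied to a change in the standard deviation of Gaussian residuals. It is not the authors' implementation; the error rates, thresholds, and signal parameters are assumed for the sketch.

```python
# Hedged sketch: SPRT on AR-model residuals, testing
# H0: std = s0 against H1: std = s1 (> s0, the anomaly).
import math, random

def sprt_variance(residuals, s0, s1, alpha=0.001, beta=0.1):
    """Return the index at which H1 (anomaly) is accepted, or None."""
    upper = math.log((1 - beta) / alpha)   # accept H1: raise alarm
    lower = math.log(beta / (1 - alpha))   # accept H0: restart the test
    llr = 0.0
    for i, x in enumerate(residuals):
        # log-likelihood ratio of one zero-mean Gaussian sample
        llr += math.log(s0 / s1) + 0.5 * x * x * (1 / s0**2 - 1 / s1**2)
        if llr >= upper:
            return i
        if llr <= lower:
            llr = 0.0  # restart so the test keeps monitoring
    return None

random.seed(1)
quiet = [random.gauss(0, 1.0) for _ in range(300)]   # normal operation
noisy = [random.gauss(0, 2.0) for _ in range(300)]   # std doubles
alarm = sprt_variance(quiet + noisy, s0=1.0, s1=2.0)
```

    Restarting at the lower boundary keeps the test running indefinitely, which is what makes it usable on-line.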

  16. A comparison of three time-domain anomaly detection methods

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance - the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)

  17. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, N.; Bockting, S.; Hiemstra, D.

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  18. Study of n-γ discrimination by digital charge comparison method for a large volume liquid scintillator

    International Nuclear Information System (INIS)

    Moszynski, M.; Costa, G.J.; Guillaume, G.; Heusch, B.; Huck, A.; Ring, C.; Bizard, G.; Durand, D.; Peter, J.; Tamain, B.; El Masri, Y.; Hanappe, F.

    1992-01-01

    The study of n-γ discrimination for a large 41 volume BC501A liquid scintillator coupled to a 130 mm diameter XP4512B photomultiplier was carried out using the digital charge comparison method. Very good n-γ discrimination down to 100 keV of recoil electron energy was achieved. The measured relative intensity of the charge integrated in the slow component of the scintillation pulse and the photoelectron yield of the tested counter allow the figure of merit of the n-γ discrimination spectra to be calculated and compared with those measured experimentally. This shows that the main limitation on the n-γ discrimination is the statistical fluctuation of the photoelectron number in the slow component. The serious effect of distortion in the cable used to send the photomultiplier pulse to the n-γ discrimination electronics was also studied; this suggests that the length of RG58 cable should be limited to about 40 m to preserve high-quality n-γ discrimination. (orig.)

  19. Comparison of different methods for the solution of sets of linear equations

    International Nuclear Information System (INIS)

    Bilfinger, T.; Schmidt, F.

    1978-06-01

    The application of conjugate-gradient methods as novel general iterative methods for solving sets of linear equations with symmetric system matrices motivated this paper, in which these methods are compared with the conventional, variously accelerated Gauss-Seidel iteration. In addition, the direct Cholesky method was included in the comparison. The studies concerned mainly memory requirements, computing time, speed of convergence, and accuracy for different conditionings of the system matrices, which also reveals the sensitivity of the methods to truncation errors. (orig.)
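
    The kind of comparison described above can be sketched on a small model problem. The sketch below is illustrative only (it is not the report's code): conjugate gradients versus plain Gauss-Seidel on a symmetric positive-definite 1-D Poisson matrix, counting iterations to a fixed residual tolerance.

```python
# Illustrative iteration-count comparison on a small SPD system.
import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_iter=5000):
    x = np.zeros_like(b)
    n = len(b)
    for k in range(1, max_iter + 1):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            return x, k
    return x, max_iter

def conjugate_gradient(A, b, tol=1e-8):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for k in range(1, 2 * len(b) + 1):   # exact in at most n steps
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, 2 * len(b)

# 1-D Poisson matrix: tridiagonal with 2 on the diagonal, -1 off it
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_gs, iters_gs = gauss_seidel(A, b)
x_cg, iters_cg = conjugate_gradient(A, b)
```

    On this matrix the conjugate-gradient iteration reaches the tolerance in far fewer sweeps than unaccelerated Gauss-Seidel, which mirrors the kind of convergence-speed comparison the report describes.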

  20. A comparison of methods for evaluating structure during ship collisions

    International Nuclear Information System (INIS)

    Ammerman, D.J.; Daidola, J.C.

    1996-01-01

    A comparison is provided of the results of various methods for evaluating structure during a ship-to-ship collision. The baseline vessel utilized in the analyses is a 67.4-meter-long displacement hull struck by an identical vessel traveling at speeds ranging from 10 to 30 knots. The structural response of the struck vessel and the motion of both the struck and striking vessels are assessed by finite element analysis. These same results are then compared to predictions utilizing the "Tanker Structural Analysis for Minor Collisions" (TSAMC) method, the Minorsky method, and the Haywood collision process, and to full-scale tests. Consideration is given to the nature of structural deformation, absorbed energy, penetration, rigid body motion, and the virtual mass affecting the hydrodynamic response. Insights are provided with regard to the calibration of the finite element model, which was achievable by utilizing the more empirical analyses, and the extent to which finite element analysis is able to simulate the entire collision event. 7 refs., 8 figs., 4 tabs

  1. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    Science.gov (United States)

    Morimoto, Emi; Namerikawa, Susumu

    The most characteristic recent trend in bidding and pricing behavior is the increasing number of bidders just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bids, it is therefore the difference between the price set by the criteria for low-price bidding investigations and the execution price. In practice, bidders' strategies and behavior have been constrained by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains in general use, so two standard estimation methods coexist in Japan. In this study, we performed a statistical analysis of the bid information for civil engineering works of the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis raises several issues relating bidding and pricing behavior to the estimation method used in Japanese public works bids. The two standard estimation methods produce different results for the number of bidders (the bid/no-bid decision) and for the distribution of bid prices (the markup decision). The comparison of bid price distributions showed that, for large-sized public works, the percentage of bids concentrated at the criteria for low-price bidding investigations tends to be higher under the unit-price-type estimation method than under the accumulated estimation method. At the same time, the number of bidders for public works estimated by unit price tends to increase significantly; the unit-price estimation itself is likely one of the factors construction companies weigh when deciding whether to participate in a bidding.
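
    The distribution comparison described above boils down to a simple statistic: the share of bids landing just above the low-price investigation threshold. The sketch below illustrates that computation with invented numbers; the bid values, threshold, and band width are all assumptions, not data from the study.

```python
# Hypothetical illustration: share of bids concentrated just above
# the low-price bidding investigation threshold, for two methods.

def share_near_threshold(bids, threshold, band=0.02):
    """Fraction of bids within `band` (relative) above the threshold."""
    near = [b for b in bids if threshold <= b <= threshold * (1 + band)]
    return len(near) / len(bids)

# fictitious bid prices, normalised by the engineer's estimate
unit_price_bids = [0.801, 0.805, 0.808, 0.81, 0.86, 0.95]
accumulated_bids = [0.802, 0.85, 0.88, 0.91, 0.93, 0.97]
threshold = 0.80  # assumed investigation-criteria price ratio

share_unit = share_near_threshold(unit_price_bids, threshold)
share_accum = share_near_threshold(accumulated_bids, threshold)
```

    With these made-up numbers the unit-price sample shows the heavier concentration at the threshold, the pattern the abstract reports for large-sized works.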

  2. Differential and difference equations a comparison of methods of solution

    CERN Document Server

    Maximon, Leonard C

    2016-01-01

    This book, intended for researchers and graduate students in physics, applied mathematics and engineering, presents a detailed comparison of the important methods of solution for linear differential and difference equations - variation of constants, reduction of order, Laplace transforms and generating functions - bringing out the similarities as well as the significant differences in the respective analyses. Equations of arbitrary order are studied, followed by a detailed analysis for equations of first and second order. Equations with polynomial coefficients are considered and explicit solutions for equations with linear coefficients are given, showing significant differences in the functional form of solutions of differential equations from those of difference equations. An alternative method of solution involving transformation of both the dependent and independent variables is given for both differential and difference equations. A comprehensive, detailed treatment of Green’s functions and the associat...

  3. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, and extremely low power, and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented alongside measured data from devices.
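
    The first-order impulse response model mentioned above has a well-known textbook form: a uniform interdigital transducer with Np finger pairs produces a sinc-shaped frequency response centred on the synchronous frequency f0. The sketch below uses that textbook form with an assumed centre frequency and finger count; it is not taken from the NASA models.

```python
# Hedged sketch of a first-order (impulse response) IDT model.
import math

def idt_response(f, f0, n_pairs):
    """Normalised |sin(x)/x| magnitude response of a uniform IDT."""
    x = n_pairs * math.pi * (f - f0) / f0
    if x == 0.0:
        return 1.0
    return abs(math.sin(x) / x)

f0 = 100e6  # assumed 100 MHz centre frequency
resp_centre = idt_response(f0, f0, n_pairs=50)
resp_offband = idt_response(1.05 * f0, f0, n_pairs=50)
```

    The response is maximal at f0 and falls off away from it, with more finger pairs giving a narrower passband; second-order matrix models refine this by adding effects such as internal finger reflections.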

  4. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and among its components the descriptor is vital. The performance of moments as powerful descriptors had not previously been discussed in terms of logo recognition, so it is unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we investigate the relations between logos under different transforms and moments, and determine which moments are suited to logos under different transforms. The open datasets employed are from the University of Maryland. The comparisons based on moments are carried out for logos with noise, rotation, scaling, and combined rotation and scaling.
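
    A minimal sketch of the kind of moment descriptor compared in such studies is given below: scale-normalised central moments of a binary image and the first two Hu invariants, which are unchanged under translation and uniform scaling. The toy square "logo" is an invented example; real evaluations would use the Maryland dataset images.

```python
# Sketch of a moments-based descriptor: first two Hu invariants.
import numpy as np

def hu_first_two(img):
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00

    def eta(p, q):  # scale-normalised central moment
        mu = ((xs - xc) ** p * (ys - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

logo = np.zeros((32, 32))
logo[8:24, 8:24] = 1.0        # a square "logo"
big = np.zeros((64, 64))
big[16:48, 16:48] = 1.0       # the same logo, scaled 2x

h_logo = hu_first_two(logo)
h_big = hu_first_two(big)
```

    The two descriptors nearly coincide despite the 2x rescaling, which is exactly the invariance property that makes moments attractive for logo matching.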

  5. Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach: The Methodology.

    Science.gov (United States)

    Jaciw, Andrew P

    2016-06-01

    Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. Students in Grades K-3 in 79 schools in Tennessee; students in Grades 4-8 in 82 schools in Alabama. Grades K-3 Stanford Achievement Test (SAT) in reading and math scores; Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.

  6. A comparison of surveillance methods for small incidence rates

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Woodall, William H.; Reynolds, Marion R.

    2008-05-15

    A number of methods have been proposed to detect an increasing shift in the incidence rate of a rare health event, such as a congenital malformation. Among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the performance of the Sets method and its modifications to the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criterion. We used the steady-state average run length to measure chart performance instead of the average run length, which was used in nearly all previous comparisons involving the Sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state average run length performance than the Sets method and its modifications for the extensive number of cases considered.
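
    The Bernoulli CUSUM chart recommended above can be sketched directly: each trial is 0 (no event) or 1 (event), and the chart accumulates log-likelihood-ratio scores for a shift from the in-control rate p0 to the out-of-control rate p1, alarming when the sum crosses a threshold h. The rates, threshold, and trial sequence below are illustrative assumptions, not the paper's designed values.

```python
# Hedged sketch of a Bernoulli CUSUM chart for rare-event surveillance.
import math

def bernoulli_cusum(trials, p0, p1, h):
    """Return the 1-based trial index of the first alarm, or None."""
    llr1 = math.log(p1 / p0)              # score for an event
    llr0 = math.log((1 - p1) / (1 - p0))  # score for a non-event
    s = 0.0
    for t, x in enumerate(trials, start=1):
        s = max(0.0, s + (llr1 if x else llr0))
        if s >= h:
            return t
    return None

# an in-control stretch at rate p0, then a stretch at the raised rate p1
trials = [0] * 200 + [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
alarm_at = bernoulli_cusum(trials, p0=0.01, p1=0.10, h=4.0)
```

    Clamping the statistic at zero is what distinguishes the CUSUM from a plain likelihood-ratio sum: long in-control stretches cannot build up credit that would delay a later alarm.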

  7. Comparison of methods to identify crop productivity constraints in developing countries. A review

    NARCIS (Netherlands)

    Kraaijvanger, R.G.M.; Sonneveld, M.P.W.; Almekinders, C.J.M.; Veldkamp, T.

    2015-01-01

    Selecting a method for identifying actual crop productivity constraints is an important step for triggering innovation processes. Applied methods can be diverse and although such methods have consequences for the design of intervention strategies, documented comparisons between various methods are

  8. Removal of phenol from water : a comparison of energization methods

    NARCIS (Netherlands)

    Grabowski, L.R.; Veldhuizen, van E.M.; Rutgers, W.R.

    2005-01-01

    Direct electrical energization methods for removal of persistent substances from water are under investigation in the framework of the ytriD-project. The emphasis of the first stage of the project is the energy efficiency. A comparison is made between a batch reactor with a thin layer of water and

  9. Two Methods for Turning and Positioning and the Effect on Pressure Ulcer Development: A Comparison Cohort Study.

    Science.gov (United States)

    Powers, Jan

    2016-01-01

    We evaluated 2 methods for patient positioning on the development of pressure ulcers; specifically, standard of care (SOC) using pillows versus a patient positioning system (PPS). The study also compared turning effectiveness as well as nursing resources related to patient positioning and nursing injuries. A nonrandomized comparison design was used for the study. Sixty patients from a trauma/neurointensive care unit were included in the study. Patients were randomly assigned to 1 of 2 teams per standard bed placement practices at the institution. Patients were identified for enrollment in the study if they were immobile and mechanically ventilated with anticipation of 3 days or more on mechanical ventilation. Patients were excluded if they had a preexisting pressure ulcer. Patients were evaluated daily for the presence of pressure ulcers. Data were collected on the number of personnel required to turn patients. Once completed, the angle of the turn was measured. The occupational health database was reviewed to determine nurse injuries. The final sample size was 59 (SOC = 29; PPS = 30); there were no statistical differences between groups for age (P = .10), body mass index (P = .65), gender (P = .43), Braden Scale score (P = .46), or mobility score (P = .10). There was a statistically significant difference in the number of hospital-acquired pressure ulcers between turning methods (6 in the SOC group vs 1 in the PPS group; P = .042). The number of nurses needed for the SOC method was significantly higher than the PPS (P ≤ 0.001). The average turn angle achieved using the PPS was 31.03°, while the average turn angle achieved using SOC was 22.39°. The difference in turn angle from initial turn to 1 hour after turning in the SOC group was statistically significant (P patients to prevent development of pressure ulcers.

  10. Measurement of Capsaicinoids in Chiltepin Hot Pepper: A Comparison Study between Spectrophotometric Method and High Performance Liquid Chromatography Analysis

    Directory of Open Access Journals (Sweden)

    Alberto González-Zamora

    2015-01-01

    Direct spectrophotometric determination of capsaicinoid content in Chiltepin pepper was investigated as a possible alternative to HPLC analysis. Capsaicinoids were extracted from red ripe and green Chiltepin fruit with acetonitrile and evaluated quantitatively using the HPLC method with capsaicin and dihydrocapsaicin standards. Three samples with different treatments were successfully analyzed for their capsaicinoid content by these methods. HPLC-DAD revealed that capsaicin, dihydrocapsaicin, and nordihydrocapsaicin comprised up to 98% of the total capsaicinoids detected. The absorbance of the diluted samples was read on a spectrophotometer at 215–300 nm and monitored at 280 nm. We report herein the comparison between traditional UV assays and HPLC-DAD methods for the determination of the molar absorptivity coefficients of capsaicin (ε280 = 3,410 and 3,720 M−1 cm−1) and dihydrocapsaicin (ε280 = 4,175 and 4,350 M−1 cm−1), respectively. Statistical comparisons were performed using regression analyses (ordinary linear regression and Deming regression) and Bland-Altman analysis. Comparative data for pungency were determined spectrophotometrically and by HPLC on samples ranging from 29.55 to 129 mg/g, with a correlation of 0.91. These results indicate that the two methods significantly agree. The described spectrophotometric method can be routinely used for total capsaicinoid analysis and quality control in agricultural and pharmaceutical analysis.
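
    The Bland-Altman analysis used above has a very small computational core: the bias is the mean paired difference between the two assays, and the 95% limits of agreement are bias ± 1.96 SD of those differences. The sketch below illustrates it with invented capsaicinoid values, not the paper's data.

```python
# Minimal Bland-Altman agreement sketch for two measurement methods.
import statistics

def bland_altman(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# fictitious total capsaicinoid contents (mg/g) for paired samples
uv =   [30.1, 45.2, 60.3, 78.8, 95.0, 110.4, 128.9]
hplc = [29.6, 44.0, 61.5, 80.1, 93.2, 112.0, 129.5]
bias, lo, hi = bland_altman(uv, hplc)
```

    Two methods "agree" in the Bland-Altman sense when the limits of agreement are narrow enough to be clinically or analytically acceptable; correlation alone, as the abstract's r = 0.91 shows, is a complementary rather than sufficient criterion.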

  11. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
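
    The IBSC pipeline described above (smooth the attenuation-corrected image with a scatter function, scale by a scatter fraction, subtract) can be sketched schematically. The kernel shape, its width, and the scatter fraction below are assumptions for illustration; the paper's actual scatter function and fraction are not reproduced here.

```python
# Schematic IBSC-like correction: estimate scatter by smoothing the
# image with a broad kernel, scale by a scatter fraction, subtract.
import numpy as np

def ibsc_like_correction(image, kernel_width=5, scatter_fraction=0.3):
    # box-kernel convolution as a stand-in for the paper's scatter function
    kernel = np.ones((kernel_width, kernel_width)) / kernel_width**2
    pad = kernel_width // 2
    padded = np.pad(image, pad, mode="edge")
    smoothed = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            smoothed[i, j] = (
                padded[i:i + kernel_width, j:j + kernel_width] * kernel
            ).sum()
    scatter = scatter_fraction * smoothed      # estimated scatter image
    return np.clip(image - scatter, 0.0, None)

brain = np.zeros((32, 32))
brain[8:24, 8:24] = 100.0      # toy "gray matter" region
corrected = ibsc_like_correction(brain)
```

    Because the scatter estimate is derived from the reconstructed image itself, no second energy window acquisition is needed, which is what makes the image-based approach attractive for clinical routine.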

  12. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  13. VerSi. A method for the quantitative comparison of repository systems

    Energy Technology Data Exchange (ETDEWEB)

    Kaempfer, T.U.; Ruebel, A.; Resele, G. [AF-Consult Switzerland Ltd, Baden (Switzerland); Moenig, J. [GRS Braunschweig (Germany)

    2015-07-01

    Decision making and design processes for radioactive waste repositories are guided by safety goals that need to be achieved. In this context, the comparison of different disposal concepts can provide relevant support to better understand the performance of the repository systems. Such a task requires a method for a traceable comparison that is as objective as possible. We present a versatile method that allows for the comparison of different disposal concepts in potentially different host rocks. The condition for the method to work is that the repository systems are defined to a comparable level including designed repository structures, disposal concepts, and engineered and geological barriers which are all based on site-specific safety requirements. The method is primarily based on quantitative analyses and probabilistic model calculations regarding the long-term safety of the repository systems under consideration. The crucial evaluation criteria for the comparison are statistical key figures of indicators that characterize the radiotoxicity flux out of the so called containment-providing rock zone (einschlusswirksamer Gebirgsbereich). The key figures account for existing uncertainties with respect to the actual site properties, the safety relevant processes, and the potential future impact of external processes on the repository system, i.e., they include scenario-, process-, and parameter-uncertainties. The method (1) leads to an evaluation of the retention and containment capacity of the repository systems and its robustness with respect to existing uncertainties as well as to potential external influences; (2) specifies the procedures for the system analyses and the calculation of the statistical key figures as well as for the comparative interpretation of the key figures; and (3) also gives recommendations and sets benchmarks for the comparative assessment of the repository systems under consideration based on the key figures and additional qualitative

  14. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y), and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p-values < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-values < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed

  15. Comparison of Thermal Properties Measured by Different Methods

    International Nuclear Information System (INIS)

    Sundberg, Jan; Kukkonen, Ilmo; Haelldahl, Lars

    2003-04-01

    one or both methods cannot be excluded. For future investigations a set of thermal conductivity standard materials should be selected for testing using the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained, and be suitable for making samples of different shapes and volumes adjusted to different measurement techniques. Because of the large individual variations obtained in the results, comparisons of different methods should continue and include measurements of the temperature dependence of thermal properties, especially the specific heat. This should cover the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of the temperature dependence of the present rocks.

  16. Comparison of Thermal Properties Measured by Different Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan [Geo Innova AB, Linkoeping (Sweden); Kukkonen, Ilmo [Geological Survey of Finland, Helsinki (Finland); Haelldahl, Lars [Hot Disk AB, Uppsala (Sweden)

    2003-04-01

    one or both methods cannot be excluded. For future investigations a set of thermal conductivity standard materials should be selected for testing using the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained, and be suitable for making samples of different shapes and volumes adjusted to different measurement techniques. Because of the large individual variations obtained in the results, comparisons of different methods should continue and include measurements of the temperature dependence of thermal properties, especially the specific heat. This should cover the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of the temperature dependence of the present rocks.

  17. Quantitative comparison of two particle tracking methods in fluorescence microscopy images

    CSIR Research Space (South Africa)

    Mabaso, M

    2013-09-01

    Full Text Available that cannot be analysed efficiently by means of manual analysis. In this study we compare the performance of two computer-based tracking methods for tracking of bright particles in fluorescence microscopy image sequences. The methods under comparison are...

  18. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with {sup 99m}Tc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I{sub AC}{sup {mu}}{sup b} with Chang's attenuation correction factor. The scatter component image is estimated by convolving I{sub AC}{sup {mu}}{sup b} with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and {sup 99m}Tc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  19. Comparison of Standard and Fast Charging Methods for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Petr Chlebis

    2014-01-01

    Full Text Available This paper describes a comparison of standard and fast charging methods used in the field of electric vehicles, together with a comparison of their efficiency in terms of electrical energy consumption. The comparison was performed on a three-phase buck converter, which was designed for an EV fast charging station. The results were obtained by both mathematical and simulation methods. The laboratory model of the entire physical application, which will later be used to verify the simulation results, is currently under construction.

  20. A comparison of visual and quantitative methods to identify interstitial lung abnormalities

    OpenAIRE

    Kliment, Corrine R.; Araki, Tetsuro; Doyle, Tracy J.; Gao, Wei; Dupuis, Josée; Latourelle, Jeanne C.; Zazueta, Oscar E.; Fernandez, Isis E.; Nishino, Mizuki; Okajima, Yuka; Ross, James C.; Estépar, Raúl San José; Diaz, Alejandro A.; Lederer, David J.; Schwartz, David A.

    2015-01-01

    Background: Evidence suggests that individuals with interstitial lung abnormalities (ILA) on a chest computed tomogram (CT) may have an increased risk of developing a clinically significant interstitial lung disease (ILD). Although methods used to identify individuals with ILA on chest CT have included both automated quantitative and qualitative visual inspection methods, there has been no direct comparison between these two methods. To investigate this relationship, we created lung density met...

  1. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

    Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences for use in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  2. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.

    2013-01-01

    Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences for use in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  3. Attenuation correction for hybrid MR/PET scanners: a comparison study

    Energy Technology Data Exchange (ETDEWEB)

    Rota Kops, Elena [Forschungszentrum Jülich GmbH, Jülich (Germany); Ribeiro, Andre Santos [Imperial College London, London (United Kingdom); Caldeira, Liliana [Forschungszentrum Jülich GmbH, Jülich (Germany); Hautzel, Hubertus [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lukas, Mathias [Technische Universitaet Muenchen, Munich (Germany); Antoch, Gerald [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lerche, Christoph; Shah, Jon [Forschungszentrum Jülich GmbH, Jülich (Germany)

    2015-05-18

    Attenuation correction of PET data acquired in hybrid MR/PET scanners is still a challenge. Different methods have been adopted by several groups to obtain reliable attenuation maps (mu-maps). In this study we compare three methods: MGH, UCL, and Neural-Network. The MGH method is based on an MR/CT template obtained with the SPM8 software. The UCL method uses a database of MR/CT pairs. Both generate mu-maps from MP-RAGE images. The feed-forward neural network from Juelich (NN-Juelich) requires two UTE images; it generates segmented mu-maps. Data from eight subjects (S1-S8) measured in the Siemens 3T MR-BrainPET scanner were used. Corresponding CT images were acquired. The resulting mu-maps were compared against the CT-based mu-maps for each subject and method. Overlapped voxels, Dice similarity coefficients, D, for bone, soft-tissue and air regions, and relative difference images were calculated. The true positive (TP) recognized voxels for the whole head ranged from 79.9% (NN-Juelich, S7) to 92.1% (UCL method, S1). D values of the bone ranged from D=0.65 (NN-Juelich, S1) to D=0.87 (UCL method, S1). For S8 the MGH method failed (TP=76.4%; D=0.46 for bone). D values shared a common tendency in all subjects and methods to recognize soft-tissue as bone. The relative difference images showed a variation of -10.9% to +10.1%; for S8 and the MGH method the values were -24.5% and +14.2%. A preliminary comparison of three methods for the generation of mu-maps for MR/PET scanners is presented. The continuous methods (MGH, UCL) seem to generate reliable mu-maps, whilst the binary method seems to need further improvement. Future work will include more subjects, the reconstruction of corresponding PET data and their comparison.
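
The Dice similarity coefficient used above to score bone, soft-tissue and air overlap is straightforward to compute; a minimal sketch on toy binary masks (hypothetical arrays, not the study's data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), with D=1 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy 3x3 "bone" masks from a CT-based and an MR-derived mu-map
ct_bone = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
mr_bone = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(round(dice(ct_bone, mr_bone), 3))  # → 0.667
```

In practice the same function applies unchanged to full 3D voxel arrays.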

  4. Attenuation correction for hybrid MR/PET scanners: a comparison study

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Ribeiro, Andre Santos; Caldeira, Liliana; Hautzel, Hubertus; Lukas, Mathias; Antoch, Gerald; Lerche, Christoph; Shah, Jon

    2015-01-01

    Attenuation correction of PET data acquired in hybrid MR/PET scanners is still a challenge. Different methods have been adopted by several groups to obtain reliable attenuation maps (mu-maps). In this study we compare three methods: MGH, UCL, and Neural-Network. The MGH method is based on an MR/CT template obtained with the SPM8 software. The UCL method uses a database of MR/CT pairs. Both generate mu-maps from MP-RAGE images. The feed-forward neural network from Juelich (NN-Juelich) requires two UTE images; it generates segmented mu-maps. Data from eight subjects (S1-S8) measured in the Siemens 3T MR-BrainPET scanner were used. Corresponding CT images were acquired. The resulting mu-maps were compared against the CT-based mu-maps for each subject and method. Overlapped voxels, Dice similarity coefficients, D, for bone, soft-tissue and air regions, and relative difference images were calculated. The true positive (TP) recognized voxels for the whole head ranged from 79.9% (NN-Juelich, S7) to 92.1% (UCL method, S1). D values of the bone ranged from D=0.65 (NN-Juelich, S1) to D=0.87 (UCL method, S1). For S8 the MGH method failed (TP=76.4%; D=0.46 for bone). D values shared a common tendency in all subjects and methods to recognize soft-tissue as bone. The relative difference images showed a variation of -10.9% to +10.1%; for S8 and the MGH method the values were -24.5% and +14.2%. A preliminary comparison of three methods for the generation of mu-maps for MR/PET scanners is presented. The continuous methods (MGH, UCL) seem to generate reliable mu-maps, whilst the binary method seems to need further improvement. Future work will include more subjects, the reconstruction of corresponding PET data and their comparison.

  5. A review of methods of analysis in contouring studies for radiation oncology

    International Nuclear Information System (INIS)

    Jameson, Michael G.; Holloway, Lois C.; Metcalfe, Peter E.; Vial, Philip J.; Vinod, Shalini K.

    2010-01-01

    Full text: Inter-observer variability in anatomical contouring is the biggest contributor to uncertainty in radiation treatment planning. Contouring studies are frequently performed to investigate the differences between multiple contours on common datasets. There is, however, no widely accepted method for contour comparisons. The purpose of this study is to review the literature on contouring studies in the context of radiation oncology, with particular consideration of the contour comparison methods they employ. A literature search, not limited by date, was conducted using Medline and Google Scholar with the key words contour, variation, delineation, inter/intra observer, uncertainty and trial dummy-run. This review includes a description of the contouring processes and contour comparison metrics used. The use of different processes and metrics according to tumour site and other factors was also investigated, with limitations described. A total of 69 relevant studies were identified. The most common tumour sites were prostate (26), lung (10), head and neck cancers (8) and breast (7). The most common metric of comparison was volume, used 59 times, followed by dimension and shape, used 36 times, and centre of volume, used 19 times. Of all 69 publications, 67 used a combination of metrics and two used only one metric for comparison. No clear relationships between tumour site or any other factors that may influence the contouring process and the metrics used to compare contours were observed in the literature. Further studies are needed to assess the advantages and disadvantages of each metric in various situations.

  6. Comparison of three boosting methods in parent-offspring trios for genotype imputation using simulation study

    Directory of Open Access Journals (Sweden)

    Abbas Mikhchi

    2016-01-01

    Full Text Available Abstract Background Genotype imputation is an important process of predicting unknown genotypes, which uses a reference population with dense genotypes to predict missing genotypes for both human and animal genetic variations at a low cost. Machine learning methods, especially boosting methods, have been used in genetic studies to explore the underlying genetic profile of disease and build models capable of predicting missing values of a marker. Methods In this study, strategies and factors affecting the imputation accuracy of parent-offspring trios were compared from lower-density SNP panels (5 K) to a high-density (10 K) SNP panel using three different boosting methods, namely TotalBoost (TB), LogitBoost (LB) and AdaBoost (AB). The methods were employed using simulated data to impute the un-typed SNPs in parent-offspring trios. Four different datasets of G1 (100 trios with 5 K SNPs), G2 (100 trios with 10 K SNPs), G3 (500 trios with 5 K SNPs), and G4 (500 trios with 10 K SNPs) were simulated. In all four datasets all parents were genotyped completely, and offspring were genotyped with a lower-density panel. Results Comparison of the three methods showed that LB outperformed AB and TB in imputation accuracy. Computation times differed between methods; AB was the fastest algorithm. Higher SNP densities increased the accuracy of imputation. Larger numbers of trios (i.e. 500) improved the performance of LB and TB. Conclusions The three methods perform well in terms of imputation accuracy, and the denser chip is recommended for imputation of parent-offspring trios.
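
The boosting-based imputation idea, predicting an un-typed SNP from typed neighbouring markers, can be sketched with scikit-learn's AdaBoost (a simplified, hypothetical illustration: simulated genotypes and a made-up target rule, binary rather than three-class, and only AB of the three compared methods):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
n = 300
# hypothetical flanking genotypes coded 0/1/2 for n offspring
flank = rng.integers(0, 3, size=(n, 4))
# hypothetical target SNP determined by its two nearest neighbours (LD proxy)
target = (flank[:, 1] + flank[:, 2] >= 2).astype(int)

# train on offspring whose target SNP was typed, then impute the rest
model = AdaBoostClassifier(n_estimators=50, random_state=0)
model.fit(flank[:250], target[:250])
acc = model.score(flank[250:], target[250:])  # imputation accuracy on held-out offspring
print(f"imputation accuracy: {acc:.2f}")
```

LogitBoost and TotalBoost are not available in scikit-learn; the study used its own implementations of all three.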

  7. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
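
One of the four compared methods, Lexicase Selection, admits a compact sketch (hypothetical error matrix; the published algorithm also differs in how ties and case pools are managed):

```python
import random

def lexicase_select(errors, rng):
    """Select one individual: walk the fitness cases in random order, at each
    case keeping only the individuals with the best (lowest) error on it."""
    pool = list(range(len(errors)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in pool)
        pool = [i for i in pool if errors[i][c] == best]
        if len(pool) == 1:
            break
    return rng.choice(pool)

# per-individual error on each fitness case (rows: individuals, cols: cases)
errors = [[0, 5, 1],
          [2, 0, 0],
          [0, 5, 2]]
rng = random.Random(42)
winner = lexicase_select(errors, rng)
print(winner)
```

Because the case order is random, different calls can favour different specialists, which is the mechanism credited with preserving diversity.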

  8. A comparative study of different methods for calculating electronic transition rates

    Science.gov (United States)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.
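
For orientation, in the weak-coupling, high-temperature limit the equilibrium Fermi's golden rule rate for such charge-transfer models reduces to the familiar Marcus expression (standard textbook result, not taken from the paper):

```latex
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert V_{DA}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]
```

with electronic coupling $V_{DA}$, reorganization energy $\lambda$, and driving force $\Delta G^{\circ}$; the normal and inverted regimes mentioned above correspond to $-\Delta G^{\circ}<\lambda$ and $-\Delta G^{\circ}>\lambda$, respectively.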

  9. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    Science.gov (United States)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  10. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    Science.gov (United States)

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  11. Methacholine challenge test: Comparison of tidal breathing and dosimeter methods in children.

    Science.gov (United States)

    Mazi, Ahlam; Lands, Larry C; Zielinski, David

    2018-02-01

    The methacholine challenge test (MCT) is used to confirm, assess the severity of, and/or rule out asthma. Two MCT methods are described as equivalent by the American Thoracic Society (ATS): the tidal breathing and the dosimeter methods. However, the majority of adult studies suggest that individuals with asthma do not react at the same PC20 between the two methods. Additionally, the nebulizers used are no longer available, and studies suggest current nebulizers are not equivalent to these. Our study investigates the difference in positive MCT tests between three methods in a pediatric population. A retrospective chart review was performed of all MCT performed with spirometry at the Montreal Children's Hospital from January 2006 to March 2016. A comparison of the percentage of positive MCT tests with three methods, tidal breathing, APS dosimeter, and dose-adjusted (DA) dosimeter, was performed at different cutoff points up to 8 mg/mL. A total of 747 subjects performed the tidal breathing method, 920 subjects the APS dosimeter method, and 200 subjects the DA-dosimeter method. At a PC20 cutoff ≤4 mg/mL, the percentage of positive MCT was significantly higher using the tidal breathing method (76.3%) compared to the APS dosimeter (45.1%) and DA-dosimeter (65%) methods (P < 0.0001). The choice of nebulizer and technique significantly impacts the rate of positivity when using MCT to diagnose and assess asthma. The lack of direct comparison of techniques within the same individuals and clinical assessment should be addressed in future studies to standardize MCT methodology in children. © 2017 Wiley Periodicals, Inc.
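
The PC20 endpoint used above is conventionally obtained by log-linear interpolation between the last two methacholine concentrations (the standard ATS formula; the numbers below are hypothetical):

```python
import math

def pc20(c1, c2, r1, r2):
    """Provocative concentration causing a 20% fall in FEV1, by log-linear
    interpolation between the last concentration with a <20% fall (c1, r1)
    and the first with a >=20% fall (c2, r2); falls r1, r2 in percent."""
    return math.exp(math.log(c1) + (math.log(c2) - math.log(c1)) * (20 - r1) / (r2 - r1))

# e.g. a 12% fall at 1 mg/mL followed by a 26% fall at 4 mg/mL
print(round(pc20(1.0, 4.0, 12.0, 26.0), 2))  # → 2.21
```

The same formula applies regardless of the aerosol delivery method; the study's point is that the delivered dose per concentration step differs between nebulizers.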

  12. A comparison of in vivo and in vitro methods for determining availability of iron from meals

    International Nuclear Information System (INIS)

    Schricker, B.R.; Miller, D.D.; Rasmussen, R.R.; Van Campen, D.

    1981-01-01

    A comparison is made between in vitro and human and rat in vivo methods for estimating food iron availability. Complex meals formulated to replicate meals used by Cook and Monsen (Am J Clin Nutr 1976;29:859) in human iron availability trials were used in the comparison. The meals were prepared by substituting pork, fish, cheese, egg, liver, or chicken for beef in two basic test meals and were evaluated for iron availability using in vitro and rat in vivo methods. When the criterion for comparison was the ability to show statistically significant differences between iron availability in the various meals, there was substantial agreement between the in vitro and human in vivo methods. There was less agreement between the human in vivo and the rat in vivo methods, and between the in vitro and the rat in vivo methods. Correlation analysis indicated significant agreement between the in vitro and human in vivo methods. Correlations between the rat in vivo and human in vivo methods were also significant, but correlations between the in vitro and rat in vivo methods were less significant and, in some cases, not significant. The comparison supports the contention that the in vitro method allows a rapid, inexpensive, and accurate estimation of nonheme iron availability in complex meals

  13. Assessment of interchangeability rate between 2 methods of measurements: An example with a cardiac output comparison study.

    Science.gov (United States)

    Lorne, Emmanuel; Diouf, Momar; de Wilde, Robert B P; Fischer, Marc-Olivier

    2018-02-01

    The Bland-Altman (BA) and percentage error (PE) methods have been previously described to assess the agreement between 2 methods of medical or laboratory measurements. This type of approach raises several problems: the BA methodology constitutes a subjective approach to interchangeability, whereas the PE approach does not take into account the distribution of values over a range. We describe a new methodology that defines an interchangeability rate between 2 methods of measurement and cutoff values that determine the range of interchangeable values. We used simulated data and a previously published data set to demonstrate the concept of the method. The interchangeability rate of 5 different cardiac output (CO) pulse contour techniques (Wesseling method, LiDCO, PiCCO, Hemac method, and Modelflow) was calculated using our new method, in comparison with the reference pulmonary artery thermodilution CO. In our example, Modelflow, with a good interchangeability rate of 93% and a cutoff value of 4.8 L/min, was found to be interchangeable with the thermodilution method for >95% of measurements. Modelflow had a higher interchangeability rate compared to Hemac (93% vs 86%; P = .022) or the other monitors (Wesseling cZ = 76%, LiDCO = 73%, and PiCCO = 62%; P < .0001). Simulated data and reanalysis of a data set comparing 5 CO monitors against thermodilution CO showed that, depending on the repeatability of the reference method, the interchangeability rate combined with a cutoff value can be used to define the range of values over which interchangeability remains acceptable.
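
The two conventional criteria the authors contrast, Bland-Altman limits of agreement and the percentage error, can be computed in a few lines (hypothetical paired CO readings; PE as 1.96·SD of the differences over the mean CO):

```python
import numpy as np

ref = np.array([4.2, 5.1, 6.0, 4.8, 5.5, 6.3])    # thermodilution CO (L/min)
test = np.array([4.5, 4.9, 6.4, 4.6, 5.9, 6.0])   # pulse-contour CO (L/min)

diff = test - ref
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)        # Bland-Altman limits of agreement
pe = 100 * 1.96 * sd / ((ref + test).mean() / 2)  # percentage error

print(f"bias={bias:.2f}, LoA=({loa[0]:.2f}, {loa[1]:.2f}), PE={pe:.1f}%")
```

A PE below roughly 30% is the usual acceptability threshold for CO monitors; the interchangeability-rate approach proposed above replaces this single global criterion with a range-dependent one.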

  14. INTER LABORATORY COMBAT HELMET BLUNT IMPACT TEST METHOD COMPARISON

    Science.gov (United States)

    2018-03-26

    Final report, March 2018, by Tony J. Kayhart, Charles A. Hewitt, and Jonathan Cyganik. Impact data were collected in accordance with SAE standard J211-1, Instrumentation for Impact Test [4]; although the entire acceleration curve is collected, the project team's interest was solely in the peak response.

  15. A method for statistical comparison of data sets and its uses in analysis of nuclear physics data

    International Nuclear Information System (INIS)

    Bityukov, S.I.; Smirnova, V.V.; Krasnikov, N.V.; Maksimushkina, A.V.; Nikitenko, A.N.

    2014-01-01

    The authors propose a method for the statistical comparison of two data sets. The method is based on the statistical comparison of histograms. As an estimator of the quality of the decision made, it is proposed to use a value which can be called the probability that the decision (that the data sets are different) is correct.
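
A minimal stand-in for histogram-based comparison of two data sets is the Pearson chi-square homogeneity statistic on shared bins (my simplified illustration with made-up counts, not the authors' estimator):

```python
import numpy as np

def chi2_hist(h1, h2):
    """Pearson chi-square statistic for the homogeneity of two histograms
    that share the same binning."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    n1, n2 = h1.sum(), h2.sum()
    pooled = h1 + h2
    e1, e2 = pooled * n1 / (n1 + n2), pooled * n2 / (n1 + n2)
    return ((h1 - e1) ** 2 / e1).sum() + ((h2 - e2) ** 2 / e2).sum()

h1 = [12, 30, 45, 30, 13]   # bin counts, data set A
h2 = [10, 28, 50, 27, 15]   # bin counts, data set B
stat = chi2_hist(h1, h2)
# compare against the 5% chi-square critical value with len(h1)-1 = 4 dof (9.49)
print(f"chi2 = {stat:.3f}; below 9.49, so no evidence the sets differ")
```

The authors' contribution is to attach to such a comparison an explicit probability that the "data sets are different" decision is correct, rather than a bare p-value.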

  16. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    Science.gov (United States)

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.
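
Many of the graph-partitioning methods compared here optimize, directly or indirectly, a quality function over partitions; Newman modularity is the classic example and can be sketched on a toy citation graph (hypothetical six-node graph, O(n²) naive implementation):

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition of an undirected graph.
    adj: dict node -> list of neighbours; communities: dict node -> label."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for v in adj:
        for u in adj:
            a = 1 if u in adj[v] else 0
            if communities[v] == communities[u]:
                q += a - len(adj[v]) * len(adj[u]) / (2 * m)
    return q / (2 * m)

# two densely linked publication groups joined by a single citation link
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(adj, part)
print(round(q, 3))  # → 0.357
```

Map equation methods, which the study found to perform best, score partitions by the description length of a random walk instead of by modularity, but the overall optimize-a-partition-quality structure is the same.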

  17. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    Science.gov (United States)

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610

  18. Steroid hormones in environmental matrices: extraction method comparison.

    Science.gov (United States)

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines in EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids, using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. In addition, a solid phase extraction (SPE) method was developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information on the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided the best recovery efficiencies and the lowest variability for the steroid hormones, followed by SFE. The in-house method developed for extraction from biosolids showed a wide recovery range but low variability (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and were shown to give highly variable recovery efficiencies. The results of this study offer guidance on sample preparation strategies for steroid hormone analysis, and SPE broadens the choice of methods for environmental sample analysis.

  19. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase-sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics.

  20. A comparison study of size-specific dose estimate calculation methods

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, Roshni A. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children' s Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)

    2018-01-15

    The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. The aim of this study was to compare the accuracy of thickness- vs. weight-based measurement of body size for calculating the SSDE in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically thinnest, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically thickest; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, method E used body weight to characterize patient size, and we compared this with the other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide
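
    For method A above, AAPM Report 204 defines the effective diameter as the geometric mean of the anteroposterior and lateral dimensions and obtains SSDE by applying a size-dependent conversion factor to CTDIvol. A hedged sketch follows; the exponential-fit coefficients for the 32-cm phantom are quoted from memory of Report 204 and should be verified against the report before any real use:

```python
import math

def effective_diameter(ap_cm, lat_cm):
    """Geometric mean of AP and lateral dimensions (AAPM Report 204)."""
    return math.sqrt(ap_cm * lat_cm)

# Exponential fit f(d) = a * exp(-b * d) of the conversion factor for the
# 32-cm CTDI phantom; coefficients are an assumption quoted from memory.
A_32CM, B_32CM = 3.704369, 0.03671937

def ssde(ctdi_vol_mgy, ap_cm, lat_cm):
    """SSDE = conversion factor (from effective diameter) times CTDIvol."""
    d = effective_diameter(ap_cm, lat_cm)
    return ctdi_vol_mgy * A_32CM * math.exp(-B_32CM * d)

d_eff = effective_diameter(20.0, 25.0)
dose = ssde(5.0, 18.0, 22.0)
```

    For a small pediatric patient the conversion factor exceeds 1, which is why SSDE is larger than the scanner-reported CTDIvol.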

  1. A Comparison Study between Two MPPT Control Methods for a Large Variable-Speed Wind Turbine under Different Wind Speed Characteristics

    Directory of Open Access Journals (Sweden)

    Dongran Song

    2017-05-01

    Full Text Available Variable-speed wind turbines (VSWTs) usually adopt a maximum power point tracking (MPPT) method to optimize energy capture. Nevertheless, the performance delivered by different MPPT methods may be affected by the wind turbine's (WT's) inertia and by wind speed characteristics, and this needs to be clarified. In this paper, the tip speed ratio (TSR) and optimal torque (OT) methods are investigated in terms of their performance under different wind speed characteristics on a 1.5 MW wind turbine model. To this end, the TSR control method, based on an effective wind speed estimator, and the OT control method are first presented. Their performance is then investigated and compared through simulation under different wind speeds using Bladed software. Comparison results show that the TSR control method captures slightly more wind energy, at the cost of higher component loads, than the OT method under all wind conditions. Furthermore, both control methods show similar trends of power reduction related to mean wind speed and turbulence intensity. From the obtained results, we conclude that, to further improve the MPPT capability of large VSWTs, advanced control methods using wind speed prediction information need to be addressed.
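
    The OT method discussed above sets the generator torque reference proportional to the square of rotor speed, with a gain derived from air density, rotor radius, the maximum power coefficient and the optimal tip speed ratio. A minimal sketch of that standard torque law (the parameter values below are illustrative, not those of the 1.5 MW model in the paper):

```python
import math

def optimal_torque_gain(air_density, rotor_radius, cp_max, lambda_opt):
    """k_opt = 0.5 * rho * pi * R^5 * Cp_max / lambda_opt^3 (standard OT gain)."""
    return 0.5 * air_density * math.pi * rotor_radius**5 * cp_max / lambda_opt**3

def torque_reference(k_opt, rotor_speed_rad_s):
    """OT control law: demanded generator torque T = k_opt * omega^2."""
    return k_opt * rotor_speed_rad_s**2

# Illustrative parameters: rho = 1.225 kg/m^3, R = 35 m, Cp_max = 0.48, lambda_opt = 8.1
k = optimal_torque_gain(1.225, 35.0, 0.48, 8.1)
```

    The quadratic law means the torque demand quadruples when rotor speed doubles, which is what steers the turbine toward the optimal tip speed ratio in steady wind.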

  2. Monitoring uterine activity during labor: a comparison of three methods

    Science.gov (United States)

    EULIANO, Tammy Y.; NGUYEN, Minh Tam; DARMANJIAN, Shalom; MCGORRAY, Susan P.; EULIANO, Neil; ONKALA, Allison; GREGG, Anthony R.

    2012-01-01

    Objective Tocodynamometry (Toco—strain gauge technology) provides contraction frequency and approximate duration of labor contractions, but suffers frequent signal dropout necessitating re-positioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information, but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all three methods of contraction detection simultaneously in laboring women. Study Design Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG and all three curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of one or more of the devices (12) or inadequate data collection duration (2). Results In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17, compared to 0.69 ± 0.27 for Toco. Unlike Toco, EHG was not significantly affected by obesity. Conclusion Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable non-invasive alternative regardless of body habitus. PMID:23122926

  3. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions, 12 for monthly means and 12 for monthly standard deviations, while NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB).
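
    The NN-SMS12L approach above (standardize the logarithms of a donor gage's streamflow by that gage's monthly means and standard deviations, then rescale with the regression-estimated monthly statistics of the ungaged site) can be sketched as follows. This is an illustrative reading of the method, not the USGS implementation; the function names and toy data are invented:

```python
import math
from statistics import mean, stdev

def monthly_log_stats(flows_by_month):
    """Per-month mean and standard deviation of log-transformed daily flows."""
    stats = {}
    for month, flows in flows_by_month.items():
        logs = [math.log(q) for q in flows]
        stats[month] = (mean(logs), stdev(logs))
    return stats

def transfer_flow(donor_flow, month, donor_stats, target_stats):
    """Standardize the donor's log flow, then destandardize with target-site stats."""
    mu_d, sd_d = donor_stats[month]
    mu_t, sd_t = target_stats[month]
    z = (math.log(donor_flow) - mu_d) / sd_d
    return math.exp(mu_t + z * sd_t)

# Toy January record at a hypothetical donor gage (m^3/s).
donor = {1: [10.0, 12.0, 9.0, 11.0]}
donor_stats = monthly_log_stats(donor)
```

    With identical donor and target statistics the transfer is the identity, which is a convenient sanity check; in practice the target statistics come from 24 regional regressions (12 monthly means, 12 monthly standard deviations).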

  4. Validation and comparison of two sampling methods to assess dermal exposure to drilling fluids and crude oil.

    Science.gov (United States)

    Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez

    2014-06-01

    Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  5. THE FORMATION OF A MILKY WAY-SIZED DISK GALAXY. I. A COMPARISON OF NUMERICAL METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Qirong; Li, Yuexing, E-mail: qxz125@psu.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States)

    2016-11-01

    The long-standing challenge of creating a Milky Way- (MW-) like disk galaxy from cosmological simulations has motivated significant developments in both numerical methods and physical models. We investigate these two fundamental aspects in a new comparison project using a set of cosmological hydrodynamic simulations of an MW-sized galaxy. In this study, we focus on the comparison of two particle-based hydrodynamics methods: an improved smoothed particle hydrodynamics (SPH) code Gadget, and a Lagrangian Meshless Finite-Mass (MFM) code Gizmo. All the simulations in this paper use the same initial conditions and physical models, which include star formation, “energy-driven” outflows, metal-dependent cooling, stellar evolution, and metal enrichment. We find that both numerical schemes produce a late-type galaxy with extended gaseous and stellar disks. However, notable differences are present in a wide range of galaxy properties and their evolution, including star-formation history, gas content, disk structure, and kinematics. Compared to Gizmo, the Gadget simulation produced a larger fraction of cold, dense gas at high redshift, which fuels rapid star formation and results in a 20% higher stellar mass and a 10% lower gas fraction at z = 0. The resulting gas disk in Gadget is also smoother and more coherent in rotation, due to damping of turbulent motion by the numerical viscosity in SPH, whereas the Gizmo simulation shows a more prominent spiral structure. Given its better convergence properties and lower computational cost, we argue that the MFM method is a promising alternative to SPH in cosmological hydrodynamic simulations.

  6. Comparison study on cell calculation method of fast reactor

    International Nuclear Information System (INIS)

    Chiba, Gou

    2002-10-01

    Effective cross sections obtained by cell calculations are used in core calculations in current deterministic methods. It is therefore important to calculate the effective cross sections accurately, and several methods have been proposed. In this study, some of these methods are compared to each other using a continuous-energy Monte Carlo method as a reference. The results show that the table look-up method used at the Japan Nuclear Cycle Development Institute (JNC) sometimes differs by more than 10% in effective microscopic cross sections and is inferior to the sub-group method. The problem was overcome by introducing a new nuclear constant system developed at JNC, in which an ultra-fine energy group library is used. The system can also handle resonance interaction effects between nuclides, which the other methods cannot consider. In addition, a new method was proposed to calculate effective cross sections accurately for power reactor fuel subassemblies, where the new nuclear constant system cannot be applied. This method uses the sub-group method and the ultra-fine energy group collision probability method. The microscopic effective cross sections obtained by this method agree with the reference values to within 5%. (author)

  7. A comparison of Nodal methods in neutron diffusion calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tavron, Barak [Israel Electric Company, Haifa (Israel) Nuclear Engineering Dept. Research and Development Div.

    1996-12-01

    The nuclear engineering department at IEC uses three neutron diffusion codes based on nodal methods for reactor analysis. The codes, GNOMER, ADMARC and NOXER, solve the neutron diffusion equation to obtain flux and power distributions in the core. The resulting flux distributions are used for fuel cycle analysis and for fuel reload optimization. This work presents a comparison of the various nodal methods employed in the above codes. Nodal methods (also called coarse-mesh methods) have been designed to solve problems that contain relatively coarse areas of homogeneous composition. In the nodal method, the parts of the equation that describe the state within a homogeneous area are solved analytically, while a general solution is sought according to various assumptions and continuity requirements. The efficiency of the method for this kind of problem is therefore very high compared with the finite element and finite difference methods. On the other hand, the method provides only approximate information about the node vicinity (the coarse-mesh area, usually a fuel assembly of about 20 cm in size). These characteristics make the nodal method suitable for fuel cycle analysis and reload optimization, which require many subsequent calculations of the flux and power distributions for the fuel assemblies while needing no detailed distribution within an assembly. When a detailed distribution within an assembly is required, power reconstruction methods may be applied. However, the homogenization of fuel assembly properties required by the nodal method may cause difficulties when applied to fuel assemblies with many absorber rods, due to the strong heterogeneity of neutron properties within the assembly. (author).

  8. On solutions of stochastic oscillatory quadratic nonlinear equations using different techniques, a comparison study

    International Nuclear Information System (INIS)

    El-Tawil, M A; Al-Jihany, A S

    2008-01-01

    In this paper, nonlinear oscillators under quadratic nonlinearity with stochastic inputs are considered. Different methods are used to obtain first-order approximations, namely the WHEP technique, the perturbation method, Picard approximations, the Adomian decomposition method and the homotopy perturbation method (HPM). Some statistical moments are computed for the different methods using Mathematica 5. Comparisons are illustrated through figures for different case studies.

  9. Comparison study on qualitative and quantitative risk assessment methods for urban natural gas pipeline network.

    Science.gov (United States)

    Han, Z Y; Weng, W G

    2011-05-15

    In this paper, a qualitative and a quantitative risk assessment method for urban natural gas pipeline networks are proposed. The qualitative method is based on an index system comprising a causation index, an inherent risk index, a consequence index and their corresponding weights. The quantitative method consists of a probability assessment, a consequence analysis and a risk evaluation. The outcome of the qualitative method is a qualitative risk value; for the quantitative method, the outcomes are individual risk and social risk. In comparison with previous research, the qualitative method proposed in this paper is particularly suitable for urban natural gas pipeline networks, and the quantitative method takes different accident consequences into consideration, such as toxic gas diffusion, jet flame, fireball combustion and UVCE. Two sample urban natural gas pipeline networks are used to demonstrate the two methods. Both methods can be applied in practice; the choice between them depends on the basic data available for the gas pipelines and the precision requirements of the risk assessment. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
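
    In quantitative risk assessment of the kind described above, individual risk at a location is commonly computed as the sum, over accident scenarios (jet flame, fireball, UVCE, toxic release), of the scenario frequency multiplied by the conditional probability of fatality at that location. A minimal sketch with invented numbers, not values from the paper:

```python
def individual_risk(scenarios):
    """Annual individual risk: sum of frequency * conditional fatality probability."""
    return sum(freq * p_fatality for freq, p_fatality in scenarios)

# Hypothetical per-scenario values at one location
# (annual frequency, conditional probability of fatality):
scenarios = [
    (1.0e-5, 0.30),  # jet flame
    (2.0e-6, 0.80),  # fireball
    (5.0e-7, 0.95),  # UVCE
]
risk = individual_risk(scenarios)
```

    Societal risk extends the same ingredients by also counting the number of people exposed per scenario, usually plotted as an F-N curve.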

  10. From a tree to a stand in Finnish boreal forests: biomass estimation and comparison of methods

    OpenAIRE

    Liu, Chunjiang

    2009-01-01

    There is an increasing need to compare the results obtained with different methods of estimation of tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)-Scots pine stand (Mature-Stand) located in southern Finland (61º50' N, 24º22' E). In particular, a comparison of the results of different estimati...

  11. Description of comparison method for evaluating spatial or temporal homogeneity of environmental monitoring data

    International Nuclear Information System (INIS)

    Mecozzi, M.; Cicero, A.M.

    1995-01-01

    In this paper, a comparison method for verifying the homogeneity or inhomogeneity of environmental monitoring data is described. The method is based on the simultaneous application of three statistical tests: one-way ANOVA, Kruskal-Wallis and one-way IANOVA. Robust tests such as IANOVA and Kruskal-Wallis can be more efficient than the usual ANOVA methods because they are resistant to the presence of outliers and to departures from a normal distribution of the data. The study shows that a conclusion about the presence or absence of homogeneity in the data set is validated when it is confirmed by at least two of the tests.
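
    Of the three tests combined above, the classical one-way ANOVA F statistic is the simplest to reproduce: the between-group mean square divided by the within-group mean square. A self-contained sketch (pure Python, no stats library; the groups are invented monitoring data, and a real application would compare F against the F-distribution critical value):

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group over within-group mean squares."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Three monitoring stations; the third is clearly shifted, so F is large
# and homogeneity would be rejected.
F = one_way_anova_F([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [10.0, 11.0, 12.0]])
```

    The Kruskal-Wallis test follows the same between/within logic but on ranks rather than raw values, which is what makes it robust to outliers.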

  12. A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies

    Directory of Open Access Journals (Sweden)

    Víctor Villena-Martínez

    2017-01-01

    Full Text Available RGB-D (Red, Green, Blue and Depth) sensors are devices that can provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions due to their commercial growth from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). In the research community, these devices have had good uptake due to their acceptable level of accuracy for many applications and their low cost, but in some cases they work at the limit of their sensitivity, near the minimum feature size that can be perceived. For this reason, calibration processes are critical in order to increase their accuracy and enable them to meet the requirements of such applications. To the best of our knowledge, there is no comparative study of calibration algorithms evaluating their results on multiple RGB-D sensors. Specifically, in this paper, a comparison of the three most used calibration methods is applied to three different RGB-D sensors based on structured light and time-of-flight. The comparison was carried out through a set of experiments to evaluate the accuracy of depth measurements. Additionally, an object reconstruction application was used as an example of an application for which the sensor works at the limit of its sensitivity. The reconstruction results were evaluated through visual inspection and quantitative measurements.
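
    Calibration of the RGB-D sensors discussed above ultimately estimates the parameters of a pinhole camera model: focal lengths fx, fy and principal point cx, cy, plus distortion terms omitted here. A minimal sketch of the projection those parameters define (the numeric values below are invented, merely typical of VGA depth cameras):

```python
def project(point_xyz, fx, fy, cx, cy):
    """Project a 3-D point in the camera frame to pixel coordinates (no distortion)."""
    x, y, z = point_xyz
    return (fx * x / z + cx, fy * y / z + cy)

# A point on the optical axis lands exactly on the principal point.
u, v = project((0.0, 0.0, 1.5), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

    Calibration procedures fit fx, fy, cx, cy (and distortion coefficients) by minimizing the reprojection error of known target points under this model.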

  13. Sampling methods for the study of pneumococcal carriage: a systematic review.

    Science.gov (United States)

    Gladstone, R A; Jefferies, J M; Faust, S N; Clarke, S C

    2012-11-06

    Streptococcus pneumoniae is an important pathogen worldwide. Accurate sampling of S. pneumoniae carriage is central to surveillance studies before and following conjugate vaccination programmes to combat pneumococcal disease. Any bias introduced during sampling will affect downstream recovery and typing. Many variables exist for the method of collection and initial processing, which can make inter-laboratory or international comparisons of data complex. In February 2003, a World Health Organisation working group published a standard method for the detection of pneumococcal carriage for vaccine trials to reduce or eliminate variability. We sought to describe the variables associated with the sampling of S. pneumoniae from collection to storage in the context of the methods recommended by the WHO and those used in pneumococcal carriage studies since its publication. A search of published literature in the online PubMed database was performed on the 1st June 2012, to identify published studies that collected pneumococcal carriage isolates, conducted after the publication of the WHO standard method. After undertaking a systematic analysis of the literature, we show that a number of differences in pneumococcal sampling protocol continue to exist between studies since the WHO publication. The majority of studies sample from the nasopharynx, but the choice of swab and swab transport media is more variable between studies. At present there is insufficient experimental data that supports the optimal sensitivity of any standard method. This may have contributed to incomplete adoption of the primary stages of the WHO detection protocol, alongside pragmatic or logistical issues associated with study design. Consequently studies may not provide a true estimate of pneumococcal carriage. Optimal sampling of carriage could lead to improvements in downstream analysis and the evaluation of pneumococcal vaccine impact and extrapolation to pneumococcal disease control therefore

  14. Sampling for ants in different-aged spruce forests: A comparison of methods

    Czech Academy of Sciences Publication Activity Database

    Véle, A.; Holuša, J.; Frouz, Jan

    2009-01-01

    Roč. 45, č. 4 (2009), s. 301-305 ISSN 1164-5563 Institutional research plan: CEZ:AV0Z60660521 Keywords : ants * baits * methods comparison Subject RIV: EH - Ecology, Behaviour Impact factor: 1.247, year: 2009

  15. A comparison of methods to determine tannin acyl hydrolase activity

    Directory of Open Access Journals (Sweden)

    Cristóbal Aguilar

    1999-01-01

    Full Text Available Six methods for determining the activity of tannase produced by Aspergillus niger Aa-20 on polyurethane foam by solid-state fermentation, comprising two titrimetric techniques, three spectrophotometric methods and one HPLC assay, were tested and compared. All of the methods assayed enabled measurement of extracellular tannase activity; however, only five were useful for evaluating intracellular tannase activity. Studies on the effect of pH on tannase extraction demonstrated that tannase activity was considerably under-estimated when extraction was carried out at pH values below 5.5 or above 6.0. The results showed that the HPLC technique and the modified Bajpai and Patil method presented several advantages in comparison with the other methods tested.

  16. An interlaboratory comparison of methods for measuring rock matrix porosity

    International Nuclear Information System (INIS)

    Rasilainen, K.; Hellmuth, K.H.; Kivekaes, L.; Ruskeeniemi, T.; Melamed, A.; Siitari-Kauppi, M.

    1996-09-01

    An interlaboratory comparison study was conducted for the available Finnish methods of rock matrix porosity measurements. The aim was first to compare different experimental methods for future applications, and second to obtain quality assured data for the needs of matrix diffusion modelling. Three different versions of water immersion techniques, a tracer elution method, a helium gas through-diffusion method, and a C-14-PMMA method were tested. All methods selected for this study were established experimental tools in the respective laboratories, and they had already been individually tested. Rock samples for the study were obtained from a homogeneous granitic drill core section from the natural analogue site at Palmottu. The drill core section was cut into slabs that were expected to be practically identical. The subsamples were then circulated between the different laboratories using a round robin approach. The circulation was possible because all methods were non-destructive, except the C-14-PMMA method, which was always the last method to be applied. The possible effect of drying temperature on the measured porosity was also preliminarily tested. These measurements were done in the order of increasing drying temperature. Based on the study, it can be concluded that all methods are comparable in their accuracy. The selection of methods for future applications can therefore be based on practical considerations. Drying temperature seemed to have very little effect on the measured porosity, but a more detailed study is needed for definite conclusions. (author) (4 refs.)
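
    The water-immersion techniques compared above all reduce to Archimedes' principle: the mass gained on saturation gives the pore volume, and the apparent mass loss when weighed submerged gives the bulk volume. A minimal sketch (assuming a water density of 1 g/cm³; the values are illustrative of a low-porosity granite, not measurements from the Palmottu samples):

```python
def immersion_porosity(m_dry_g, m_sat_g, m_sub_g):
    """Water-immersion porosity: pore volume / bulk volume.

    m_dry_g: dry mass; m_sat_g: water-saturated mass weighed in air;
    m_sub_g: saturated mass weighed while submerged in water.
    """
    pore_volume_cm3 = m_sat_g - m_dry_g   # 1 g of pore water ~ 1 cm^3
    bulk_volume_cm3 = m_sat_g - m_sub_g   # buoyancy equals displaced water mass
    return pore_volume_cm3 / bulk_volume_cm3

# Illustrative granite slab: 1 g of water uptake on a 100 cm^3 bulk volume.
phi = immersion_porosity(m_dry_g=260.0, m_sat_g=261.0, m_sub_g=161.0)
```

    The sub-percent porosities typical of granite explain why drying temperature and weighing precision matter so much in such interlaboratory comparisons.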

  17. Monitoring uterine activity during labor: a comparison of 3 methods.

    Science.gov (United States)

    Euliano, Tammy Y; Nguyen, Minh Tam; Darmanjian, Shalom; McGorray, Susan P; Euliano, Neil; Onkala, Allison; Gregg, Anthony R

    2013-01-01

    Tocodynamometry (Toco; strain gauge technology) provides contraction frequency and approximate duration of labor contractions but suffers frequent signal dropout, necessitating repositioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all 3 methods of contraction detection simultaneously in laboring women. Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG, and all 3 curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of 1 or more of the devices (n = 12) or inadequate data collection duration (n = 2). In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17 compared with 0.69 ± 0.27 for Toco. Unlike Toco, EHG was not significantly affected by obesity. Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable noninvasive alternative, regardless of body habitus. Copyright © 2013 Mosby, Inc. All rights reserved.

  18. Comparison of infusion pumps calibration methods

    Science.gov (United States)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of this flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The results obtained were directly related to the calibration method used and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
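The gravimetric method boils down to converting a timed balance reading into a flow rate. A minimal sketch, with an assumed water density and a simplified air-buoyancy correction (the laboratories' full procedure also accounts for evaporation, needle effects, and temperature):

```python
# Illustrative sketch of a gravimetric flow-rate check for an infusion
# pump. The densities and the simple buoyancy correction are assumptions
# for the example, not the accredited laboratories' full uncertainty model.

RHO_WATER = 0.998   # g/mL at ~20 degC (assumed)
RHO_AIR = 0.0012    # g/mL (assumed)
RHO_WEIGHTS = 8.0   # g/mL, conventional density of calibration weights

def measured_flow_ml_per_h(mass_g, duration_s):
    """Mean flow rate from the mass collected over a timed interval."""
    buoyancy = (1 - RHO_AIR / RHO_WEIGHTS) / (1 - RHO_AIR / RHO_WATER)
    volume_ml = (mass_g / RHO_WATER) * buoyancy
    return volume_ml * 3600.0 / duration_s

def flow_error_percent(set_flow_ml_per_h, mass_g, duration_s):
    q = measured_flow_ml_per_h(mass_g, duration_s)
    return 100.0 * (q - set_flow_ml_per_h) / set_flow_ml_per_h

# Pump set to 10 mL/h, 2.51 g of water collected in 15 minutes:
err = flow_error_percent(10.0, 2.51, 900.0)
print(f"flow error = {err:+.2f} %")
```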

  19. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bä ck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2010-01-01

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods

  20. Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources

    Directory of Open Access Journals (Sweden)

    Alireza Borhani Dariane

    2009-01-01

    Water resources optimization problems are usually complex and hard to solve using ordinary optimization methods, or at least cannot be solved by them economically. A great number of studies have been conducted in quest of suitable methods capable of handling such problems. In recent years, some new heuristic methods such as genetic and ant algorithms have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of such heuristic methods as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) has been studied for optimizing reservoir operation. The Dez Dam reservoir in Iran was chosen for a case study. The methods were applied and compared using short-term (one-year) and long-term models. Comparison of the results showed that GA outperforms both dynamic programming (DP) and ACO in finding true global optimum solutions and operating rules.
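A toy illustration of the genetic-algorithm approach to reservoir operation. The inflows, capacity, release bounds, and square-root benefit function below are invented for illustration and are far simpler than the paper's Dez reservoir model:

```python
import random
random.seed(1)

# Toy GA for a 12-period reservoir release schedule. All numbers are
# hypothetical; infeasible storage trajectories are penalized heavily.

INFLOW = [60, 55, 40, 30, 20, 15, 10, 15, 25, 40, 50, 60]  # hm3/month
CAPACITY, S0 = 300.0, 150.0

def fitness(releases):
    s, benefit = S0, 0.0
    for q_in, r in zip(INFLOW, releases):
        s = s + q_in - r
        if s < 0 or s > CAPACITY:            # infeasible storage
            benefit -= 1000.0                # heavy penalty
            s = min(max(s, 0.0), CAPACITY)
        benefit += r ** 0.5                  # diminishing-return benefit
    return benefit

def evolve(pop_size=60, generations=200, pm=0.2):
    pop = [[random.uniform(0, 80) for _ in INFLOW] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # two parents by 3-way tournament selection
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in "ab")
            child = [(x + y) / 2 for x, y in zip(a, b)]       # blend crossover
            if random.random() < pm:                          # mutation
                i = random.randrange(len(child))
                child[i] = min(80.0, max(0.0, child[i] + random.gauss(0, 5)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 2))
```

The penalty-plus-tournament pattern is the standard way to keep a GA inside operational constraints without constraining the encoding itself.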

  1. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    Science.gov (United States)

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. Interrater reliability and validity were analyzed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the methods, and no systematic bias was observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.
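The Bland-Altman portion of such an analysis reduces to the bias and 95% limits of agreement of the paired differences. A minimal sketch with invented rater scores (not the study's data):

```python
import statistics

# Sketch of a Bland-Altman analysis for two raters scoring percent
# viable tissue on the same flaps; all numbers are made up.

rater_a = [62.1, 70.4, 55.0, 80.2, 66.7, 73.9, 59.3, 77.5]
rater_b = [61.5, 71.0, 54.2, 79.8, 67.9, 73.1, 60.0, 76.8]

diffs = [a - b for a, b in zip(rater_a, rater_b)]
bias = statistics.mean(diffs)                 # systematic difference
sd = statistics.stdev(diffs)                  # spread of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95 % limits of agreement

print(f"bias = {bias:+.2f}, limits of agreement = "
      f"({loa[0]:+.2f}, {loa[1]:+.2f})")
```

"No systematic bias" in the abstract corresponds to a bias near zero with the zero line inside the limits of agreement.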

  2. A Comparison between the Effect of Cooperative Learning Teaching Method and Lecture Teaching Method on Students' Learning and Satisfaction Level

    Science.gov (United States)

    Mohammadjani, Farzad; Tonkaboni, Forouzan

    2015-01-01

    The aim of the present research is to investigate a comparison between the effect of cooperative learning teaching method and lecture teaching method on students' learning and satisfaction level. The research population consisted of all the fourth grade elementary school students of educational district 4 in Shiraz. The statistical population…

  3. A Comparison of Central Composite Design and Taguchi Method for Optimizing Fenton Process

    Directory of Open Access Journals (Sweden)

    Anam Asghar

    2014-01-01

    In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. The initial dye concentration (Dye_ini), the Dye:Fe2+ and H2O2:Fe2+ ratios, and pH were identified as control variables, while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and a face-centered CCD were used for the experimental design. Maximum 99% decolorization and 80% COD removal efficiency were obtained under optimum conditions. R-squared values of 0.97 and 0.95 for CCD and the Taguchi method, respectively, indicate that both models are statistically significant and in good agreement with each other. Furthermore, Prob > F less than 0.0500 and the ANOVA results indicate the good fit of the selected models to the experimental results. Nevertheless, the possibility of ranking the input variables by percent contribution to the response value makes the Taguchi method a suitable approach for scrutinizing the operating parameters. For the present case, pH, with percent contributions of 87.62% and 66.2%, was ranked the most significant factor. This finding of the Taguchi method was also verified by the 3D contour plots of the CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications.
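The Taguchi percent-contribution ranking mentioned above comes from partitioning the total sum of squares over the columns of the orthogonal array. A sketch on a standard L9 (3^4) layout with hypothetical decolorization responses (not the paper's data):

```python
# For each factor, group the 9 responses by that factor's level and
# compare the level means with the grand mean. In a saturated L9 the
# four factor sums of squares partition the total exactly.

L9 = [  # levels (1-3) of factors A=Dye_ini, B=Dye:Fe2+, C=H2O2:Fe2+, D=pH
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]
y = [72.0, 81.0, 88.0, 65.0, 90.0, 79.0, 60.0, 85.0, 74.0]  # % removal

grand = sum(y) / len(y)
ss_total = sum((v - grand) ** 2 for v in y)

def percent_contribution(factor):
    ss = 0.0
    for level in (1, 2, 3):
        group = [v for row, v in zip(L9, y) if row[factor] == level]
        ss += len(group) * (sum(group) / len(group) - grand) ** 2
    return 100.0 * ss / ss_total

for name, idx in zip("ABCD", range(4)):
    print(name, round(percent_contribution(idx), 1))
```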

  4. A METHOD OF INTEGRATED ASSESSMENT OF LIVER FIBROSIS DEGREE: A COMPARISON OF THE OPTICAL METHOD FOR THE STUDY OF RED BLOOD CELLS AND INDIRECT ELASTOGRAPHY OF THE LIVER

    Directory of Open Access Journals (Sweden)

    M. V. Kruchinina

    2017-01-01

    The presented method of integrated assessment of the degree of liver fibrosis is based on comparing electric and viscoelastic parameters of erythrocytes, obtained by dielectrophoresis using an electro-optical cell detection system, with the results of indirect elastometry. A high degree of comparability between the two methods was established for fibrosis degrees F2-4 in the absence of marked cytolysis, cholestasis, inflammatory syndrome, or metal overload. Parallel use of dielectrophoresis and indirect elastography is needed in the presence of transaminase or gamma-glutamyltranspeptidase elevations above 5 times the norm, pronounced dysproteinemia, or syndromes of iron or copper overload, to increase the accuracy of determining the degree of liver fibrosis. The dynamics of changes in the degree of fibrosis during antiviral therapy are evaluated more accurately by indirect elastometry, whereas dielectrophoresis is preferable in the treatment of nonalcoholic fatty liver disease. Where the use of indirect elastography is restricted (marked obesity, ascites, cholelithiasis, pregnancy, presence of a pacemaker or prosthesis), dielectrophoresis of red blood cells can be used to determine the degree of fibrosis. Simultaneous use of both methods (with the identified discriminatory values) reduces or neutralizes their individual disadvantages and dependence on associated syndromes, expands the range of their application, and improves diagnostic accuracy in determining each degree of liver fibrosis. Integrated application of both methods, indirect elastography and dielectrophoresis of red blood cells, achieves high sensitivity (88.9 percent) and specificity (100 percent) compared to

  5. A comparison between NASCET and ECST methods in the study of carotids

    International Nuclear Information System (INIS)

    Saba, Luca; Mallarini, Giorgio

    2010-01-01

    Purpose: The NASCET and ECST systems to quantify carotid artery stenosis use percent diameter ratios from conventional angiography. With the use of Multi-Detector-Row CT scanners it is possible to easily measure plaque area and residual lumen in order to calculate the degree of carotid stenosis. Our purpose was to compare the NASCET and ECST techniques in the measurement of carotid stenosis degree by using MDCTA. Methods and material: From February 2007 to October 2007, 83 non-consecutive patients (68 males; 15 females) were studied using Multi-Detector-Row CT. Each patient was assessed by two experienced radiologists for stenosis degree by using both the NASCET and ECST methods. Statistical analysis was performed to determine the degree of correlation (Pearson's method) between NASCET and ECST. The Cohen kappa test and Bland-Altman analysis were applied to assess the level of inter- and intra-observer agreement. Results: The Pearson correlation coefficient between NASCET and ECST was 0.962 (p < 0.01). Intra-observer agreement in the NASCET evaluation, using the Cohen statistic, was 0.844 and 0.825. Intra-observer agreement in the ECST evaluation was 0.871 and 0.836. Inter-observer agreement for NASCET and ECST was 0.822 and 0.834, respectively. Agreement analysis using Bland-Altman plots showed good intra-/inter-observer agreement for NASCET and optimal intra-/inter-observer agreement for ECST. Conclusions: The results of our study suggest that the NASCET and ECST methods show a strong correlation according to quadratic regression. Intra-observer agreement was high for both NASCET and ECST.
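For reference, the two grading systems reduce to simple diameter ratios. A minimal sketch with hypothetical measurements:

```python
# NASCET and ECST stenosis grades from diameter measurements (mm).
# The example diameters are invented for illustration.

def nascet_percent(residual_lumen_mm, distal_ica_mm):
    """NASCET: narrowing relative to the normal distal ICA diameter."""
    return 100.0 * (1.0 - residual_lumen_mm / distal_ica_mm)

def ecst_percent(residual_lumen_mm, estimated_original_mm):
    """ECST: narrowing relative to the estimated original (outer)
    carotid bulb diameter at the level of the stenosis."""
    return 100.0 * (1.0 - residual_lumen_mm / estimated_original_mm)

# Same plaque, graded both ways: 2 mm residual lumen, 4 mm distal ICA,
# 8 mm estimated bulb diameter.
print(nascet_percent(2.0, 4.0))   # 50.0
print(ecst_percent(2.0, 8.0))     # 75.0
```

Because the ECST reference diameter is larger, ECST grades the same lesion as more severe, which is why the two scales correlate strongly but nonlinearly.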

  6. Assessment and comparison of methods for solar ultraviolet radiation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, K

    1995-06-01

    In this study, different methods for measuring solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband, and multi-channel measurements. The comparison of the different methods is based on a literature review and on assessments of the optical characteristics of the Optronic 742 spectroradiometer of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements and to methods for radiometric characterization of UV radiometers, together with methods for error reduction, is presented. Reviews of experiences from world-wide UV monitoring efforts and instrumentation, as well as of the results from international UV radiometer intercomparisons, are also presented. (62 refs.).

  7. Assessment and comparison of methods for solar ultraviolet radiation measurements

    International Nuclear Information System (INIS)

    Leszczynski, K.

    1995-06-01

    In this study, different methods for measuring solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband, and multi-channel measurements. The comparison of the different methods is based on a literature review and on assessments of the optical characteristics of the Optronic 742 spectroradiometer of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements and to methods for radiometric characterization of UV radiometers, together with methods for error reduction, is presented. Reviews of experiences from world-wide UV monitoring efforts and instrumentation, as well as of the results from international UV radiometer intercomparisons, are also presented. (62 refs.)

  8. Comparison of hardenability calculation methods of the heat-treatable constructional steels

    Energy Technology Data Exchange (ETDEWEB)

    Dobrzanski, L.A.; Sitek, W. [Division of Tool Materials and Computer Techniques in Metal Science, Silesian Technical University, Gliwice (Poland)

    1995-12-31

    The consistency of calculated hardenability curves of selected heat-treatable alloy constructional steels with experimental data has been evaluated. The study was conducted based on an analysis of the present state of knowledge on hardenability calculation, employing neural network methods. Several calculation examples and a comparison of the consistency of the calculation methods employed are included. (author). 35 refs, 2 figs, 3 tabs.

  9. Comparison of genetic algorithms with conjugate gradient methods

    Science.gov (United States)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.

  10. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    International Nuclear Information System (INIS)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I.; Rota Kops, Elena; Shah, N. Jon; Ribeiro, Andre; Yakushev, Igor

    2016-01-01

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-map) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the different performance obtained by some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-fluorodeoxyglucose PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision in diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20% to 10% were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5%. The precision obtained at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9% and 79.5% for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3% on average for the four new methods, which exhibited similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are inferior

  11. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    Energy Technology Data Exchange (ETDEWEB)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I. [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Rota Kops, Elena; Shah, N. Jon [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Ribeiro, Andre [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Institute of Biophysics and Biomedical Engineering, Lisbon (Portugal); Yakushev, Igor [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Institute TUM Neuroimaging Center (TUM-NIC), Munich (Germany)

    2016-11-15

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-map) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the different performance obtained by some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-fluorodeoxyglucose PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision in diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20% to 10% were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5%. The precision obtained at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9% and 79.5% for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3% on average for the four new methods, which exhibited similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are

  12. A Simulation-Based Study on the Comparison of Statistical and Time Series Forecasting Methods for Early Detection of Infectious Disease Outbreaks.

    Science.gov (United States)

    Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho

    2018-05-11

    Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based not only on 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, but also on real-world daily and weekly data related to diarrheal infection. The algorithms were evaluated using different metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). The comparison results showed better overall performance for the EARS C3 method than for the other algorithms regardless of the characteristics of the underlying time series data, although Holt-Winters showed better performance when the baseline frequency and the dispersion parameter values were less than 1.5 and 2, respectively.
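Of the compared algorithms, CUSUM is the simplest to sketch. A minimal one-sided version for daily counts (the baseline parameters, reference value k, and threshold h are illustrative choices, not the paper's settings):

```python
# One-sided CUSUM on standardized counts: accumulate upward deviations
# beyond a reference value k and alarm when the sum exceeds h.

def cusum_alarms(counts, mean, sd, k=0.5, h=4.0):
    """Return the indices where the standardized CUSUM exceeds h."""
    s, alarms = 0.0, []
    for t, x in enumerate(counts):
        z = (x - mean) / sd            # standardize against the baseline
        s = max(0.0, s + z - k)        # accumulate upward deviations only
        if s > h:
            alarms.append(t)
    return alarms

baseline_mean, baseline_sd = 10.0, 3.0
series = [9, 11, 10, 8, 12, 10, 9, 22, 25, 27, 24, 11, 10]
print(cusum_alarms(series, baseline_mean, baseline_sd))
```

With these numbers the first alarm fires one step after the simulated jump begins (index 8), and the statistic stays elevated afterwards; operational systems typically reset s once an alarm is handled.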

  13. Comparison of urine analysis using manual and sedimentation methods.

    Science.gov (United States)

    Kurup, R; Leich, M

    2012-06-01

    Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine. However, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To encourage a high level of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study was to determine whether the centrifuged method is of greater clinical value than the uncentrifuged method. In this study, a comparison between the results obtained from the centrifuged and uncentrifuged methods was performed. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results obtained from both methods were recorded in a log book. These results were then entered into a database created in Microsoft Excel, and analysed for differences and similarities using this application. Analysis was further done in SPSS software to compare the results using Pearson's correlation. When compared using Pearson's correlation coefficient analysis, both methods showed a good correlation between urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods. However, the uncentrifuged method provides a rapid turnaround time.

  14. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
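As an illustration of one of the regression models such tools offer, simple (unweighted) Deming regression has a closed form given an assumed error-variance ratio. A sketch with invented data (delta = 1 gives orthogonal regression; this is not the website's implementation):

```python
import statistics

# Deming regression: fits y = b0 + b1*x allowing measurement error in
# both methods; delta is the assumed ratio of the y- to x-error
# variances. Assumes the covariance sxy is nonzero.

def deming(x, y, delta=1.0):
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((v - mx) ** 2 for v in x) / (n - 1)
    syy = sum((v - my) ** 2 for v in y) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx +
             ((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2) ** 0.5
             ) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

method_a = [1.0, 2.1, 3.0, 4.2, 5.1, 6.0]
method_b = [1.2, 2.3, 3.1, 4.5, 5.2, 6.3]
b1, b0 = deming(method_a, method_b)
print(f"y = {b1:.3f} x + {b0:.3f}")
```

Unlike ordinary least squares, the fit is symmetric in the two methods when delta = 1, which is why Deming-type models are preferred for method comparison.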

  15. Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.

    Science.gov (United States)

    Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L

    2013-06-01

    Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increased methodological inclusiveness and the production of evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011-2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration to preserve the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.

  16. An algebraic topological method for multimodal brain networks comparison

    Directory of Open Access Journals (Sweden)

    Tiago eSimas

    2015-07-01

    Understanding brain connectivity is one of the most important issues in neuroscience. Nonetheless, connectivity data can reflect either functional relationships of brain activities or anatomical connections between brain areas. Although both representations should be related, this relationship is not straightforward. We have devised a powerful method that allows different operations between networks that share the same set of nodes, by embedding them in a common metric space, enforcing transitivity to the graph topology. Here, we apply this method to construct an aggregated network from a set of functional graphs, each one from a different subject. Once this aggregated functional network is constructed, we again use our method to compare it with the structural connectivity to identify particular brain regions that differ in both modalities (anatomical and functional). Remarkably, these brain regions include functional areas that form part of the classical resting state networks. We conclude that our method, based on the comparison of the aggregated functional network, reveals some emerging features that could not be observed when the comparison is performed with the classical averaged functional network.

  17. Matrix diffusion studies by electrical conductivity methods. Comparison between laboratory and in-situ measurements

    International Nuclear Information System (INIS)

    Ohlsson, Y.; Neretnieks, I.

    1998-01-01

    Traditional laboratory diffusion experiments in rock material are time consuming, and quite small samples are generally used. Electrical conductivity measurements, on the other hand, provide a fast means of examining transport properties in rock and allow measurements on larger samples as well. Laboratory measurements using electrical conductivity give results that compare well with those from traditional diffusion experiments. The measurement of the electrical resistivity in the rock surrounding a borehole is a standard method for the detection of water-conducting fractures. If these data could be correlated to matrix diffusion properties, in-situ diffusion data from large areas could be obtained. This would be valuable because it would make it possible to obtain data very early in future investigations of potentially suitable sites for a repository. This study compares laboratory electrical conductivity measurements with in-situ resistivity measurements from a borehole at Aespoe. The laboratory samples consist mainly of Aespoe diorite and fine-grained granite, and the rock surrounding the borehole consists of Aespoe diorite, Smaaland granite and fine-grained granite. The comparison shows good agreement between the laboratory measurements and the in-situ data.
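The link between resistivity and matrix diffusion assumed in this kind of work runs through the formation factor. A back-of-the-envelope sketch with illustrative numbers (the values and the simple De = Dw/F relation ignore surface conduction and anion exclusion):

```python
# The formation factor F relates the resistivity of the saturated rock
# to that of its pore water, and scales the free-water diffusivity down
# to an effective in-rock value. All numbers are illustrative only.

def formation_factor(rho_rock_ohm_m, rho_water_ohm_m):
    return rho_rock_ohm_m / rho_water_ohm_m

def effective_diffusivity(d_free_water, f):
    """De = Dw / F, a common first-order approximation."""
    return d_free_water / f

F = formation_factor(5000.0, 0.5)        # e.g. granite vs saline pore water
De = effective_diffusivity(2.0e-9, F)    # Dw ~ 2e-9 m2/s for many ions
print(f"F = {F:.0f}, De = {De:.2e} m2/s")
```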

  18. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    Science.gov (United States)

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the segmentation schemes that are usable with little new parameterization and are robust across different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells: the maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of the methods based on the watershed transform was significantly superior to that of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that produces a dotted pattern on the cell surface. Copyright © 2011 International Society for Advancement of Cytometry.
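Morphological opening by reconstruction, the operation credited above with improving segmentation of dotted staining, can be sketched in pure NumPy (4-connected, for illustration; production code would use an optimized image library):

```python
import numpy as np

def dilate4(img):
    """Grayscale dilation with a 4-connected cross structuring element."""
    out = img.copy()
    out[1:, :] = np.maximum(out[1:, :], img[:-1, :])
    out[:-1, :] = np.maximum(out[:-1, :], img[1:, :])
    out[:, 1:] = np.maximum(out[:, 1:], img[:, :-1])
    out[:, :-1] = np.maximum(out[:, :-1], img[:, 1:])
    return out

def erode4(img):
    """Grayscale erosion; border pixels use their in-image neighbors only."""
    out = img.copy()
    out[1:, :] = np.minimum(out[1:, :], img[:-1, :])
    out[:-1, :] = np.minimum(out[:-1, :], img[1:, :])
    out[:, 1:] = np.minimum(out[:, 1:], img[:, :-1])
    out[:, :-1] = np.minimum(out[:, :-1], img[:, 1:])
    return out

def opening_by_reconstruction(img):
    """Erode once, then geodesic dilation under the original image."""
    cur = erode4(img)
    while True:
        nxt = np.minimum(dilate4(cur), img)
        if np.array_equal(nxt, cur):
            return cur
        cur = nxt

# A compact 3x3 blob survives intact while an isolated bright pixel,
# like a dotted-staining artifact, is removed.
img = np.zeros((7, 7))
img[1:4, 1:4] = 1.0      # compact object
img[5, 5] = 1.0          # single-pixel "dot"
opened = opening_by_reconstruction(img)
print(opened.sum())      # 9.0: the blob only
```

This is the key property of reconstruction-based opening: objects that survive the erosion are restored to their exact original shape rather than being rounded off.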

  19. Comparisons of ratchetting analysis methods using RCC-M, RCC-MR and ASME codes

    International Nuclear Information System (INIS)

    Yang Yu; Cabrillat, M.T.

    2005-01-01

    The present paper compares the simplified ratcheting analysis methods used in RCC-M, RCC-MR and ASME with some examples. First, the methods in RCC-M and the efficiency diagram in RCC-MR are investigated. A special method is used to describe these two approaches with curves in one coordinate system, and their different degrees of conservatism are demonstrated. The RCC-M method is also interpreted using SR (second ratio) and v (efficiency index) as used in RCC-MR. Hence, the two methods can easily be compared by defining SR as the abscissa and v as the ordinate and plotting both curves. Second, the efficiency curve in RCC-MR and the methods in ASME-NH Appendix T are compared in the presence of significant creep. Finally, two practical evaluations are performed to illustrate the comparisons of the aforementioned methods. (authors)

  20. A comparison of the weights-of-evidence method and probabilistic neural networks

    Science.gov (United States)

    Singer, Donald A.; Kouda, Ryoichi

    1999-01-01

    The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake-Andeson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, and the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Despite these data containing too few deposits, the tests demonstrate the neural network's ability to make unbiased probability estimates and to achieve lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method show a strong bias in which most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake-Andeson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits.
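The core weights-of-evidence calculation the abstract refers to can be sketched from cell counts alone: for one binary evidence layer, the positive and negative weights are log likelihood ratios, and their difference (the contrast) measures association with deposits. The counts below are illustrative, not the Chisel Lake data; under Bayes rule, the posterior log-odds of a deposit is the prior log-odds plus the sum of layer weights, which is only unbiased if the layers are conditionally independent, as the abstract notes.

```python
import math

def weights_of_evidence(n_cells, n_deposits, n_evidence, n_evidence_with_deposit):
    """W+, W- and contrast C for one binary evidence layer, from cell counts."""
    d, nd = n_deposits, n_cells - n_deposits
    b_d = n_evidence_with_deposit                # evidence present, deposit present
    b_nd = n_evidence - n_evidence_with_deposit  # evidence present, no deposit
    w_plus = math.log((b_d / d) / (b_nd / nd))
    w_minus = math.log(((d - b_d) / d) / ((nd - b_nd) / nd))
    return w_plus, w_minus, w_plus - w_minus     # contrast C = W+ - W-

# Illustrative counts: 100 cells, 10 with deposits; the evidence layer covers
# 40 cells, 8 of which contain deposits.
wp, wm, c = weights_of_evidence(100, 10, 40, 8)
```

A positive contrast, as here, indicates the layer is favourable evidence for deposits.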

  1. Comparison methods between methane and hydrogen combustion for useful transfer in furnaces

    International Nuclear Information System (INIS)

    Ghiea, V.V.

    2009-01-01

    The advantages and disadvantages of using hydrogen in industrial combustion are critically presented. The greenhouse effect due to natural water vapor in the atmosphere and to the vapor produced by industrial hydrogen combustion is critically analyzed, together with problems of gaseous fuels containing hydrogen as their largest relative component. A method is developed for comparing methane and hydrogen combustion with respect to the pressure loss in the burner feed pipe. The ratios of the radiative useful heat transfer characteristics and of the convective heat transfer coefficients of the combustion gases in industrial furnaces and heat recuperators are deduced for hydrogen and methane combustion, establishing specific comparison methods. Using criterial equations specially processed for determining convective heat transfer, a generalized calculation formula is established. The proposed comparison methods are valid in general for different gaseous fuels. (author)

  2. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    Science.gov (United States)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
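The energy-balance system the radiosity method solves has the form B_i = E_i + rho_i * sum_j F_ij * B_j, where B is radiosity, E the emitted energy, rho the diffuse reflectance and F the form factors. The sketch below solves a toy two-patch enclosure by fixed-point (Jacobi-style) iteration; the geometry and values are illustrative, not from the paper.

```python
def solve_radiosity(E, rho, F, iters=100):
    """Iterate B_i = E_i + rho_i * sum_j F_ij * B_j until the values settle."""
    n = len(E)
    B = list(E)  # start from the emitted energy alone
    for _ in range(iters):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two facing patches that each see only the other (form factor 1),
# 50% diffuse reflectance, and patch 0 emitting one unit of energy.
B = solve_radiosity(E=[1.0, 0.0], rho=[0.5, 0.5], F=[[0.0, 1.0], [1.0, 0.0]])
```

The exact solution of this tiny system is B = (4/3, 2/3): the inter-reflections add a third of the emitted energy back onto the source patch.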

  3. A Comparison of underground opening support design methods in jointed rock mass

    International Nuclear Information System (INIS)

    Gharavi, M.; Shafiezadeh, N.

    2008-01-01

    It is of great importance to consider the long-term stability of the rock mass around the openings of underground structures during their design, construction and operation. In this context, three methods, namely empirical, analytical and numerical, have been applied to design and analyze the stability of underground infrastructure at the Siah Bisheh Pumping Storage Hydro-Electric Power Project in Iran. The geological and geotechnical data utilized in this article were selected based on the preliminary studies of this project. In the initial stages of design, it was recommended that two rock mass classification methods, Q and rock mass rating, be utilized for the support system of the underground cavern. Next, based on the structural instability, the support system was adjusted by the analytical method. The performance of the recommended support system was reviewed by comparing the ground response curve and the rock-support interaction with the surrounding rock mass, using FEST03 software. Moreover, for further assessment of the realistic rock mass behavior and support system, numerical modeling was performed using FEST03 software. Finally, the analytical and numerical methods were compared, obtaining satisfactory results that complement each other.

  4. Comparison between PET template-based method and MRI-based method for cortical quantification of florbetapir (AV-45) uptake in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Saint-Aubert, L.; Nemmi, F.; Peran, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Barbeau, E.J. [Universite de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France, CNRS, CerCo, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Payoux, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Medecine Nucleaire, Pole Imagerie, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Chollet, F.; Pariente, J. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France)

    2014-05-15

    Florbetapir (AV-45) has been shown to be a reliable tool for assessing in vivo amyloid load in patients with Alzheimer's disease from the early stages. However, nonspecific white matter binding has been reported in healthy subjects as well as in patients with Alzheimer's disease. To avoid this issue, cortical quantification might increase the reliability of AV-45 PET analyses. In this study, we compared two quantification methods for AV-45 binding, a classical method relying on PET template registration (route 1), and an MRI-based method (route 2) for cortical quantification. We recruited 22 patients at the prodromal stage of Alzheimer's disease and 17 matched controls. AV-45 binding was assessed using both methods, and target-to-cerebellum mean global standard uptake values (SUVr) were obtained for each of them, together with SUVr in specific regions of interest. Quantification using the two routes was compared between the clinical groups (intragroup comparison), and between groups for each route (intergroup comparison). Discriminant analysis was performed. In the intragroup comparison, differences in uptake values were observed between route 1 and route 2 in both groups. In the intergroup comparison, AV-45 uptake was higher in patients than controls in all regions of interest using both methods, but the effect size of this difference was larger using route 2. In the discriminant analysis, route 2 showed a higher specificity (94.1 % versus 70.6 %), despite a lower sensitivity (77.3 % versus 86.4 %), and D-prime values were higher for route 2. These findings suggest that, although both quantification methods enabled patients at early stages of Alzheimer's disease to be well discriminated from controls, PET template-based quantification seems adequate for clinical use, while the MRI-based cortical quantification method led to greater intergroup differences and may be more suitable for use in current clinical research. (orig.)

  5. Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.

    Science.gov (United States)

    Auerbach, Benjamin M; Ruff, Christopher B

    2004-12-01

    In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.
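Both families of estimators the abstract contrasts are linear in the skeletal measures, and the study compares them via correlations between their estimates. The sketch below shows that structure only: the coefficients are illustrative placeholders, NOT the published Ruff et al. equations, and should not be used for actual estimation.

```python
def mass_mechanical(femoral_head_mm, slope=2.3, intercept=-35.0):
    # "mechanical" style: body mass from a weight-bearing joint dimension.
    # slope/intercept are hypothetical placeholders for illustration.
    return slope * femoral_head_mm + intercept

def mass_morphometric(stature_cm, biiliac_cm, a=0.37, b=3.0, c=-82.0):
    # "morphometric" style: body mass from overall body size and shape.
    # a, b, c are hypothetical placeholders for illustration.
    return a * stature_cm + b * biiliac_cm + c

def pearson_r(xs, ys):
    """Correlation between two sets of estimates, as used in the comparison."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

In the study, the analogous correlations between femoral head and stature/bi-iliac estimates were 0.74 to 0.81 across 1,173 individuals.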

  6. A comparison of ancestral state reconstruction methods for quantitative characters.

    Science.gov (United States)

    Royer-Carenzi, Manuela; Didier, Gilles

    2016-09-07

    Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.
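The shared core of the ML, REML and GLS-under-Brownian methods that the abstract groups together can be shown in its simplest case: under Brownian motion, the estimate for the ancestor of two leaves is their inverse-variance weighted mean, with branch lengths playing the role of variances. This is also the elementary step of phylogenetic independent contrasts. A minimal sketch:

```python
def ancestral_state(x1, v1, x2, v2):
    """Estimate of the ancestor of two leaves with states x1, x2 reached
    along branches of length (variance) v1, v2, under Brownian motion."""
    w1, w2 = 1.0 / v1, 1.0 / v2
    return (w1 * x1 + w2 * x2) / (w1 + w2)

# The leaf on the shorter branch (smaller variance) pulls the estimate
# more strongly toward itself.
est = ancestral_state(0.0, 1.0, 3.0, 2.0)
```

With equal branch lengths the estimate is the plain midpoint; the trend and Ornstein-Uhlenbeck models discussed in the abstract modify this weighting, which is why detecting a trend matters for method choice.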

  7. What Is Social Comparison and How Should We Study It?

    Science.gov (United States)

    Wood, Joanne V.

    1996-01-01

    Examines frequently used measures and procedures in social comparison research. The question of whether a method truly captures social comparison requires a clear understanding of what social comparison is; hence a definition of social comparison is proposed, multiple ancillary processes in social comparison are identified, and definitional…

  8. A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes

    CSIR Research Space (South Africa)

    Haelterman, R

    2018-02-01

    Full Text Available & structures, 88/7, pp. 446–457 (2010) 8. J.E. Dennis, J.J. Moré, Quasi-Newton methods: motivation and theory. SIAM Rev. 19, pp. 46–89 (1977) A Comparison of Quasi-Newton Acceleration Methods 15 9. J.E. Dennis, R.B. Schnabel, Least Change Secant Updates... Dois Metodos de Broyden. Mat. Apl. Comput. 1/2, pp. 135–143 (1982) 25. J.M. Martinez, A quasi-Newton method with modification of one column per iteration. Computing 33, pp. 353–362 (1984) 26. J.M. Martinez, M.C. Zambaldi, An Inverse Column...

  9. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    Science.gov (United States)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the requirements for the validation of non-standard methods. This comparison was made using an experimental design technique with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method to be validated were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. The F-tests for all elements indicate that the null hypothesis (H0) was not rejected; consequently, there is no significant difference between the compared methods. Therefore, according to this study, the EDXRF method satisfies this method-comparison requirement of validation.
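The two-way layout used here, two methods measured across several elements with one value per cell, can be sketched as a randomized-block ANOVA: the F statistic for the method effect tests whether the methods agree once element-to-element differences are blocked out. The data below are made up for illustration, not the study's measurements.

```python
def two_way_anova_method_F(table):
    """F statistic for the row (method) effect in a two-way layout
    without replication: rows = methods, columns = elements."""
    r, c = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (r * c)
    row_means = [sum(row) / c for row in table]
    col_means = [sum(table[i][j] for i in range(r)) / r for j in range(c)]
    ss_rows = c * sum((m - grand) ** 2 for m in row_means)
    ss_cols = r * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in table for x in row)
    ss_resid = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (r - 1)                  # method mean square
    ms_resid = ss_resid / ((r - 1) * (c - 1))    # residual mean square
    return ms_rows / ms_resid

F = two_way_anova_method_F([[10.0, 20.0, 30.0],   # e.g. method under validation
                            [13.0, 21.0, 32.0]])  # e.g. reference method
```

The computed F is then compared against the critical F value for (1, (r-1)(c-1)) degrees of freedom; failing to exceed it means the null hypothesis of method agreement is not rejected, as in the study.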

  10. Comparison of a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography.

    Science.gov (United States)

    Stuberg, W A; Colerick, V L; Blanke, D J; Bruce, W

    1988-08-01

    The purpose of this study was to compare a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography in a gait analysis laboratory. Ten children with a diagnosis of cerebral palsy (mean age = 8.8 ± 2.7 years) and 9 healthy children (mean age = 8.9 ± 2.4 years) participated in the study. Stride length, walking velocity, and goniometric measurements of the hip, knee, and ankle were recorded using the two gait analysis methods. A multivariate analysis of variance was used to determine significant differences between the data collected using the two methods. Pearson product-moment correlation coefficients were determined to examine the relationship between the measurements recorded by the two methods. The consistency of performance of the subjects during walking was examined by intraclass correlation coefficients. No significant differences were found between the methods for the variables studied. Pearson product-moment correlation coefficients ranged from .79 to .95, and intraclass coefficients ranged from .89 to .97. The clinical gait analysis method was found to be a valid tool in comparison with 16-mm cinematography for the variables that were studied.

  11. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
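The setting can be made concrete with a small sketch: a Z-matrix has non-positive off-diagonal entries, and Gauss-Seidel is simply SOR with relaxation factor omega = 1, so the two iterations can be run from one routine and their iteration counts compared. The system below is illustrative, not from the paper.

```python
def sor(A, b, omega, tol=1e-10, max_iter=1000):
    """SOR iteration; omega = 1.0 reduces to Gauss-Seidel."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
        if max(abs(xi - oi) for xi, oi in zip(x, old)) < tol:
            return x, it
    return x, max_iter

A = [[4.0, -1.0, -1.0],
     [-1.0, 4.0, -1.0],
     [-1.0, -1.0, 4.0]]   # a diagonally dominant Z-matrix
b = [2.0, 2.0, 2.0]       # exact solution: x = (1, 1, 1)

x_gs, it_gs = sor(A, b, omega=1.0)    # Gauss-Seidel
x_sor, it_sor = sor(A, b, omega=1.2)  # over-relaxed variant
```

The paper's contribution is theoretical: comparison theorems on how preconditioning affects the convergence rates of such iterations, rather than numerical experiments like this one.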

  12. Comparison of different methods for thoron progeny measurement

    International Nuclear Information System (INIS)

    Bi Lei; Zhu Li; Shang Bing; Cui Hongxing; Zhang Qingzhao

    2009-01-01

    Four popular methods for thoron progeny measurement are discussed, covering the detector, measurement principle, preconditions, calculation, and the advantages and disadvantages of each. Comparison experiments were made in a mine and in houses with high background levels in Yunnan Province. Since indoor thoron progeny concentrations vary strongly and irregularly with time, the α track method is recommended for environmental detection and assessment in the field of radiation protection. (authors)

  13. System Accuracy Evaluation of Four Systems for Self-Monitoring of Blood Glucose Following ISO 15197 Using a Glucose Oxidase and a Hexokinase-Based Comparison Method.

    Science.gov (United States)

    Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido

    2015-04-14

    The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and in addition ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. More stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems complied with the accuracy criteria of ISO 15197:2003 with the evaluated test strip lots. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
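The per-result accuracy criteria being applied can be sketched as simple threshold checks. The limits below are the commonly cited ones: ISO 15197:2003 requires results within ±15 mg/dL of the comparison value below 75 mg/dL and within ±20% at or above it, while ISO 15197:2013 tightens this to ±15 mg/dL below 100 mg/dL and ±15% at or above; consult the standards themselves before relying on these numbers.

```python
def within_iso_2003(reference, measured):
    """Commonly cited ISO 15197:2003 per-result accuracy criterion (mg/dL)."""
    if reference < 75:
        return abs(measured - reference) <= 15
    return abs(measured - reference) <= 0.20 * reference

def within_iso_2013(reference, measured):
    """Commonly cited ISO 15197:2013 per-result accuracy criterion (mg/dL)."""
    if reference < 100:
        return abs(measured - reference) <= 15
    return abs(measured - reference) <= 0.15 * reference
```

For example, a meter reading of 240 mg/dL against a 200 mg/dL comparison value (a 20% bias) passes the 2003 criterion but fails the 2013 one, which illustrates why systems passing the older standard can fail the newer limits; both standards additionally require that a large majority of results (95%) fall within the limits.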

  14. A Study on Influencing Factors of Knowledge Management Systems Adoption: Models Comparison Approach

    OpenAIRE

    Mei-Chun Yeh; Ming-Shu Yuan

    2007-01-01

    Using the Linear Structural Relation (LISREL) model as the analysis method, and the technology acceptance model and the decomposed theory of planned behavior as its research foundation, this study approaches mainly from the angle of behavioral intention the factors influencing 421 employees' adoption of knowledge management systems, and at the same time compares the two models mentioned above. According to the research, there is no, in comparison with the technology acceptance model and deco...

  15. Comparison of two inductive learning methods: A case study in failed fuel identification

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J. [Argonne National Lab., IL (United States); Lee, J.C. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering

    1992-05-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure.
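The information-entropy measure that both ID3 and Rg use to group objects of a common class can be sketched directly: ID3 selects, at each node of its decision tree, the attribute whose split yields the greatest information gain (expected entropy reduction). Labels and attribute values below are illustrative, not the Experimental Breeder Reactor II data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a class-label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Expected entropy reduction from splitting on one attribute."""
    n = len(labels)
    groups = {}
    for lab, val in zip(labels, attribute_values):
        groups.setdefault(val, []).append(lab)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

labels = ["failed", "failed", "intact", "intact"]
gain = information_gain(labels, ["hi", "hi", "lo", "lo"])  # a perfect split
```

A perfect split recovers the full entropy of the labels (1 bit here); an uninformative attribute yields a gain of zero, so it is never chosen as a discriminator.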

  16. Comparison of two inductive learning methods: A case study in failed fuel identification

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J. (Argonne National Lab., IL (United States)); Lee, J.C. (Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering)

    1992-01-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure.

  17. Comparison of two inductive learning methods: A case study in failed fuel identification

    International Nuclear Information System (INIS)

    Reifman, J.; Lee, J.C.

    1992-01-01

    Two inductive learning methods, the ID3 and Rg algorithms, are studied as a means for systematically and automatically constructing the knowledge base of expert systems. Both inductive learning methods are general-purpose and use information entropy as a discriminatory measure in order to group objects of a common class. ID3 constructs a knowledge base by building decision trees that discriminate objects of a data set as a function of their class. Rg constructs a knowledge base by grouping objects of the same class into patterns or clusters. The two inductive methods are applied to the construction of a knowledge base for failed fuel identification in the Experimental Breeder Reactor II. Through analysis of the knowledge bases generated, the ID3 and Rg algorithms are compared for their knowledge representation, data overfitting, feature space partition, feature selection, and search procedure

  18. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)

    2015-11-01

    Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method and is utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve satisfactory accuracy.

  19. Methodology or method? A critical review of qualitative case study reports

    Directory of Open Access Journals (Sweden)

    Nerida Hyett

    2014-05-01

    Full Text Available Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n=12), social sciences and anthropology (n=7), or methods (n=15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and if study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners.

  20. Methodology or method? A critical review of qualitative case study reports

    Science.gov (United States)

    Hyett, Nerida; Kenny, Amanda; Dickson-Swift, Virginia

    2014-01-01

    Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n=12), social sciences and anthropology (n=7), or methods (n=15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and if study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners. PMID:24809980

  1. Comparison of estimation methods for fitting weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
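Maximum likelihood, the estimator the abstract found most accurate, can be sketched for the two-parameter Weibull: the shape k solves the standard MLE score equation (found here by bisection), after which the scale follows in closed form. The diameter data below are illustrative, not from the Oluwa Forest Reserve stand.

```python
import math

def weibull_mle(xs, lo=0.05, hi=50.0):
    """Maximum likelihood shape and scale of a two-parameter Weibull."""
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / len(xs)

    def score(k):
        # MLE score equation for the shape parameter:
        # sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0
        num = sum(x ** k * lx for x, lx in zip(xs, logs))
        den = sum(x ** k for x in xs)
        return num / den - 1.0 / k - mean_log

    for _ in range(200):  # bisection on the score equation
        mid = 0.5 * (lo + hi)
        if score(lo) * score(mid) <= 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    scale = (sum(x ** k for x in xs) / len(xs)) ** (1.0 / k)
    return k, scale

k, lam = weibull_mle([1.2, 2.3, 3.1, 4.8, 5.5, 2.9])
```

Unlike moment or percentile estimators, this approach uses the full likelihood of the sample, which is one reason it tends to fit diameter distributions more accurately.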

  2. Comparison of Data Fusion Methods Using Crowdsourced Data in Creating a Hybrid Forest Cover Map

    Directory of Open Access Journals (Sweden)

    Myroslava Lesiv

    2016-03-01

    Full Text Available Data fusion represents a powerful way of integrating individual sources of information to produce a better output than could be achieved by any of the individual sources on their own. This paper focuses on the data fusion of different land cover products derived from remote sensing. In the past, many different methods have been applied, without regard to their relative merit. In this study, we compared some of the most commonly used methods to develop a hybrid forest cover map by combining available land cover/forest products and crowdsourced data on forest cover obtained through the Geo-Wiki project. The methods include: nearest neighbour, naive Bayes, logistic regression and geographically-weighted logistic regression (GWR), as well as classification and regression trees (CART). We ran the comparison experiments using two data types: presence/absence of forest in a grid cell; percentage of forest cover in a grid cell. In general, there was little difference between the methods. However, GWR was found to perform better than the other tested methods in areas with high disagreement between the inputs.

  3. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
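The two methods under comparison can be sketched side by side: WSM scores an alternative as the weighted sum of its criterion values, while AHP derives the criterion weights from a pairwise-comparison matrix (here via the common geometric-mean-of-rows approximation to the principal eigenvector). All numbers are illustrative, not the simulation inputs.

```python
import math

def wsm_score(weights, values):
    """Weighted Sum Model: score of one alternative across criteria."""
    return sum(w * v for w, v in zip(weights, values))

def ahp_priorities(pairwise):
    """AHP criterion weights from a pairwise-comparison matrix,
    approximated by the normalized geometric mean of each row."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Criterion 1 judged twice as important as criterion 2:
w = ahp_priorities([[1.0, 2.0],
                    [0.5, 1.0]])
score = wsm_score(w, [10.0, 0.0])  # score one alternative with AHP weights
```

For a perfectly consistent matrix like this one, the geometric-mean approximation recovers the exact weights (2/3 and 1/3); once weights are fixed, both methods rank alternatives by a weighted aggregate, which is consistent with the study's finding that their results are very similar.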

  4. Comparison of optical methods for surface roughness characterization

    International Nuclear Information System (INIS)

    Feidenhans’l, Nikolaj A; Hansen, Poul-Erik; Madsen, Morten H; Petersen, Jan C; Pilný, Lukáš; Bissacco, Giuliano; Taboryski, Rafael

    2015-01-01

    We report a study of the correlation between three optical methods for characterizing surface roughness: a laboratory scatterometer measuring the bi-directional reflection distribution function (BRDF instrument), a simple commercial scatterometer (rBRDF instrument), and a confocal optical profiler. For each instrument, the effective range of spatial surface wavelengths is determined, along with the common bandwidth used when comparing the evaluated roughness parameters. The compared roughness parameters are: the root-mean-square (RMS) profile deviation (Rq), the RMS profile slope (Rdq), and the variance of the scattering angle distribution (Aq). The twenty-two investigated samples were manufactured with several methods in order to obtain a suitable diversity of roughness patterns. Our study shows a one-to-one correlation of both the Rq and the Rdq roughness values when obtained with the BRDF and the confocal instruments, if the common bandwidth is applied. Likewise, a correlation is observed when determining the Aq value with the BRDF and the rBRDF instruments. Furthermore, we show that it is possible to determine the Rq value from the Aq value, by applying a simple transfer function derived from the instrument comparisons. The presented method is validated for surfaces with predominantly 1D roughness, i.e. consisting of parallel grooves of various periods, and a reflectance similar to stainless steel. The Rq values are predicted with an accuracy of 38% at the 95% confidence interval. (paper)
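
    For reference, the two profile-based parameters can be computed from a sampled height profile as in the sketch below, here with a synthetic sinusoidal profile (for a sine of amplitude A, Rq = A/sqrt(2)); the sample spacing and profile are illustrative, not the paper's data.

```python
from math import sin, pi

def rq(profile):
    """RMS profile deviation Rq: root mean square of the heights
    about the mean line of the profile."""
    n = len(profile)
    mean = sum(profile) / n
    return (sum((z - mean) ** 2 for z in profile) / n) ** 0.5

def rdq(profile, dx=1.0):
    """RMS profile slope Rdq, from finite differences of the heights
    at sample spacing dx."""
    slopes = [(profile[i + 1] - profile[i]) / dx
              for i in range(len(profile) - 1)]
    return (sum(s * s for s in slopes) / len(slopes)) ** 0.5

# Synthetic profile: one period of a unit-amplitude sine, Rq -> 1/sqrt(2)
profile = [sin(2 * pi * i / 100) for i in range(100)]
print(round(rq(profile), 4))  # 0.7071
```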

  5. A BAND SELECTION METHOD FOR SUB-PIXEL TARGET DETECTION IN HYPERSPECTRAL IMAGES BASED ON LABORATORY AND FIELD REFLECTANCE SPECTRAL COMPARISON

    Directory of Open Access Journals (Sweden)

    S. Sharifi hashjin

    2016-06-01

    Full Text Available In recent years, developing target detection algorithms has received growing interest in hyperspectral images. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest for detecting bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve efficiency of the two well-known target detection methods, ACE and CEM.
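
    The band-rejection idea described above can be sketched in a few lines: keep only bands where the laboratory and field reflectance spectra of the target agree. The spectra and threshold below are invented placeholders, not values from the blind-test dataset.

```python
def select_bands(lab, field, threshold=0.1):
    """Keep band indices where the absolute difference between the
    laboratory and field reflectance spectra of the target stays below
    a threshold; large differences flag 'bad' bands to be removed."""
    return [i for i, (l, f) in enumerate(zip(lab, field)) if abs(l - f) < threshold]

lab   = [0.30, 0.32, 0.35, 0.40, 0.42]  # laboratory target spectrum
field = [0.31, 0.55, 0.34, 0.41, 0.10]  # field spectrum of the same target
print(select_bands(lab, field))  # [0, 2, 3]: bands 1 and 4 are rejected
```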

  6. A comparison of goniophotometric measurement facilities

    DEFF Research Database (Denmark)

    Thorseth, Anders; Lindén, Johannes; Dam-Hansen, Carsten

    2016-01-01

    In this paper, we present the preliminary results of a comparison between widely different goniophotometric and goniospectroradiometric measurement facilities. The objective of the comparison is to increase consistency and clarify the capabilities among Danish test laboratories. The study will seek to find the degree of equivalence between the various facilities and methods. The collected data is compared by using a three-way variation of principal component analysis, which is well suited for modelling large sets of correlated data. This method drastically decreases the number of numerical values...

  8. A method comparison of total and HMW adiponectin: HMW/total adiponectin ratio varies versus total adiponectin, independent of clinical condition

    NARCIS (Netherlands)

    van Andel, Merel; Drent, Madeleine L.; van Herwaarden, Antonius E.; Ackermans, Mariëtte T.; Heijboer, Annemieke C.

    2017-01-01

    Background: Total and high-molecular-weight (HMW) adiponectin have been associated with endocrine and cardiovascular pathology. As no gold standard is available, the discussion about biological relevance of isoforms is complicated. In our study we perform a method comparison between two commercially

  9. COMPARISON OF TWO METHODS FOR THE DETECTION OF LISTERIA MONOCYTOGENES

    Directory of Open Access Journals (Sweden)

    G. Tantillo

    2013-02-01

    Full Text Available The aim of this study was to compare the performance of the conventional methods for the detection of Listeria monocytogenes in food using the media Oxford and ALOA (Agar Listeria acc. to Ottaviani & Agosti) according to ISO 11290-1 with a new chromogenic medium, “CHROMagar Listeria”, standardized by AFNOR in 2005 (CHR-21/1-12/01). A total of 40 pre-packed ready-to-eat food samples were examined. Using the two methods, six samples were found positive for Listeria monocytogenes, but the medium “CHROMagar Listeria” was more selective in comparison with the others. In conclusion, this study has demonstrated that an isolation medium able to specifically target the detection of L. monocytogenes, such as “CHROMagar Listeria”, is highly recommendable because detection time is significantly reduced and the analysis is less expensive.

  10. Modified X-ray method of a study of duodenum

    Energy Technology Data Exchange (ETDEWEB)

    Korolyuk, I.P.; Bugakov, V.M.; Shinkin, V.M.

    A modified X-ray examination of the duodenum under hypotension conditions is described. In comparison with the existing method, this modification allows one to investigate the duodenum using double contrast: a highly concentrated barium suspension and the gas formed after a dose of gas-producing powder. 327 patients were examined by this method; 126 were diagnosed with inflammatory diseases of the stomach and duodenum, 22 with duodenal peptic ulcer, 107 with pancreatitis, 48 with cholelithiasis and 24 with a tumour of the pancreatoduodenal zone. 65 patients were operated on. Roentgenomorphologic comparisons were carried out for 66 patients suffering from inflammatory diseases of the duodenum. Duodenal visualization was good or satisfactory in 283 patients. The method may be used under any conditions, including polyclinics, due to its sparing nature.

  11. How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.

    Science.gov (United States)

    Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo

    2016-01-01

    To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
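
    The conversion of ranking data into PC data used in the study can be illustrated as follows: a ranking of k alternatives (best first) implies k(k-1)/2 pairwise outcomes, which is why ranking five health states yields more PC information (10 pairs) than ranking three (3 pairs). The alternative labels are placeholders.

```python
from itertools import combinations

def ranking_to_pc(ranking):
    """Convert a ranking (best first) into the implied set of
    paired-comparison outcomes as (winner, loser) pairs.
    combinations() preserves input order, so each emitted pair has the
    higher-ranked alternative first."""
    return [(a, b) for a, b in combinations(ranking, 2)]

print(ranking_to_pc(["A", "B", "C"]))
# [('A', 'B'), ('A', 'C'), ('B', 'C')]
print(len(ranking_to_pc(["A", "B", "C", "D", "E"])))  # 10
```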

  12. Methods for cost management during product development: A review and comparison of different literatures

    NARCIS (Netherlands)

    Wouters, M.; Morales, S.; Grollmuss, S.; Scheer, M.

    2016-01-01

    Purpose The paper provides an overview of research published in the innovation and operations management (IOM) literature on 15 methods for cost management in new product development, and it provides a comparison to an earlier review of the management accounting (MA) literature (Wouters & Morales,

  13. A comprehensive comparison of RNA-Seq-based transcriptome analysis from reads to differential gene expression and cross-comparison with microarrays: a case study in Saccharomyces cerevisiae

    DEFF Research Database (Denmark)

    Nookaew, Intawat; Papini, Marta; Pornputtapong, Natapol

    2012-01-01

    RNA-seq has recently become an attractive method of choice in the studies of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the I...... gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation, are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microarrays...

  14. Estimation of potential evapotranspiration of a coastal savannah environment; comparison of methods

    International Nuclear Information System (INIS)

    Asare, D.K.; Ayeh, E.O.; Amenorpe, G.; Banini, G.K.

    2011-01-01

    Six potential evapotranspiration (PET) models, namely Penman-Monteith, Hargreaves-Samani, Priestley-Taylor, IRMAK1, IRMAK2 and TURC, were used to estimate daily PET values at Atomic-Kwabenya in the coastal savannah environment of Ghana for the year 2005. The study compared the PET values generated by the six models and identified which ones compared favourably with the Penman-Monteith model, the recommended standard method for estimating PET. Cross-comparison analysis showed that only the daily PET estimates of the Hargreaves-Samani model correlated reasonably (r = 0.82) with estimates by the Penman-Monteith model. Additionally, PET values from the Priestley-Taylor and TURC models were highly correlated (r = 0.99), as were those generated by the IRMAK2 and TURC models (r = 0.96). Statistical analysis, based on paired comparison of means, showed that daily PET estimates of the Penman-Monteith model were not different from those of the Priestley-Taylor model for the Kwabenya-Atomic area located in the coastal savannah environment of Ghana. The Priestley-Taylor model can therefore be used, in place of the Penman-Monteith model, to estimate daily PET for the Atomic-Kwabenya area. The Hargreaves-Samani model can also be used to estimate PET for the study area because its PET estimates correlated reasonably with those of the Penman-Monteith model (r = 0.82) and it requires only air temperature measurements as inputs. (au)
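
    The Hargreaves-Samani model mentioned above needs only daily temperature extremes (plus tabulated extraterrestrial radiation), which is what makes it attractive for data-sparse sites. A sketch of the standard formulation follows; the input values are illustrative, not measurements from the study area.

```python
def hargreaves_samani(t_min, t_max, ra):
    """Daily PET (mm/day) by the standard Hargreaves-Samani formulation:
    PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin).
    t_min, t_max: daily air temperature extremes (deg C);
    ra: extraterrestrial radiation expressed as equivalent evaporation
    (mm/day)."""
    t_mean = (t_min + t_max) / 2.0
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# Illustrative values for a warm coastal site (not measured data)
print(round(hargreaves_samani(23.0, 31.0, 15.0), 2))  # 4.37 mm/day
```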

  15. Comparison of three different prehospital wrapping methods for preventing hypothermia - a crossover study in humans

    Directory of Open Access Journals (Sweden)

    Zakariassen Erik

    2011-06-01

    Full Text Available Abstract Background Accidental hypothermia increases mortality and morbidity in trauma patients. Various methods for insulating and wrapping hypothermic patients are used worldwide. The aim of this study was to compare the thermal insulating effects and comfort of bubble wrap, ambulance blankets / quilts, and Hibler's method, a low-cost method combining a plastic outer layer with an insulating layer. Methods Eight volunteers were dressed in moistened clothing, exposed to a cold and windy environment, then wrapped using one of the three different insulation methods in random order on three different days. They rested quietly on their backs for 60 minutes in a cold climatic chamber. Skin temperature, rectal temperature and oxygen consumption were measured, and metabolic heat production was calculated. A questionnaire was used for a subjective evaluation of comfort, thermal sensation, and shivering. Results Skin temperature was significantly higher 15 minutes after wrapping using Hibler's method compared with wrapping with ambulance blankets / quilts or bubble wrap. There were no differences in core temperature between the three insulating methods. The subjects reported more shivering, felt colder, were more uncomfortable, and had an increased heat production when using bubble wrap compared with the other two methods. Hibler's method was the volunteers' preferred method for preventing hypothermia. Bubble wrap was the least effective insulating method, and seemed to require significantly higher heat production to compensate for increased heat loss. Conclusions This study demonstrated that a combination of a vapour-tight layer and an additional dry insulating layer (Hibler's method) is the most efficient wrapping method to prevent heat loss, as shown by increased skin temperatures, lower metabolic rate and better thermal comfort. This should then be the method of choice when wrapping a wet patient at risk of developing hypothermia in prehospital

  16. Sampling methods in archaeomagnetic dating: A comparison using case studies from Wörterberg, Eisenerz and Gams Valley (Austria)

    Science.gov (United States)

    Trapanese, A.; Batt, C. M.; Schnepp, E.

    The aim of this research was to review the relative merits of different methods of taking samples for archaeomagnetic dating. To allow different methods to be investigated, two archaeological structures and one modern fireplace were sampled in Austria. On each structure a variety of sampling methods were used: the tube and disc techniques of Clark et al. (Clark, A.J., Tarling, D.H., Noel, M., 1988. Developments in archaeomagnetic dating in Great Britain. Journal of Archaeological Science 15, 645-667), the drill core technique, the mould plastered hand block method of Thellier, and a modification of it. All samples were oriented with a magnetic compass and sun compass, where weather conditions allowed. Approximately 12 discs, tubes, drill cores or plaster hand blocks were collected from each structure, with one mould plaster hand block being collected and cut into specimens. The natural remanent magnetisation (NRM) of the samples was measured and stepwise alternating field (AF) or thermal demagnetisation was applied. Samples were measured either in the UK or in Austria, which allowed the comparison of results between magnetometers with different sensitivity. The tubes and plastered hand block specimens showed good agreement in directional results, and the samples obtained showed good stability. The discs proved to be unreliable as both NRM and the characteristic remanent magnetisation (ChRM) distribution were very scattered. The failure of some methods may be related to the suitability of the material sampled, for example if it was disturbed before sampling, had been insufficiently heated or did not contain appropriate magnetic minerals to retain a remanent magnetisation. Caution is also recommended for laboratory procedures as the cutting of poorly consolidated specimens may disturb the material and therefore the remanent magnetisation. Criteria and guidelines were established to aid researchers in selecting the most appropriate method for a particular

  17. Activity optimization method in nuclear medicine: a comparison with roc analysis

    International Nuclear Information System (INIS)

    Perez Diaz, M.; Diaz Rizo, O.; Lopez, A.; Estevez Aparicio, E.; Roque Diaz, R.

    2006-01-01

    Full text of publication follows: A discriminant method for optimizing the administered activity is validated by comparison with ROC curves. The method is tested in 21 SPECT studies performed with a cardiac phantom. Three different cold lesions (L1, L2 and L3) are placed in the myocardium wall by pairs for each SPECT. Three activities (84 MBq, 37 MBq or 18.5 MBq) of Tc-99m diluted in water are used as background. Discriminant analysis is used to select the parameters that characterize image quality among the variables measured in the tomographic images. They are a group of lesion-to-background (L/B) and signal-to-noise (S/N) ratios. Two clusters with different image quality (p=0.021) are obtained from the measured variables. The first one contains the studies performed with 37 MBq and 84 MBq, and the second one the studies made with 18.5 MBq. The cluster classification constitutes the dependent variable in the discriminant function. The ratios B/L1, B/L2 and B/L3 are the parameters able to construct the function, with 100% of cases correctly classified into the clusters. The value of 37 MBq is the lowest tested activity that permits good results for the L/B variables, without significant differences with respect to 84 MBq (p>0.05). The result is coincident with the applied ROC analysis, in which 37 MBq permits the highest area under the curve and low false-positive and false-negative rates, with significant differences with respect to 18.5 MBq (p=0.008)

  18. Comparison of sine dwell and broadband methods for modal testing

    Science.gov (United States)

    Chen, Jay-Chung

    1989-01-01

    The objectives of modal tests for large complex spacecraft structural systems are outlined. The comparison criteria for the modal test methods, namely, the broadband excitation and the sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantage or disadvantage of each method is examined. The usefulness or shortcomings of the methods are given from a practical engineering viewpoint.

  19. Quantitative methods for reconstructing tissue biomechanical properties in optical coherence elastography: a comparison study

    International Nuclear Information System (INIS)

    Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Larin, Kirill V; Aglyamov, Salavat R; Twa, Michael D

    2015-01-01

    We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique, which allows assessment of biomechanical properties of tissues with micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of four available methods for reconstructing elasticity (Young’s modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods. (paper)
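
    For the simplest of the listed models, the shear wave equation, Young's modulus follows directly from the measured elastic-wave group velocity under the usual assumptions (homogeneous, isotropic, nearly incompressible medium). A sketch with an illustrative velocity and a tissue-like density:

```python
def youngs_modulus_swe(c, rho=1000.0):
    """Shear-wave-equation estimate of Young's modulus (Pa):
    shear modulus mu = rho * c**2 from the wave group velocity c (m/s)
    and density rho (kg/m^3); for a nearly incompressible soft medium,
    E is approximately 3 * mu."""
    return 3.0 * rho * c ** 2

print(youngs_modulus_swe(2.0))  # 12000.0 Pa, i.e. 12 kPa
```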

  20. Comparison of DNA Quantification Methods for Next Generation Sequencing.

    Science.gov (United States)

    Robin, Jérôme D; Ludlow, Andrew T; LaRanger, Ryan; Wright, Woodring E; Shay, Jerry W

    2016-04-06

    Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C; 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library's heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR; ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. DdPCR-Tail is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality.

  1. What Happened to Remote Usability Testing? An Empirical Study of Three Methods

    DEFF Research Database (Denmark)

    Stage, Jan; Andreasen, M. S.; Nielsen, H. V.

    2007-01-01

    The idea of conducting usability tests remotely emerged ten years ago. Since then, it has been studied empirically, and some software organizations employ remote methods. Yet there are still few comparisons involving more than one remote method. This paper presents results from a systematic empirical comparison of three methods for remote usability testing and a conventional laboratory-based think-aloud method. The three remote methods are a remote synchronous condition, where testing is conducted in real time but the test monitor is separated spatially from the test subjects, and two remote...

  2. Onset Detection in Surface Electromyographic Signals: A Systematic Comparison of Methods

    Directory of Open Access Journals (Sweden)

    Claus Flachenecker

    2001-06-01

    Full Text Available Various methods to determine the onset of the electromyographic activity which occurs in response to a stimulus have been discussed in the literature over the last decade. Due to the stochastic characteristic of the surface electromyogram (SEMG), onset detection is a challenging task, especially in weak SEMG responses. The performance of the onset detection methods was tested, mostly by comparing their automated onset estimations to the manually determined onsets found by well-trained SEMG examiners. But a systematic comparison between methods, which reveals the benefits and the drawbacks of each method compared to the other ones and shows the specific dependence of the detection accuracy on signal parameters, is still lacking. In this paper, several classical threshold-based approaches as well as some statistically optimized algorithms were tested on large samples of simulated SEMG data with well-known signal parameters. Rating between methods is performed by comparing their performance to that of a statistically optimal maximum likelihood estimator which serves as reference method. In addition, performance was evaluated on real SEMG data obtained in a reaction time experiment. Results indicate that detection behavior strongly depends on SEMG parameters, such as onset rise time, signal-to-noise ratio or background activity level. It is shown that some of the threshold-based signal-power-estimation procedures are very sensitive to signal parameters, whereas statistically optimized algorithms are generally more robust.
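
    A classical threshold-based detector of the kind compared in the paper can be sketched as follows; the baseline length, threshold factor and consecutive-sample criterion are illustrative choices, not the paper's settings, and the trace is a deterministic synthetic signal rather than simulated SEMG.

```python
from math import sin, pi

def detect_onset(signal, baseline_len=50, k=3.0, min_above=5):
    """Classical threshold onset detector: rectify the signal, estimate
    baseline mean and standard deviation from an initial quiet segment,
    and report the first sample where the rectified signal stays above
    mean + k*std for at least min_above consecutive samples."""
    rect = [abs(s) for s in signal]
    base = rect[:baseline_len]
    mu = sum(base) / len(base)
    sd = (sum((x - mu) ** 2 for x in base) / len(base)) ** 0.5
    thr = mu + k * sd
    run = 0
    for i, x in enumerate(rect):
        run = run + 1 if x > thr else 0
        if run >= min_above:
            return i - min_above + 1
    return None  # no onset found

# Synthetic trace: deterministic low-level "noise", then a sine burst
# starting at sample 80 (first clearly supra-threshold sample is 81)
baseline = [0.0, 0.02, -0.03, 0.01] * 20          # 80 quiet samples
burst = [0.8 * sin(2 * pi * i / 20) for i in range(40)]
sig = baseline + burst
print(detect_onset(sig))  # 81
```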

  3. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of the method for linearizing exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power-series expansion.

  4. Investigation of Industrialised Building System Performance in Comparison to Conventional Construction Method

    Directory of Open Access Journals (Sweden)

    Othuman Mydin M.A.

    2014-03-01

    Full Text Available Conventional construction methods are still widely practised, although many studies have indicated that this method is less effective compared to the IBS construction method. The existence of the IBS has added to the many techniques within the construction industry. This study is aimed at making comparisons between the two construction approaches. Case studies were conducted at four sites in the state of Penang, Malaysia. Two projects were IBS-based while the remaining two deployed the conventional method of construction. Based on an analysis of the results, it can be concluded that the IBS approach has more to offer compared to the conventional method. Among these advantages are shorter construction periods, reduced overall costs, lower labour needs, better site conditions and the production of higher quality components.

  6. Structural pattern recognition methods based on string comparison for fusion databases

    Energy Technology Data Exchange (ETDEWEB)

    Dormido-Canto, S. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain)], E-mail: sebas@dia.uned.es; Farias, G.; Dormido, R. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain); Sanchez, J.; Duro, N.; Vargas, H. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Ratta, G.; Pereira, A.; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain)

    2008-04-15

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed.
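
    The primitive-based string comparison can be illustrated with a deliberately simplified scheme: quantize the slope between consecutive samples into a small alphabet of primitives, then compare the resulting strings with an edit distance (the article's actual primitive-extraction methods are more elaborate; the waveforms below are toy data).

```python
def to_primitives(signal, delta=0.1):
    """Encode a waveform as a string of primitives by quantising the
    slope between consecutive samples: 'u' up, 'd' down, 'c' constant."""
    out = []
    for a, b in zip(signal, signal[1:]):
        diff = b - a
        out.append('u' if diff > delta else 'd' if diff < -delta else 'c')
    return ''.join(out)

def edit_distance(s, t):
    """Levenshtein distance between two primitive strings,
    computed row by row with dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

w1 = [0, 1, 2, 2, 1, 0]
w2 = [0, 1, 2, 2, 2, 1]
print(to_primitives(w1), to_primitives(w2))  # uucdd uuccd
print(edit_distance(to_primitives(w1), to_primitives(w2)))  # 1
```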

  7. A comparison of different interpolation methods for wind data in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2017-04-01

    For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high resolution data are not available. This is particularly true for many regions in Central Asia, where the network of climatological stations is often sparse. Due to this insufficient database, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial data base for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments for the particular region. The study presented here shows a comparison of different interpolation methods for the wind components u and v for a region in Central Asia with a pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method which can equally be applied for all pressure levels or if different interpolation methods have to be applied for each pressure level. The European reanalysis data Era-Interim for the years 1989 - 2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. In order to improve the input data, two different interpolation procedures were applied: on the one hand, pure interpolation methods such as inverse distance weighting and ordinary kriging; on the other hand, machine learning algorithms, generalized additive models and regression kriging, considering additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machine, neural networks and ordinary kriging. Inverse distance weighting showed the worst
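
    Of the pure interpolation methods compared, inverse distance weighting is the simplest to state; a minimal sketch with invented station coordinates and values for one wind component:

```python
def idw(stations, query, power=2.0):
    """Inverse distance weighting: estimate the value at `query`
    as a distance-weighted mean of station observations.
    stations: list of ((x, y), value) tuples."""
    num = den = 0.0
    for (x, y), v in stations:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return v  # query coincides with a station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

# u wind component (m/s) at three hypothetical stations
stations = [((0.0, 0.0), 2.0), ((1.0, 0.0), 4.0), ((0.0, 1.0), 6.0)]
print(idw(stations, (0.5, 0.5)))  # 4.0: query equidistant from all three
```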

  8. A new ART iterative method and a comparison of performance among various ART methods

    International Nuclear Information System (INIS)

    Tan, Yufeng; Sato, Shunsuke

    1993-01-01

    Many algebraic reconstruction technique (ART) image reconstruction algorithms, for instance, the simultaneous iterative reconstruction technique (SIRT), the relaxation method and multiplicative ART (MART), have been proposed and their convergence properties have been studied. SIRT and the underrelaxed relaxation method converge to the least-squares solution, but their convergence is very slow. The Kaczmarz method converges very quickly, but the reconstructed images contain a lot of noise. Comparative studies between these algorithms have been done by Gilbert and others, but are not adequate. In this paper, we (1) propose a new method, a modified Kaczmarz method, and prove its convergence property, and (2) study the performance of 7 algorithms, including the one proposed here, by computer simulation for 3 kinds of typical phantoms. The method proposed here does not give the least-squares solution, but the root mean square errors of its reconstructed images decrease very quickly after a few iterations. The result shows that the method proposed here gives a better reconstructed image. (author)
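
    The classical Kaczmarz method discussed above cyclically projects the current estimate onto the hyperplane of each ray equation a_i . x = b_i. A minimal sketch on a toy 2x2 system (not an image-reconstruction problem, and not the authors' modified variant):

```python
def kaczmarz(A, b, sweeps=50, relax=1.0):
    """Kaczmarz ART: starting from zero, cyclically project the estimate
    onto the hyperplane of each equation a_i . x = b_i, optionally with
    a relaxation factor."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            dot = sum(a * v for a, v in zip(ai, x))
            norm2 = sum(a * a for a in ai)
            c = relax * (bi - dot) / norm2
            x = [v + c * a for v, a in zip(x, ai)]
    return x

# Consistent 2x2 system with solution (1, 2)
A = [[1.0, 0.0], [1.0, 1.0]]
b = [1.0, 3.0]
print(kaczmarz(A, b))  # converges toward [1.0, 2.0]
```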

  9. Mapping biomass with remote sensing: a comparison of methods for the case study of Uganda

    Directory of Open Access Journals (Sweden)

    Henry Matieu

    2011-10-01

    Full Text Available Abstract Background Assessing biomass is gaining increasing interest, mainly for bioenergy, climate change research and mitigation activities such as reducing emissions from deforestation and forest degradation and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries (REDD+). In response to these needs, a number of biomass/carbon maps have been produced recently using different approaches, but the lack of comparable reference data limits their proper validation. The objectives of this study are to compare the available maps for Uganda and to understand the sources of variability in the estimation. Uganda was chosen as a case study because it presents a reliable national biomass reference dataset. Results The comparison of the biomass/carbon maps shows strong disagreement between the products, with estimates of the total aboveground biomass of Uganda ranging from 343 to 2201 Tg and different spatial distribution patterns. Compared to the reference map based on country-specific field data and a national Land Cover (LC) dataset (estimating 468 Tg), maps based on biome-average biomass values, such as the Intergovernmental Panel on Climate Change (IPCC) default values, and global LC datasets tend to strongly overestimate the biomass availability of Uganda (ranging from 578 to 2201 Tg), while maps based on satellite data and regression models provide conservative estimates (ranging from 343 to 443 Tg). The comparison of the map predictions with field data, upscaled to map resolution using LC data, is in accordance with the above findings. This study also demonstrates that the biomass estimates are primarily driven by the biomass reference data, while the type of spatial maps used for their stratification has a smaller, but not negligible, impact. The differences in format, resolution and biomass definition used by the maps, as well as the fact that some datasets are not independent from the

  10. A comparison of five methods of measuring mammographic density: a case-control study.

    Science.gov (United States)

    Astley, Susan M; Harkness, Elaine F; Sergeant, Jamie C; Warwick, Jane; Stavrinos, Paula; Warren, Ruth; Wilson, Mary; Beetles, Ursula; Gadde, Soujanya; Lim, Yit; Jain, Anil; Bundred, Sara; Barr, Nicola; Reece, Valerie; Brentnall, Adam R; Cuzick, Jack; Howell, Tony; Evans, D Gareth

    2018-02-05

    High mammographic density is associated with both risk of cancers being missed at mammography, and increased risk of developing breast cancer. Stratification of breast cancer prevention and screening requires mammographic density measures predictive of cancer. This study compares five mammographic density measures to determine the association with subsequent diagnosis of breast cancer and the presence of breast cancer at screening. Women participating in the "Predicting Risk Of Cancer At Screening" (PROCAS) study, a study of cancer risk, completed questionnaires to provide personal information to enable computation of the Tyrer-Cuzick risk score. Mammographic density was assessed by visual analogue scale (VAS), thresholding (Cumulus) and fully-automated methods (Densitas, Quantra, Volpara) in contralateral breasts of 366 women with unilateral breast cancer (cases) detected at screening on entry to the study (Cumulus 311/366) and in 338 women with cancer detected subsequently. Three controls per case were matched using age, body mass index category, hormone replacement therapy use and menopausal status. Odds ratios (OR) between the highest and lowest quintile, based on the density distribution in controls, for each density measure were estimated by conditional logistic regression, adjusting for classic risk factors. The strongest predictor of screen-detected cancer at study entry was VAS, OR 4.37 (95% CI 2.72-7.03) in the highest vs lowest quintile of percent density after adjustment for classical risk factors. Volpara, Densitas and Cumulus gave ORs for the highest vs lowest quintile of 2.42 (95% CI 1.56-3.78), 2.17 (95% CI 1.41-3.33) and 2.12 (95% CI 1.30-3.45), respectively. Quantra was not significantly associated with breast cancer (OR 1.02, 95% CI 0.67-1.54). Similar results were found for subsequent cancers, with ORs of 4.48 (95% CI 2.79-7.18), 2.87 (95% CI 1.77-4.64) and 2.34 (95% CI 1.50-3.68) in highest vs lowest quintiles of VAS, Volpara and Densitas

  11. A comparison of two methods of measuring static coefficient of friction at low normal forces: a pilot study.

    Science.gov (United States)

    Seo, Na Jin; Armstrong, Thomas J; Drinkaus, Philip

    2009-01-01

    This study compares two methods for estimating static friction coefficients for skin. In the first method, referred to as the 'tilt method', a hand supporting a flat object is tilted until the object slides. The friction coefficient is estimated as the tangent of the angle of the object at the slip. The second method estimates the friction coefficient as the pull force required to begin moving a flat object over the surface of the hand, divided by object weight. Both methods were used to estimate friction coefficients for 12 subjects and three materials (cardboard, aluminium, rubber) against a flat hand and against fingertips. No differences in static friction coefficients were found between the two methods, except for that of rubber, where friction coefficient was 11% greater for the tilt method. As with previous studies, the friction coefficients varied with contact force and contact area. Static friction coefficient data are needed for analysis and design of objects that are grasped or manipulated with the hand. The tilt method described in this study can easily be used by ergonomic practitioners to estimate static friction coefficients in the field in a timely manner.
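
    The two estimators compared in the study reduce to one-line formulas: the tangent of the slip angle for the tilt method, and pull force over weight for the pull method. A minimal sketch; the example readings are hypothetical:

```python
import math

def mu_tilt(angle_deg):
    """Static friction coefficient from the tilt method: tangent of the
    angle at which the object begins to slide."""
    return math.tan(math.radians(angle_deg))

def mu_pull(pull_force_n, weight_n):
    """Static friction coefficient from the pull method: pull force at
    slip divided by object weight."""
    return pull_force_n / weight_n

# Hypothetical readings: slip at 35 degrees, or a 3.5 N pull on a 5 N object
print(round(mu_tilt(35.0), 3))  # 0.7
print(mu_pull(3.5, 5.0))        # 0.7
```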

  12. Specific activity measurement of 64Cu: A comparison of methods

    International Nuclear Information System (INIS)

    Mastren, Tara; Guthrie, James; Eisenbeis, Paul; Voller, Tom; Mebrahtu, Efrem; Robertson, J. David; Lapi, Suzanne E.

    2014-01-01

    Effective specific activity of 64Cu (amount of radioactivity per µmol metal) is important in order to determine the purity of a particular 64Cu lot and to assist in optimization of the purification process. Metal impurities can affect effective specific activity and therefore it is important to have a simple method that can measure trace amounts of metals. This work shows that ion chromatography (IC) yields similar results to ICP mass spectrometry for copper, nickel and iron contaminants in 64Cu production solutions. - Highlights: • Comparison of TETA titration, ICP mass spectrometry, and ion chromatography to measure specific activity. • Validates ion chromatography by using ICP mass spectrometry as the “gold standard”. • Shows different types and amounts of metal impurities present in 64Cu

  13. Numerical simulation and comparison of two ventilation methods for a restaurant - displacement vs mixed flow ventilation

    Science.gov (United States)

    Chitaru, George; Berville, Charles; Dogeanu, Angel

    2018-02-01

    This paper presents a comparison between a displacement ventilation method and a mixed-flow ventilation method using a computational fluid dynamics (CFD) approach. The paper analyses different aspects of the two systems, such as the draft effect in certain areas and the air temperature and velocity distribution in the occupied zone. The results highlight that the displacement ventilation system presents an advantage for the current scenario, due to the increased buoyancy-driven flows caused by the interior heat sources. For the displacement ventilation case the draft effect was less prone to appear in the occupied zone, but the high heat emissions from the interior sources increased the temperature gradient in the occupied zone. Both systems were studied in similar conditions, concentrating only on the flow patterns for each case.

  14. Method for a national tariff comparison for natural gas, electricity and heat. Set-up and presentation

    International Nuclear Information System (INIS)

    1998-05-01

    Several groups (within distribution companies and outside those companies) need information and data on energy tariffs. It is the opinion of the ad-hoc working group that a comparison of tariffs on the basis of standard cases is the most practical method to meet the information demand of all the parties involved. Those standard cases are formulated and presented for prices of electricity, natural gas and heat, including the applied consumption parameters. Such a tariff comparison must be made periodically.

  15. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamma-ray attenuation by the patient's tissues. A method of attenuation correction, which uses a single posteriorly located scintillation camera and correction factors derived from a lateral image of the stomach, was compared with a two-camera geometric mean method, in phantom studies and in five volunteer subjects. A meal of 100 g of ground beef containing 99mTc-labelled chicken liver, and 150 ml of water, was used in the in vivo studies. In all subjects the geometric mean data showed that solid food emptied in two phases: an initial lag period, followed by a linear emptying phase. Using the geometric mean data as a standard, the anterior camera overestimated the 50% emptying time (T50) by an average of 15% (range 5-18) and the posterior camera underestimated this parameter by 15% (4-22). The posterior data, corrected for attenuation using the lateral image method, underestimated the T50 by 2% (-7 to +7). The difference in the distances of the proximal and distal stomach from the posterior detector was large in all subjects (mean 5.7 cm, range 3.9-7.4).
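
    The geometric mean correction used as the reference standard combines opposed anterior and posterior counts so that the depth dependence of attenuation largely cancels. A minimal sketch with hypothetical decay-corrected counts:

```python
import math

def geometric_mean_counts(anterior, posterior):
    """Depth-independent activity estimate from opposed detectors:
    the geometric mean of anterior and posterior counts at each time point."""
    return [math.sqrt(a * p) for a, p in zip(anterior, posterior)]

# Hypothetical counts from a gastric region of interest at three time points
ant = [900.0, 700.0, 500.0]
post = [1600.0, 1200.0, 800.0]
print(geometric_mean_counts(ant, post))
```

Because attenuation multiplies the anterior counts by roughly exp(-mu*d) and the posterior counts by exp(-mu*(D-d)) for source depth d and body thickness D, their geometric mean depends on D but not on d, which is the point of the method.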

  16. Comparison of Satellite Surveying to Traditional Surveying Methods for the Resources Industry

    Science.gov (United States)

    Osborne, B. P.; Osborne, V. J.; Kruger, M. L.

    Modern ground-based survey methods involve detailed surveying, which provides three-space co-ordinates for surveyed points to a high level of accuracy. The instruments are operated by surveyors, who process the raw results to create survey location maps for the subject of the survey. Such surveys are conducted for a location or region and referenced to the earth's global co-ordinate system with global positioning system (GPS) positioning. Due to this referencing, the survey is only as accurate as the GPS reference system. Satellite surveying by remote sensing utilises satellite imagery that has been processed using commercial geographic information system software. Three-space co-ordinate maps are generated, with an accuracy determined by the datum position accuracy and the optical resolution of the satellite platform. This paper presents a case study which compares topographic surveying undertaken by traditional survey methods with satellite surveying for the same location. The purpose of this study is to assess the viability of satellite remote sensing for surveying in the resources industry. The case study involves a topographic survey of a dune field for a prospective mining project area in Pakistan. This site has been surveyed using modern surveying techniques and the results are compared to a satellite survey performed on the same area. Analysis of the results from the traditional survey and from the satellite survey involved a comparison of the derived spatial co-ordinates from each method. In addition, comparisons have been made of costs and turnaround time for both methods. The results of this application of remote sensing are of particular interest for surveying in areas with remote and extreme environments, weather extremes, political unrest, or poor travel links, which are commonly associated with mining projects. Such areas frequently suffer language barriers, poor onsite technical support and limited resources.

  17. ABCD Matrix Method a Case Study

    CERN Document Server

    Seidov, Zakir F; Yahalom, Asher

    2004-01-01

    In the Israeli Electrostatic Accelerator FEL, the distance between the accelerator's end and the wiggler's entrance is about 2.1 m, and the 1.4 MeV electron beam is transported through this space using four similar quadrupoles (a FODO channel). The transfer matrix method (ABCD matrix method) was used to simulate the beam transport; a set of programs was written in several programming languages (MATHEMATICA, MATLAB, MATHCAD, MAPLE) and reasonable agreement is demonstrated between experimental results and simulations. Comparison of the ABCD matrix method with direct "numerical experiments" using the EGUN, ELOP and GPT programs, with and without taking into account space-charge effects, showed the agreement to be good enough as well. Also the inverse problem of finding the emittance of the electron beam at the S1 screen position (before the FODO channel), by using the spot image at the S2 screen position (after the FODO channel) as a function of quad currents, is considered. Spot and beam at both screens are described as tilted ell...

  18. Comparison of soil CO2 fluxes by eddy-covariance and chamber methods in fallow periods of a corn-soybean rotation

    Science.gov (United States)

    Soil carbon dioxide (CO2) fluxes are typically measured by eddy-covariance (EC) or chamber (Ch) methods, but a long-term comparison has not been undertaken. This study was conducted to assess the agreement between EC and Ch techniques when measuring CO2 flux during fallow periods of a corn-soybean r...

  19. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
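
    The kind of computer-simulated experiment described above can be reproduced in outline: draw two samples from the same population, apply a pooled two-sample t-test, and count false rejections (the Type I error). This is a generic sketch, not the authors' code; the critical value 2.120 is the standard two-sided 5% value for 16 degrees of freedom (n = 9 per group):

```python
import random
import statistics

def t_statistic(x, y):
    """Pooled two-sample t statistic (equal variances assumed)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

def type_i_error(n, t_crit, trials=20000, seed=1):
    """Fraction of trials in which two samples from the SAME normal
    population are declared different at the given critical value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if abs(t_statistic(x, y)) > t_crit:
            hits += 1
    return hits / trials

print(type_i_error(9, 2.120))  # ≈ 0.05 for normal data
```

Repeating the simulation with a mean shift added to one group would estimate the Type II error for a given effect intensity, which is the other axis of the paper's analysis.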

  20. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    NARCIS (Netherlands)

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, J.G.P.W.; Camps-Valls, Gustau; Moreno, José

    2015-01-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC),

  1. Comparison of the xenon-133 washout method with the microsphere method in dog brain perfusion studies

    International Nuclear Information System (INIS)

    Heikkitae, J.; Kettunen, R.; Ahonen, A.

    1982-01-01

    The validity of the xenon-washout method for estimating regional cerebral blood flow was tested against a radioactive microsphere method in anaesthetized dogs. The two-compartment model seemed not to be well suited for cerebral perfusion studies by the Xe-washout method, although bi-exponential analysis of the washout curves gave perfusion values that correlated with the microsphere method but depended on the calculation method

  2. Comparison of three noninvasive methods for hemoglobin screening of blood donors.

    Science.gov (United States)

    Ardin, Sergey; Störmer, Melanie; Radojska, Stela; Oustianskaia, Larissa; Hahn, Moritz; Gathof, Birgit S

    2015-02-01

    To prevent phlebotomy of anemic individuals and to ensure the hemoglobin (Hb) content of the blood units, Hb screening of blood donors before donation is essential. Hb values are mostly evaluated by measurement of capillary blood obtained from a fingerstick. Rapid noninvasive methods have recently become available and may be preferred by donors and staff. The aim of this study was to evaluate, for the first time, all the different noninvasive methods for Hb screening. Blood donors were screened for Hb levels in three different trials using three different noninvasive methods (Haemospect [MBR Optical Systems GmbH & Co. KG], NBM 200 [LMB Technology GmbH], Pronto-7 [Masimo Europe Ltd]) in comparison to the established fingerstick method (CompoLab Hb [Fresenius Kabi GmbH]) and to levels obtained from venous samples on a cell counter (Sysmex [Sysmex Europe GmbH]) as reference. The usability of the noninvasive methods was assessed with a specially developed survey. Technical failures occurred when using the Pronto-7 due to nail polish, skin color, or ambient light. The NBM 200 also showed a high sensitivity to ambient light and noticeably lower Hb levels for women than those obtained from the Sysmex. The statistical analysis showed the following bias and standard deviation of differences for all methods in comparison to the venous results: Haemospect, -0.22 ± 1.24; NBM 200, -0.12 ± 1.14; Pronto-7, -0.50 ± 0.99; and CompoLab Hb, -0.53 ± 0.81. Noninvasive Hb tests represent an attractive alternative by eliminating pain and reducing the risk of blood contamination. The main problem for generating reliable results seems to be preanalytical variability in sampling. Despite the sensitivity to environmental stress, all methods are suitable for Hb measurement. © 2014 AABB.

  3. A comparison of land use change accounting methods: seeking common grounds for key modeling choices in biofuel assessments

    DEFF Research Database (Denmark)

    de Bikuna Salinas, Koldo Saez; Hamelin, Lorie; Hauschild, Michael Zwicky

    2018-01-01

    Five currently used methods to account for the global warming (GW) impact of the induced land-use change (LUC) greenhouse gas (GHG) emissions have been applied to four biofuel case studies. Two of the investigated methods attempt to avoid the need of considering a definite occupation -thus...... amortization period by considering ongoing LUC trends as a dynamic baseline. This leads to the accounting of a small fraction (0.8%) of the related emissions from the assessed LUC, thus their validity is disputed. The comparison of methods and contrasting case studies illustrated the need of clearly...... distinguishing between the different time horizons involved in life cycle assessments (LCA) of land-demanding products like biofuels. Absent in ISO standards, and giving rise to several confusions, definitions for the following time horizons have been proposed: technological scope, inventory model, impact...

  4. Comparison of Video Steganography Methods for Watermark Embedding

    Directory of Open Access Journals (Sweden)

    Griberman David

    2016-05-01

    Full Text Available The paper focuses on the comparison of video steganography methods for the purpose of digital watermarking in the context of copyright protection. Four embedding methods that use Discrete Cosine and Discrete Wavelet Transforms have been researched and compared based on their embedding efficiency and fidelity. A video steganography program has been developed in the Java programming language with all of the researched methods implemented for experiments. The experiments used 3 video containers with different amounts of movement. The impact of the movement has been addressed in the paper as well as the ways of potential improvement of embedding efficiency using adaptive embedding based on the movement amount. Results of the research have been verified using a survey with 17 participants.
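
    As a rough illustration of transform-domain embedding of the kind compared in the paper: the sketch below embeds one watermark bit in a DCT coefficient of an 8-sample block via quantisation index modulation. This is a generic technique chosen for illustration, not necessarily one of the four methods the paper studied, and the paper's own program was written in Java rather than Python:

```python
import math

def dct(x):
    """DCT-II of a 1-D signal (unnormalised)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse of dct() above (DCT-III with the matching scaling)."""
    N = len(X)
    return [X[0] / N + 2.0 / N * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                     for k in range(1, N))
            for n in range(N)]

def embed_bit(block, bit, coeff=3, q=8.0):
    """Quantisation index modulation: snap one mid-band DCT coefficient to
    an even (bit 0) or odd (bit 1) multiple of the step q."""
    X = dct(block)
    X[coeff] = q * (2.0 * round((X[coeff] / q - bit) / 2.0) + bit)
    return idct(X)

def extract_bit(block, coeff=3, q=8.0):
    return round(dct(block)[coeff] / q) % 2

# One row of hypothetical luminance values
pixels = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]
marked = embed_bit(pixels, 1)
print(extract_bit(marked))  # 1
```

The step size q trades fidelity against robustness, which is exactly the embedding-efficiency/fidelity trade-off the paper measures.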

  5. Comparison of radioimmunology and serology methods for LH determination

    International Nuclear Information System (INIS)

    Szymanski, W.; Jakowicki, J.

    1976-01-01

    A comparison is presented of LH determinations by immunoassay and radioimmunoassay using the 125I-labelled HCG double-antibody system. The results obtained by the two methods agree well, and for normal and elevated LH levels the serological determinations may be used for estimating the concentration levels found by RIA. (L.O.)

  6. Comparison of three flaw-location methods for automated ultrasonic testing

    International Nuclear Information System (INIS)

    Seiger, H.

    1982-01-01

    Two well-known methods for locating flaws by measurement of the transit time of ultrasonic pulses are examined theoretically. It is shown that neither is sufficiently reliable for use in automated ultrasonic testing. A third method, which takes into account the shape of the sound field from the probe and the uncertainty in measurement of probe-flaw distance and probe position, is introduced. An experimental comparison of the three methods indicates that use of the advanced method results in more accurate location of flaws. (author)

  7. Comparison of different methods for work accidents investigation in hospitals: A Portuguese case study.

    Science.gov (United States)

    Nunes, Cláudia; Santos, Joana; da Silva, Manuela Vieira; Lourenço, Irina; Carvalhais, Carlos

    2015-01-01

    The hospital environment has many occupational health risks that predispose healthcare workers to various kinds of work accidents. This study aims to compare different methods for work accidents investigation and to verify their suitability in hospital environment. For this purpose, we selected three types of accidents that were related with needle stick, worker fall and inadequate effort/movement during the mobilization of patients. A total of thirty accidents were analysed with six different work accidents investigation methods. The results showed that organizational factors were the group of causes which had the greatest impact in the three types of work accidents. The methods selected to be compared in this paper are applicable and appropriate for the work accidents investigation in hospitals. However, the Registration, Research and Analysis of Work Accidents method (RIAAT) showed to be an optimal technique to use in this context.

  8. Comparison of vibrational conductivity and radiative energy transfer methods

    Science.gov (United States)

    Le Bot, A.

    2005-05-01

    This paper is concerned with the comparison of two methods well suited for the prediction of the wideband response of built-up structures subjected to high-frequency vibrational excitation. The first method is sometimes called the vibrational conductivity method and the second one is rather known as the radiosity method in the field of acoustics, or the radiative energy transfer method. Both are based on quite similar physical assumptions i.e. uncorrelated sources, mean response and high-frequency excitation. Both are based on analogies with some equations encountered in the field of heat transfer. However these models do not lead to similar results. This paper compares the two methods. Some numerical simulations on a pair of plates joined along one edge are provided to illustrate the discussion.

  9. Comparison of some effects of modification of a polylactide surface layer by chemical, plasma, and laser methods

    Science.gov (United States)

    Moraczewski, Krzysztof; Rytlewski, Piotr; Malinowski, Rafał; Żenkiewicz, Marian

    2015-08-01

    The article presents the results of studies and a comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in an air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with a fluence of 60 mJ/cm2 was applied. Polylactide samples were examined using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of the surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of the chemical composition of the PLA surface layer were performed as well. Based on these examinations it was found that the best modification results are obtained using the plasma method.

  10. cp-R, an interface to the R programming language for clinical laboratory method comparisons.

    Science.gov (United States)

    Holmes, Daniel T

    2015-02-01

    Clinical scientists frequently need to compare two different bioanalytical methods as part of assay validation/monitoring. As a matter of necessity, regression methods for quantitative comparison in clinical chemistry, hematology and other clinical laboratory disciplines must allow for error in both the x and y variables. Traditionally the methods popularized by 1) Deming and 2) Passing and Bablok have been recommended. While commercial tools exist, no simple open-source tool is available. The purpose of this work was to develop an entirely open-source, GUI-driven program for bioanalytical method comparisons capable of performing these regression methods and able to produce highly customized graphical output. The GUI is written in Python and PyQt4, with R scripts performing the regression and graphical functions. The program can be run from source code or as a pre-compiled binary executable. The software performs three forms of regression and offers weighting where applicable. Confidence bands of the regression are calculated using bootstrapping for the Deming and Passing-Bablok methods. Users can customize regression plots according to the tools available in R and can produce output in any of jpg, png, tiff or bmp format at any desired resolution, or in ps and pdf vector formats. Bland-Altman plots and some regression diagnostic plots are also generated. Correctness of the regression parameter estimates was confirmed against existing R packages. The program allows for rapid and highly customizable graphical output capable of conforming to the publication requirements of any clinical chemistry journal. Quick method comparisons can also be performed and the results cut and pasted into spreadsheet or word processing applications. We present a simple and intuitive open-source tool for quantitative method comparison in a clinical laboratory environment. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
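
    Of the regression methods named above, Deming regression has a closed-form solution that is easy to sketch. This is a generic illustration, not cp-R's implementation; the paired assay values and the error-variance ratio are invented for the example:

```python
import statistics

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept; `lam` is the ratio of the
    y- to x-measurement error variances (1.0 gives orthogonal regression)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    d = syy - lam * sxx
    slope = (d + (d * d + 4.0 * lam * sxy * sxy) ** 0.5) / (2.0 * sxy)
    return slope, my - slope * mx

# Hypothetical paired results from two assays measuring the same analyte
x = [1.0, 2.1, 2.9, 4.2, 5.0, 6.1]
y = [1.2, 2.0, 3.1, 4.0, 5.2, 6.0]
slope, intercept = deming(x, y)
print(round(slope, 3), round(intercept, 3))
```

Unlike ordinary least squares, the fitted slope is unchanged when x and y swap roles (after inverting), which is why errors-in-both-variables methods are preferred for method comparison.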

  11. Variation in Results of Volume Measurements of Stumps of Lower-Limb Amputees : A Comparison of 4 Methods

    NARCIS (Netherlands)

    de Boer-Wilzing, Vera G.; Bolt, Arjen; Geertzen, Jan H.; Emmelot, Cornelis H.; Baars, Erwin C.; Dijkstra, Pieter U.

    de Boer-Wilzing VG, Bolt A, Geertzen JH, Emmelot CH, Baars EC, Dijkstra PU. Variation in results of volume measurements of stumps of lower-limb amputees: a comparison of 4 methods. Arch Phys Med Rehabil 2011;92:941-6. Objective: To analyze the reliability of 4 methods (water immersion,

  12. Isolating DNA from sexual assault cases: a comparison of standard methods with a nuclease-based approach

    Science.gov (United States)

    2012-01-01

    Background Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. Large amounts of the victim’s epithelial cells contaminate the sperm present on swabs, however, and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim’s DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim’s fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim’s fraction, and then digest the residual victim’s DNA with a nuclease. Methods The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and evidence collected from sexual assault cases. Results For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern of Erase providing superior profiles. Conclusions In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods. PMID:23211019

  13. A comparison of radiological risk assessment methods for environmental restoration

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Peterson, J.M.

    1993-01-01

    Evaluation of risks to human health from exposure to ionizing radiation at radioactively contaminated sites is an integral part of the decision-making process for determining the need for remediation and selecting remedial actions that may be required. At sites regulated under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), a target risk range of 10⁻⁴ to 10⁻⁶ incremental cancer incidence over a lifetime is specified by the US Environmental Protection Agency (EPA) as generally acceptable, based on the reasonable maximum exposure to any individual under current and future land use scenarios. Two primary methods currently being used in conducting radiological risk assessments at CERCLA sites are compared in this analysis. Under the first method, the radiation dose equivalent (i.e., Sv or rem) to the receptors of interest over the appropriate period of exposure is estimated and multiplied by a risk factor (cancer risk/Sv). Alternatively, incremental cancer risk can be estimated by combining the EPA's cancer slope factors (previously termed potency factors) for radionuclides with estimates of radionuclide intake by ingestion and inhalation, as well as radionuclide concentrations in soil that contribute to external dose. The comparison of the two methods has demonstrated that resulting estimates of lifetime incremental cancer risk under these different methods may differ significantly, even when all other exposure assumptions are held constant, with the magnitude of the discrepancy depending upon the dominant radionuclides and exposure pathways for the site. The basis for these discrepancies, the advantages and disadvantages of each method, and the significance of the discrepant results for environmental restoration decisions are presented
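
    The two risk-assessment methods reduce to simple arithmetic. The sketch below uses illustrative placeholder numbers only; none of the risk factors, slope factors or intakes are actual regulatory values:

```python
# Method 1: dose equivalent (Sv) times a lifetime risk factor (per Sv).
# Method 2: per-pathway intakes times slope factors (risk per unit intake).
# All numeric values below are hypothetical placeholders.

def risk_from_dose(dose_sv, risk_per_sv=5.0e-2):
    return dose_sv * risk_per_sv

def risk_from_slope_factors(intakes, slope_factors):
    """Sum risk over pathways: intake (e.g. Bq) x slope factor (risk/Bq)."""
    return sum(intakes[p] * slope_factors[p] for p in intakes)

print(risk_from_dose(2.0e-3))  # 2 mSv lifetime dose -> 1e-4 risk
intakes = {"ingestion": 1.0e4, "inhalation": 2.0e3}   # Bq (hypothetical)
slopes = {"ingestion": 3.0e-9, "inhalation": 1.0e-8}  # risk/Bq (hypothetical)
print(risk_from_slope_factors(intakes, slopes))
```

The abstract's point is that the two routes can land on different sides of the 10⁻⁴ to 10⁻⁶ target range even for identical exposure assumptions, because the dose-to-risk factor and the radionuclide-specific slope factors embed different dosimetric assumptions.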

  14. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Full Text Available Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land use change, and climate shifts. A plethora of methods is available describing the hydraulic geometry of alluvial rivers in regime. However, a comparison of these methods using the same set of data seems to be lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar's method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and 'ME and MEDR' give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses lacking bank stability criteria are inappropriate for use is shown to be false, as both MFN and 'ME and MEDR' lack bank stability criteria. We also find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and 'ME and MEDR'.

  15. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Full Text Available Methods for geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test-fields with a well-known pattern, which are observed from different directions. The parameters of a distortion model are calculated using bundle-block-adjustment algorithms. This method works well for short focal lengths, but is considerably more problematic for large focal lengths, which would require very large test-fields and surrounding space. To overcome this problem, there is another common calibration method used in remote sensing: it employs measurements with a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of a well-known pattern. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis. The resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the gained distortion parameters as fixed input into the calibration algorithms and adjusting only the exterior orientation. The RMS (root mean square) of the remaining image coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods. Nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.

  16. Development of a method for the comparison of final repository sites in different host rock formations; Weiterentwicklung einer Methode zum Vergleich von Endlagerstandorten in unterschiedlichen Wirtsgesteinsformationen

    Energy Technology Data Exchange (ETDEWEB)

    Fischer-Appelt, Klaus; Frieling, Gerd; Kock, Ingo; and others

    2017-10-15

    The report on the development of a method for the comparison of final repository sites in different host rock formations covers the following issues: the influence of the retrievability requirement on the methodology; a study on the possible extension of the methodology to repository sites in crystalline host rocks (boundary conditions in Germany, final disposal concept for crystalline host rocks, generic extension of the VerSi method, identification, classification and relevance weighting of safety functions, relevance of the safety functions for the crystalline host rock formation, review of the methodological changes needed for crystalline rock sites under a low-permeability cover); and a study on the applicability of the methodology to the determination of site regions for surface exploration (phase 1).

  17. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method over the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by the introduction of the geostatistical method to the current state-of-the-art in uranium reserve estimation method and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide

  18. Comparison Re-invented: Adaptation of Universal Methods to African Studies (Conference Report)

    Directory of Open Access Journals (Sweden)

    Franzisca Zanker

    2013-01-01

    Full Text Available Drawing from a combination of specific, empirical research projects with different theoretical backgrounds, a workshop discussed one methodological aspect often somewhat overlooked in African Studies: comparison. Participants addressed several questions, presented overviews of how different disciplines within African Studies approach comparison in their research, and named specific challenges within individual research projects. The questions examined included: Why is explicit comparative research so rare in African Studies? Is comparative research more difficult in the African context than in other regions? Does it benefit our research? Should scholars strive to generalise beyond individual cases? Do studies in our field require an explicit comparative design, or will implicit comparison suffice? Cross-discipline communication should help us to move forward in this methodological debate, though in the end the subject matter and specific research question will lead to the appropriate comparative approach, not the other way round.

  19. A Comparison of Mindray BC-6800, Sysmex XN-2000, and Beckman Coulter LH750 Automated Hematology Analyzers: A Pediatric Study.

    Science.gov (United States)

    Ciepiela, Olga; Kotuła, Iwona; Kierat, Szymon; Sieczkowska, Sandra; Podsiadłowska, Anna; Jenczelewska, Anna; Księżarczyk, Karolina; Demkow, Urszula

    2016-11-01

    Modern automated laboratory hematology analyzers allow the measurement of over 30 different hematological parameters useful in the diagnosis and clinical interpretation of patient symptoms, but they use different methods to measure the same parameters. Thus, a comparison of complete blood counts obtained with the Mindray BC-6800, Sysmex XN-2000 and Beckman Coulter LH750 was performed. Results from the automated analysis of 807 anticoagulated blood samples from children were compared with each other and with 125 manual microscopic differentiations. This comparative study included white blood cell count, red blood cell count and erythrocyte indices, as well as platelet count. The present study showed a poor level of agreement among the three automated hematology analyzers for white blood cell enumeration and differentiation. A very good agreement was found when comparing manual blood smear differentiation with automated granulocyte, monocyte, and lymphocyte differentiation. Red blood cell evaluation showed better agreement among the studied analyzers than white blood cell evaluation. To conclude, the studied instruments did not ensure satisfactory interchangeability and do not allow substitution of one analyzer for another. © 2016 Wiley Periodicals, Inc.

  20. Comparison of seven optical clearing methods for mouse brain

    Science.gov (United States)

    Wan, Peng; Zhu, Jingtan; Yu, Tingting; Zhu, Dan

    2018-02-01

    Recently, a variety of tissue optical clearing techniques have been developed to reduce light scattering, enabling deeper imaging and three-dimensional reconstruction of tissue structures. Combined with optical imaging techniques and diverse labeling methods, these clearing methods have significantly promoted the development of neuroscience. However, most of the protocols were proposed for a specific tissue type. Although some comparison results exist, the clearing methods covered are limited and the evaluation indices lack uniformity, which makes it difficult to select a best-fit protocol for practical applications. Hence, it is necessary to systematically assess and compare these clearing methods. In this work, we evaluated the performance of seven typical clearing methods, including 3DISCO, uDISCO, SeeDB, ScaleS, ClearT2, CUBIC and PACT, on mouse brain samples. First, we compared the clearing capability on both brain slices and whole brains by observing brain transparency. Further, we evaluated fluorescence preservation and the increase in imaging depth. The results showed that 3DISCO, uDISCO and PACT had excellent clearing capability on mouse brains, ScaleS and SeeDB rendered moderate transparency, while ClearT2 was the worst. Among these methods, ScaleS was the best at fluorescence preservation, and PACT achieved the highest increase in imaging depth. This study is expected to provide an important reference for users in choosing the most suitable brain optical clearing method.

  1. A comparison of methods in estimating soil water erosion

    Directory of Open Access Journals (Sweden)

    Marisela Pando Moreno

    2012-02-01

    Full Text Available A comparison between direct field measurements and predictions of soil water erosion using two variants (FAO and R/2 index) of the Revised Universal Soil Loss Equation (RUSLE) was carried out in a microcatchment of 22.32 km2 in Northeastern Mexico. Direct field measurements were based on a geomorphologic classification of the area, while environmental units were defined for applying the equation. Environmental units were later grouped within geomorphologic units to compare results. For the basin as a whole, erosion rates from the FAO index were statistically equal to those measured in the field, while values obtained from the R/2 index were statistically different from the rest and overestimated erosion. However, when comparing among geomorphologic units, erosion appeared overestimated in steep units and underestimated in flatter areas. The most remarkable differences in erosion rates, between the direct and FAO methods, were for those units where gullies have developed. In these cases, erosion was underestimated by the FAO index. Hence, it is suggested that a weighting factor for the presence of gullies should be developed and included in the RUSLE equation.
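
The RUSLE model referenced above multiplies five empirical factors: A = R · K · LS · C · P. A minimal sketch follows; the factor values are purely hypothetical placeholders for one environmental unit, not data from the study.

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Revised Universal Soil Loss Equation: A = R * K * LS * C * P.

    R  -- rainfall-runoff erosivity factor
    K  -- soil erodibility factor
    LS -- slope length and steepness factor
    C  -- cover-management factor
    P  -- support-practice factor
    Returns the average annual soil loss A (t/ha/yr for metric factor units).
    """
    return R * K * LS * C * P

# Hypothetical environmental unit; values are illustrative only.
A = rusle_soil_loss(R=450.0, K=0.030, LS=2.5, C=0.20, P=1.0)
print(f"estimated soil loss: {A:.2f} t/ha/yr")
```

A gully-weighting factor of the kind the authors propose would enter this product as one additional multiplicative term.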

  2. A comparison of methods used to calculate normal background concentrations of potentially toxic elements for urban soil

    Energy Technology Data Exchange (ETDEWEB)

    Rothwell, Katherine A., E-mail: k.rothwell@ncl.ac.uk; Cooke, Martin P., E-mail: martin.cooke@ncl.ac.uk

    2015-11-01

    To meet the requirements of regulation and to provide realistic remedial targets, the background concentration of potentially toxic elements (PTEs) in soils needs to be considered when assessing contaminated land. In England, normal background concentrations (NBCs) have been published for several priority contaminants for a number of spatial domains; however, updated regulatory guidance places the responsibility on Local Authorities to set NBCs for their jurisdiction. Due to the unique geochemical nature of urban areas, Local Authorities need to define NBC values specific to their area, which the national data are unable to provide. This study aims to calculate NBC levels for Gateshead, an urban Metropolitan Borough in the North East of England, using freely available data. The 'median + 2MAD', boxplot upper whisker and English NBC (according to the method adopted by the British Geological Survey) methods were compared for the test PTEs lead, arsenic and cadmium. Due to the lack of systematically collected data for Gateshead in the national soil chemistry database, the use of site investigation (SI) data collected during the planning process was investigated. 12,087 SI soil chemistry data points were incorporated into a database and 27 comparison samples were taken from undisturbed locations across Gateshead. The SI data gave high-resolution coverage of the area, and Mann–Whitney tests confirmed the statistical similarity of the undisturbed comparison samples and the SI data. SI data were successfully used to calculate NBCs for Gateshead, and the median + 2MAD method was selected as most appropriate by the Local Authority according to the precautionary principle, as it consistently provided the most conservative NBC values. The use of this data set provides a freely available, high-resolution source of data that can be used for a range of environmental applications. - Highlights: • The use of site investigation data is proposed for land contamination studies
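
Two of the NBC estimators compared above are simple order statistics and can be sketched directly. This is a minimal illustration with made-up concentration values, not the study's data or the BGS implementation; the upper-whisker rule is assumed to be the usual Tukey fence (largest observation within Q3 + 1.5 × IQR).

```python
import statistics

def nbc_median_2mad(values):
    """'median + 2MAD' estimator: median plus twice the median
    absolute deviation."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med + 2 * mad

def nbc_upper_whisker(values):
    """Boxplot upper-whisker estimator: largest observation within
    the Tukey fence Q3 + 1.5 * IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    fence = q3 + 1.5 * (q3 - q1)
    return max(v for v in values if v <= fence)

# Hypothetical lead concentrations (mg/kg) from undisturbed soil samples.
pb = [10, 12, 14, 16, 18, 20, 40]
print("median + 2MAD :", nbc_median_2mad(pb))
print("upper whisker :", nbc_upper_whisker(pb))
```

Note that which estimator is more conservative depends on the shape of the data; the study found median + 2MAD consistently lowest for its data set.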

  3. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Full Text Available An experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two methods do not use a reference wave in the recording scheme, which relaxes the stability requirements of the setup. A major role in phase reconstruction by such methods is played by a set of spatial intensity distributions recorded as the recording matrix is moved along the optical axis. The obtained data are used sequentially for wavefront reconstruction in an iterative procedure: the wavefront is numerically propagated between the planes, so phase information is retained in every plane, while the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely spaced data registration planes. Two digital holography methods, which use a reference wave in the recording scheme and differ from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is shown that the holographic method is the best of those considered for reconstructing the object's complex amplitude when the object amplitude is reduced.

  4. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

    Full Text Available Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models, combined with the model predictive control strategy, appear to be a promising solution to energy management in microgrids. The controller has the task of managing the purchase and sale of electricity to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed under different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.

  5. Comparison between the KBS-3 method and the deep borehole for final disposal of spent nuclear fuel

    International Nuclear Information System (INIS)

    Grundfelt, Bertil

    2010-09-01

    In this report a comparison is made between disposal of spent nuclear fuel according to the KBS-3 method and disposal in very deep boreholes. The objective has been to make a broad comparison between the two methods and thereby to pinpoint the factors that distinguish them from each other. The ambition has been to make as fair a comparison as possible, even though the quality of the relevant data differs greatly between the methods.

  6. An Algorithmic Comparison of the Hyper-Reduction and the Discrete Empirical Interpolation Method for a Nonlinear Thermal Problem

    Directory of Open Access Journals (Sweden)

    Felix Fritzen

    2018-02-01

    Full Text Available A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails to provide significant gains in computational efficiency for nonlinear problems. Renowned methods for reducing the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intensive finite element simulations.

  7. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Science.gov (United States)

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
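
One classical way to scale paired-choice data of the kind described above is Thurstone's Case V of the Law of Comparative Judgment: convert each pairwise choice proportion to a standard-normal z-score and average. The sketch below is a generic illustration with hypothetical respondent counts, not the paper's program or data.

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Thurstone Case V scale values from a paired-comparison win matrix.

    wins[i][j] = number of respondents who preferred item i over item j.
    Returns one interval-scale value per item: the mean of the
    z-transformed choice proportions against all other items.
    """
    n = len(wins)
    inv = NormalDist().inv_cdf
    scores = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            p = wins[i][j] / total
            p = min(max(p, 0.01), 0.99)  # clamp: z is infinite at p = 0 or 1
            zs.append(inv(p))
        scores.append(sum(zs) / len(zs))
    return scores

# Hypothetical data: 10 respondents choosing between items A, B, C.
wins = [
    [0, 8, 9],  # A preferred over B by 8/10, over C by 9/10
    [2, 0, 7],  # B preferred over A by 2/10, over C by 7/10
    [1, 3, 0],  # C preferred over A by 1/10, over B by 3/10
]
a, b, c = thurstone_case_v(wins)
print(f"A={a:.2f}  B={b:.2f}  C={c:.2f}")  # A ranks highest, C lowest
```

The resulting values are on an interval scale, so only differences between items are meaningful, not the absolute numbers.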

  8. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods, models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are the ML methods. 12 simulation experiments are performed, while each of them uses 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics, that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts
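
The fit/test design described above (simulate, hold out the last observations, forecast multiple steps ahead, score) can be sketched compactly. This is a simplified illustration of the experimental setup, not the study's code: it uses a naive last-value benchmark and a basic AR(1) forecast with an assumed lag-1 regression estimate of the autoregressive coefficient.

```python
import math
import random

def simulate_ar1(n, phi=0.7, seed=42):
    """Simulate a zero-mean AR(1) process x_t = phi * x_{t-1} + e_t."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0, 1))
    return x

def ar1_forecast(fit, horizon):
    """Multi-step AR(1) forecast: estimate phi by lag-1 regression
    through the origin, then iterate x_{T+h} = phi**h * x_T."""
    num = sum(a * b for a, b in zip(fit[:-1], fit[1:]))
    den = sum(a * a for a in fit[:-1])
    phi = num / den
    return [phi ** h * fit[-1] for h in range(1, horizon + 1)]

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Mimic the experimental design above: 310 points, last 10 held out.
series = simulate_ar1(310)
fit, test = series[:300], series[300:]

naive = [fit[-1]] * 10              # benchmark: repeat the last observation
ar1 = ar1_forecast(fit, 10)

print("naive RMSE:", round(rmse(test, naive), 3))
print("AR(1) RMSE:", round(rmse(test, ar1), 3))
```

Repeating this over thousands of simulated series and many methods, and scoring with a battery of metrics rather than RMSE alone, gives the kind of large-scale comparison the abstract describes.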

  9. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado

    2014-01-01

    The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally, 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). ANOVA … In these comparisons the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers with CATA is comparable to that of Napping. The choice of methodology for consumer…

  10. Comparison of the analysis result between two laboratories using different methods

    International Nuclear Information System (INIS)

    Sri Murniasih; Agus Taftazani

    2017-01-01

    This study compares the analysis results for a volcanic ash sample between two laboratories using different analysis methods, aiming to improve testing laboratory quality and to foster cooperation with a testing laboratory from another country. Samples were tested at the Center for Accelerator of Science and Technology (CAST)-NAA laboratory using NAA, and at the University of Texas (UT), USA, using the ICP-MS and ENAA methods. Of the 12 target elements, the CAST-NAA laboratory was able to report analysis data for 11 elements. The comparison shows that the analyses of the K, Mn, Ti and Fe elements from the two laboratories agree very well with each other, as indicated by the RSD values and correlation coefficients of the two laboratories' results. Examination of the differences showed that the analysis results for the Al, Na, K, Fe, V, Mn, Ti, Cr and As elements from the two laboratories are not significantly different. Of the 11 elements reported, only Zn had significantly different values between the two laboratories. (author)

  11. Comparison of biochemical and molecular methods for the identification of bacterial isolates associated with failed loggerhead sea turtle eggs.

    Science.gov (United States)

    Awong-Taylor, J; Craven, K S; Griffiths, L; Bass, C; Muscarella, M

    2008-05-01

    A comparison of biochemical vs molecular methods for the identification of microbial populations associated with failed loggerhead turtle eggs. Two biochemical methods (API and Microgen) and one molecular method (16s rRNA analysis) were compared in terms of cost, identification rate, corroboration of data with other methods, ease of use, resources and software. The molecular method was costly and identified only 66% of the isolates tested, compared with 74% for API. A 74% discrepancy in identifications occurred between API and 16s rRNA analysis. The two biochemical methods were comparable in cost, but Microgen was easier to use and yielded the lowest discrepancy among identifications (29%) when compared with both API 20 enteric (API 20E) and API 20 nonenteric (API 20NE) combined. A comparison of API 20E and API 20NE indicated an 83% discrepancy between the two methods. The Microgen identification system therefore appears to be better suited than API or 16s rRNA analysis for the identification of environmental isolates associated with failed loggerhead eggs. Most identification methods are not intended for use with environmental isolates, and a comparison of identification systems provides better options for identifying environmental bacteria in ecological studies.

  12. Results of an interlaboratory comparison of analytical methods for contaminants of emerging concern in water.

    Science.gov (United States)

    Vanderford, Brett J; Drewes, Jörg E; Eaton, Andrew; Guo, Yingbo C; Haghani, Ali; Hoppe-Jones, Christiane; Schluesener, Michael P; Snyder, Shane A; Ternes, Thomas; Wood, Curtis J

    2014-01-07

    An evaluation of existing analytical methods used to measure contaminants of emerging concern (CECs) was performed through an interlaboratory comparison involving 25 research and commercial laboratories. In total, 52 methods were used in the single-blind study to determine method accuracy and comparability for 22 target compounds, including pharmaceuticals, personal care products, and steroid hormones, all at ng/L levels in surface and drinking water. Method biases varied widely across compounds; caffeine, NP, OP, and triclosan had false positive rates >15%. In addition, some methods reported false positives for 17β-estradiol and 17α-ethynylestradiol in unspiked drinking water and deionized water, respectively, at levels higher than published predicted no-effect concentrations for these compounds in the environment. False negative rates were also generally low; potential sources of error included contamination, misinterpretation of background interferences, and/or inappropriate setting of detection/quantification levels for analysis at low ng/L levels. The results of both comparisons were collectively assessed to identify parameters that resulted in the best overall method performance. Liquid chromatography-tandem mass spectrometry coupled with the calibration technique of isotope dilution was able to accurately quantify most compounds with an average bias of <10% for both matrixes. These findings suggest that this method of analysis is suitable at environmentally relevant levels for most of the compounds studied. This work underscores the need for robust, standardized analytical methods for CECs to improve data quality, increase comparability between studies, and help reduce false positive and false negative rates.

  13. Environmental impact from different modes of transport. Method of comparison

    International Nuclear Information System (INIS)

    2002-03-01

    A prerequisite of long-term sustainable development is that activities of various kinds are adjusted to what humans and the natural world can tolerate. Transport is an activity that affects humans and the environment to a very great extent, and in this project several actors within the transport sector have together laid the foundation for a comparative method for assessing the environmental impact at the different stages along the transport chain. The method analyses the effects of different transport concepts on the climate, noise levels, human health, acidification, land use and ozone depletion. Within the framework of the method, a calculation model has been created in Excel which acts as a basis for the comparisons. The user can choose to download the model from the Swedish EPA's on-line bookstore or order it on a floppy disk. Neither the method nor the model is as yet fully developed, but our hope is that they can still be used in their present form as a basis and inspire further efforts and research in the field. In the report, we describe most of these shortcomings, the problems associated with the work and the existing development potential. This publication should be seen as the first stage in the development of a method of comparison between different modes of transport in non-monetary terms, where there remains a considerable need for further development and amplification.

  14. The health benefits of yoga and exercise: a review of comparison studies.

    Science.gov (United States)

    Ross, Alyson; Thomas, Sue

    2010-01-01

    Exercise is considered an acceptable method for improving and maintaining physical and emotional health. A growing body of evidence supports the belief that yoga benefits physical and mental health via down-regulation of the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic nervous system (SNS). The purpose of this article is to provide a scholarly review of the literature on research studies comparing the effects of yoga and exercise on a variety of health outcomes and health conditions. Using PubMed® and the key word "yoga," a comprehensive search of the research literature from core scientific and nursing journals yielded 81 studies that met the inclusion criteria. These studies were classified as uncontrolled (n = 30), wait-list controlled (n = 16), or comparison (n = 35). The most common comparison intervention (n = 10) involved exercise. These studies were included in this review. In the studies reviewed, yoga interventions appeared to be equal or superior to exercise in nearly every outcome measured except those involving physical fitness. The studies comparing the effects of yoga and exercise seem to indicate that, in both healthy and diseased populations, yoga may be as effective as or better than exercise at improving a variety of health-related outcome measures. Future clinical trials are needed to examine the distinctions between exercise and yoga, particularly how the two modalities may differ in their effects on the SNS/HPA axis. Additional studies using rigorous methodologies are needed to examine the health benefits of the various types of yoga.

  15. WTA estimates using the method of paired comparison: tests of robustness

    Science.gov (United States)

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to the sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  16. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    Science.gov (United States)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes raise the question of continuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method calculates the dose without tissue density correction using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients. The Wilcoxon signed-rank test of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods.
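The statistical pipeline described in this record (normality and variance checks, then non-parametric tests) can be sketched in Python with SciPy. The dose values below are simulated, not the study's data; the systematic ~5 MU reduction under density correction is an assumption made only to exercise the tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated monitor-unit doses for 62 fields under three algorithms
# (PBC reference, Modified Batho, ETAR); values are illustrative only.
pbc = rng.normal(200.0, 20.0, 62)
mb = pbc - rng.normal(5.0, 2.0, 62)    # density correction lowers the dose
etar = pbc - rng.normal(6.0, 2.0, 62)

# Step 1: Shapiro-Wilk normality test and Levene's test for homogeneity
# of variance between the groups.
shapiro_p = stats.shapiro(pbc)[1]
levene_p = stats.levene(pbc, mb, etar)[1]

# Step 2: Friedman's test across the three related dose sets.
friedman_p = stats.friedmanchisquare(pbc, mb, etar)[1]

# Step 3: paired post-hoc comparison and rank correlations.
wilcoxon_p = stats.wilcoxon(pbc, mb)[1]
rho = stats.spearmanr(pbc, mb)[0]
tau = stats.kendalltau(pbc, mb)[0]

print(f"Friedman p={friedman_p:.2e}, Wilcoxon p={wilcoxon_p:.2e}, "
      f"Spearman rho={rho:.3f}, Kendall tau={tau:.3f}")
```

With a consistent shift between methods, Friedman's and Wilcoxon's tests come out significant while the rank correlations stay close to 1, mirroring the pattern the abstract reports.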

  17. A Comparison Study on the Integrated Risk Estimation for Various Power Systems

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Ha, J. J.; Kim, S. H.; Jeong, J. T.; Min, K. R.; Kim, K. Y.

    2007-06-01

    The objective of this study is to establish a system for the comparative analysis of the environmental impacts, risks, health effects, and social acceptance of various electricity generation systems, together with a computational framework and the necessary databases. In this study, the second phase of the nuclear research and development program (2002-2004), the methodologies for the comparative analysis of the environmental impacts, risks, and health effects of various electricity generation systems were investigated and applied to reference power plants. The life cycle assessment (LCA) methodology was adopted as a comparative analysis tool for the environmental impacts and applied to fossil-fueled and nuclear power plants. The scope of the analysis covers the construction, operation (fuel cycle), and demolition stages of each power generation system. In the risk analysis part, empirical and analytical methods were adopted and applied to fossil-fueled and nuclear power plants. In the empirical risk assessment, we collected records of worldwide energy-related accidents with fatalities over the last 30 years. The risks for nuclear power plants, which have potential releases of radioactive materials, were estimated in a probabilistic way (PSA) by considering the occurrence of severe accidents, and compared with the risks of other electricity generation systems. The health effects (estimated as external cost) resulting from the operation of nuclear, coal, and hydro power systems were estimated and compared using the program developed by the IAEA. For a comprehensive comparison of the various power systems, the analytic hierarchy process (AHP) method is introduced to aggregate the diverse information under conflicting decision criteria. Social aspect is treated by a web

  18. A Comparison of Three Methods for Measuring Distortion in Optical Windows

    Science.gov (United States)

    Youngquist, Robert C.; Nurge, Mark A.; Skow, Miles

    2015-01-01

    It's important that imagery seen through large-area windows, such as those used on space vehicles, not be substantially distorted. Many approaches are described in the literature for measuring the distortion of an optical window, but most suffer from either poor resolution or processing difficulties. In this paper a new definition of distortion is presented, allowing accurate measurement using an optical interferometer. This new definition is shown to be equivalent to the definitions provided by the military and the standards organizations. In order to determine the advantages and disadvantages of this new approach, the distortion of an acrylic window is measured using three different methods: image comparison, moiré interferometry, and phase-shifting interferometry.

  19. Critical comparison between equation of motion-Green's function methods and configuration interaction methods: analysis of methods and applications

    International Nuclear Information System (INIS)

    Freed, K.F.; Herman, M.F.; Yeager, D.L.

    1980-01-01

    A description is provided of the common conceptual origins of many-body equations of motion and Green's function methods in Liouville operator formulations of the quantum mechanics of atomic and molecular electronic structure. Numerical evidence is provided to show the inadequacies of the traditional strictly perturbative approaches to these methods. Nonperturbative methods are introduced by analogy with techniques developed for handling large configuration interaction calculations and by evaluating individual matrix elements to higher accuracy. The important role of higher excitations is exhibited by the numerical calculations, and explicit comparisons are made between converged equations of motion and configuration interaction calculations for systems where a fundamental theorem requires the equality of the energy differences produced by these different approaches. (Auth.)

  20. Comparison between three different LCIA methods for aquatic ecotoxicity and a product Environmental Risk Assessment – Insights from a Detergent Case Study within OMNIITOX

    DEFF Research Database (Denmark)

    Pant, Rana; Van Hoof, Geert; Feijtel, Tom

    2004-01-01

    ...) with results from an Environmental Risk Assessment (ERA). Material and Methods. The LCIA has been conducted with EDIP97 (chronic aquatic ecotoxicity) [1], USES-LCA (freshwater and marine water aquatic ecotoxicity, sometimes referred to as CML2001) [2, 3] and IMPACT 2002 (covering freshwater aquatic ecotoxicity... ...set of physico-chemical and toxicological effect data to enable a better comparison of the methodological differences. For the same reason, the system boundaries were kept the same in all cases, focusing on emissions into water at the disposal stage. Results and Discussion. Significant differences... ...ecotoxicity is not satisfactory, unless explicit reasons for the differences are identifiable. This can hamper practical decision support, as LCA practitioners usually will not be in a position to choose the 'right' LCIA method for their specific case. This puts a challenge to the entire OMNIITOX project...

  1. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    Science.gov (United States)

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Blood cultures have therefore become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. To compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis, two hundred nonduplicate blood cultures from patients clinically diagnosed with sepsis were analyzed using the two methods concurrently: the conventional blood culture method using trypticase soy broth, and the lysis centrifugation method using saponin with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci; the lysis centrifugation method was comparable with respect to Gram-negative bacilli. In this study, the sensitivity of the lysis centrifugation method in comparison to the conventional blood culture method was 49.75%, the specificity was 98.21% and the diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, a difference that was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was significantly shorter (P value 0.000) than by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
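The operating characteristics quoted in this record follow from a standard 2×2 table with the conventional method as the reference standard. A minimal sketch of the arithmetic, with hypothetical counts (not the study's actual tabulation):

```python
# Hypothetical 2x2 counts, with the conventional blood culture method
# taken as the reference standard (illustrative values only).
tp, fn = 17, 18    # reference-positive: detected / missed by lysis method
tn, fp = 162, 3    # reference-negative: concordant / lysis-positive only

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"accuracy={accuracy:.1%}")
```

Sensitivity is the share of reference positives recovered by the test method, specificity the share of reference negatives it leaves negative, and diagnostic accuracy the overall concordance.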

  2. Multiple comparisons in drug efficacy studies: scientific or marketing principles?

    Science.gov (United States)

    Leo, Jonathan

    2004-01-01

    When researchers design an experiment to compare a given medication to another medication, a behavioral therapy, or a placebo, the experiment often involves numerous comparisons. For instance, there may be several different evaluation methods, raters, and time points. Although scientifically justified, such comparisons can be abused in the interests of drug marketing. This article provides two recent examples of such questionable practices. The first involves the arthritis drug celecoxib (Celebrex), where the study lasted 12 months but the authors presented only 6 months of data. The second involves the NIMH Multimodal Treatment Study (MTA), which evaluated the efficacy of stimulant medication for attention-deficit hyperactivity disorder and in which ratings made by several groups are reported in contradictory fashion. The MTA authors have not clarified the confusion, at least in print, suggesting that the actual findings of the study may have played little role in the authors' reported conclusions.

  3. Comparison of potential method in analytic hierarchy process for multi-attribute of catering service companies

    Science.gov (United States)

    Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah

    2017-08-01

    Analytic Hierarchy Process (AHP) is a method used in structuring, measuring and synthesizing criteria, in particular for ranking multiple criteria in decision-making problems. The Potential Method, on the other hand, is a ranking procedure that utilizes a preference graph G = (V, A). Two nodes are adjacent if they are compared in a pairwise comparison, with the assigned arc oriented towards the more preferred node. In this paper, the Potential Method is used to solve a catering service selection problem, and its result is compared with that of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
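As a point of reference for the pairwise-comparison machinery, here is the standard AHP principal-eigenvector ranking (not the Potential Method itself) applied to a hypothetical 3×3 Saaty-scale comparison matrix for three catering companies; the matrix entries are invented for illustration:

```python
import numpy as np

# Illustrative pairwise comparison matrix: a[i, j] states how strongly
# alternative i is preferred to alternative j on the 1-9 Saaty scale.
a = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

# Standard AHP priorities: the principal eigenvector of the matrix,
# normalised to sum to one; the ranking sorts alternatives by weight.
vals, vecs = np.linalg.eig(a)
w = np.real(vecs[:, np.argmax(np.real(vals))])
weights = w / w.sum()
ranking = np.argsort(weights)[::-1]
print(weights, ranking)
```

For a consistent matrix like this one, the eigenvector ranking reproduces the ordering implied by the pairwise judgments (alternative 0 first).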

  4. Comparison of direct and indirect methods of estimating health state utilities for resource allocation: review and empirical analysis.

    Science.gov (United States)

    Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard

    2009-07-22

    Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and an indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and the Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe the average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods they were 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2) and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.
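The paired sign test this review uses can be sketched as follows; the utility pairs are invented for illustration, and the exact binomial p-value comes from SciPy's `binomtest`:

```python
from scipy import stats

# Hypothetical paired utilities for the same health states: direct
# elicitation (standard gamble) vs an indirect EQ-5D-style conversion.
direct = [0.81, 0.75, 0.90, 0.68, 0.84, 0.77, 0.88, 0.70]
indirect = [0.59, 0.62, 0.81, 0.55, 0.70, 0.64, 0.79, 0.66]

# Sign test: under H0 the direct estimate is equally likely to fall
# above or below the indirect one, so the count of positive differences
# is Binomial(n, 0.5).
n_pos = sum(d > i for d, i in zip(direct, indirect))
p = stats.binomtest(n_pos, n=len(direct), p=0.5).pvalue
print(n_pos, p)
```

With all eight invented pairs favouring the direct method, the two-sided p-value is 2 × 0.5⁸ ≈ 0.008, the kind of systematic direct-above-indirect pattern the review describes.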

  5. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study.

    Science.gov (United States)

    Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-08-16

    To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
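The adjusted indirect comparison underlying such analyses (the Bucher method) derives an A-versus-B estimate from A-versus-C and B-versus-C trials and measures inconsistency as the difference in log odds ratios. A minimal sketch with made-up numbers:

```python
import math

# Hypothetical log odds ratios (and variances) from two trial sets with
# a common comparator C, plus a direct A vs B estimate; all values are
# invented for illustration.
log_or_ac, var_ac = 0.40, 0.04
log_or_bc, var_bc = 0.10, 0.05
log_or_ab_direct, var_direct = 0.05, 0.06

# Bucher adjusted indirect comparison: A vs B through the common
# comparator C; the variances add.
log_or_ab_indirect = log_or_ac - log_or_bc
var_indirect = var_ac + var_bc

# Inconsistency: difference between the direct and indirect log odds
# ratios, standardised by its combined standard error.
diff = log_or_ab_direct - log_or_ab_indirect
z = diff / math.sqrt(var_direct + var_indirect)
print(log_or_ab_indirect, diff, z)
```

A |z| well above ~1.96 would flag statistically significant inconsistency of the kind the study counted across its 112 trial networks.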

  6. A comparison of different methods for in-situ determination of heat losses from district heating pipes

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, Benny [Technical Univ. of Denmark, Dept. of Energy Engineering (Denmark)

    1996-11-01

    A comparison of different methods for in-situ determination of heat losses has been carried out on a 273 mm transmission line in Copenhagen. Instrumentation includes temperature sensors, heat flux meters and an infrared camera. The methods differ with regard to time consumption and costs of applying the specific method, demand on accuracy of temperature measurements, sensitivity to computational parameters, e.g. the thermal conductivity of the soil, response to transients in water temperature and the ground, and steady state assumptions in the model used in the interpretation of the measurements. Several of the applied methods work well. (au)

  7. Real-world comparison of two molecular methods for detection of respiratory viruses

    Directory of Open Access Journals (Sweden)

    Miller E Kathryn

    2011-06-01

    Full Text Available Abstract Background Molecular polymerase chain reaction (PCR) based assays are increasingly used to diagnose viral respiratory infections and conduct epidemiology studies. Molecular assays have generally been evaluated by comparing them to conventional direct fluorescent antibody (DFA) or viral culture techniques, with few published direct comparisons between molecular methods or between institutions. We sought to perform a real-world comparison of two molecular respiratory viral diagnostic methods between two experienced respiratory virus research laboratories. Methods We tested nasal and throat swab specimens obtained from 225 infants with respiratory illness for 11 common respiratory viruses using both a multiplex assay (Respiratory MultiCode-PLx Assay [RMA]) and individual real-time RT-PCR (RT-rtPCR). Results Both assays detected viruses in more than 70% of specimens, but there was discordance. The RMA assay detected significantly more human metapneumovirus (HMPV) and respiratory syncytial virus (RSV), while RT-rtPCR detected significantly more influenza A. We speculated that primer differences accounted for these discrepancies and redesigned the primers and probes for influenza A in the RMA assay, and for HMPV and RSV in the RT-rtPCR assay. The tests were then repeated and again compared. The new primers led to improved detection of HMPV and RSV by the RT-rtPCR assay, but the RMA assay remained similar in terms of influenza detection. Conclusions Given the absence of a gold standard, clinical and research laboratories should regularly correlate the results of molecular assays with other PCR based assays, other laboratories, and with standard virologic methods to ensure consistency and accuracy.

  8. A comparative study of three different gene expression analysis methods.

    Science.gov (United States)

    Choe, Jae Young; Han, Hyung Soo; Lee, Seon Duk; Lee, Hanna; Lee, Dong Eun; Ahn, Jae Yun; Ryoo, Hyun Wook; Seo, Kang Suk; Kim, Jong Kun

    2017-12-04

    TNF-α regulates immune cells and acts as an endogenous pyrogen. Reverse transcription polymerase chain reaction (RT-PCR) is one of the most commonly used methods for gene expression analysis. Among the alternatives to PCR, loop-mediated isothermal amplification (LAMP) shows good potential in terms of specificity and sensitivity. However, few studies have compared RT-PCR and LAMP for human gene expression analysis. Therefore, in the present study, we compared one-step RT-PCR, two-step RT-LAMP and one-step RT-LAMP for human gene expression analysis, using the human TNF-α gene as a biomarker from peripheral blood cells. Total RNA from three selected febrile patients was subjected to the three different methods of gene expression analysis. In this comparison, the detection limits of one-step RT-PCR and one-step RT-LAMP were the same, while that of two-step RT-LAMP was inferior. One-step RT-LAMP takes less time, and its result is easy to read. One-step RT-LAMP is a potentially useful and complementary tool that is fast and reasonably sensitive. In addition, one-step RT-LAMP could be useful in environments lacking specialized equipment or expertise.

  9. Liquid metal reactor/Pressurized water reactor plant comparison study

    International Nuclear Information System (INIS)

    Halverson, T.G.

    1986-01-01

    The selection between alternative electric power generating technologies is mainly based on their overall economics. Capital costs account for over 60% of the total busbar cost of nuclear plants. Estimates reported in the literature have shown capital cost ratios of LMRs to PWRs ranging from less than 1 to as high as 1.8. To reduce this range of uncertainty, the study selected a method for cataloging plant hardware and then performed comparisons using engineering judgment as to the anticipated and reasonable cost differences. The paper summarizes the resulting one-on-one comparisons of components, systems, and buildings and identifies the LMR-PWR similarities and differences which influence costs. The study leads to the conclusion that the capital cost of the most up-to-date large LMR design would be very close to that of the latest PWRs

  10. A method for comparison of animal and human alveolar dose and toxic effect of inhaled ozone

    International Nuclear Information System (INIS)

    Hatch, G.E.; Koren, H.; Aissa, M.

    1989-01-01

    Present models for predicting the pulmonary toxicity of O3 in humans from the toxic effects observed in animals rely on dosimetric measurements of O3 mass balance and species comparisons of the mechanisms that protect tissue against O3. The goal of the study described was to identify a method to directly compare O3 dose and effect in animals and humans using bronchoalveolar lavage fluid markers. The feasibility of estimating the O3 dose to the alveoli of animals and humans was demonstrated through assay of reaction products of 18O-labeled O3 in the lung surfactant and macrophage pellets of rabbits. The feasibility of using lung lavage fluid protein measurements to quantify the O3 toxic response in humans was demonstrated by the finding of significantly increased lung lavage protein in 10 subjects exposed to 0.4 ppm O3 for 2 h with intermittent periods of heavy exercise. The validity of using the lavage protein marker to quantify the response in animals has already been established. The positive results obtained in both the 18O3 and the lavage protein studies reported here suggest that it should be possible to obtain a direct comparison of both the alveolar dose and the toxic effect of O3 in the alveoli of animals and humans.

  11. National comparison on volume sample activity measurement methods

    International Nuclear Information System (INIS)

    Sahagia, M.; Grigorescu, E.L.; Popescu, C.; Razdolescu, C.

    1992-01-01

    A national comparison on volume sample activity measurement methods may be regarded as a step toward accomplishing the traceability of environmental and food chain activity measurements to national standards. For this purpose, the Radionuclide Metrology Laboratory has distributed 137Cs and 134Cs water-equivalent solid standard sources to 24 laboratories having responsibilities in this matter. Every laboratory has to measure the activity of the received source(s) using its own standards, equipment and methods and report the obtained results to the organizer. The 'measured activities' will be compared with the 'true activities'. A final report will be issued, which plans to evaluate the national level of precision of such measurements and give some suggestions for improvement. (Author)

  12. A novel model approach for esophageal burns in rats: A comparison of three methods.

    Science.gov (United States)

    Kalkan, Yildiray; Tumkaya, Levent; Akdogan, Remzi Adnan; Yucel, Ahmet Fikret; Tomak, Yakup; Sehitoglu, İbrahim; Pergel, Ahmet; Kurt, Aysel

    2015-07-01

    Corrosive esophageal injury causes serious clinical problems. We aimed to create a new experimental esophageal burn model using a single catheter, without a surgical procedure. The study was conducted with two groups of 12 male rats that fasted for 12 h before the application. A modified Foley balloon catheter was inserted into the esophageal lumen. The control group was given 0.9% sodium chloride, while the experimental group was given 37.5% sodium hydroxide through the other part of the catheter. After 60 s, the esophagus was washed with distilled water. The rats were killed and examined histopathologically after 28 days. In contrast to the histopathological changes seen in the study group, the control group showed no pathological changes. Basal cell degeneration, dermal edema, and a slight increase in the keratin layer and in the collagen density of the submucosa due to stenosis were all observed in the group subjected to esophageal corrosion. A new burn model can thus, we believe, be created without invasive laparoscopic surgery and general anesthesia. The burn in our experiment was formed in both the distal and proximal esophagus, as in other models; it can also be formed optionally in the entire esophagus. © The Author(s) 2013.

  13. Comparison of two percutaneous tracheostomy techniques, guide wire dilating forceps and Ciaglia Blue Rhino: a sequential cohort study.

    NARCIS (Netherlands)

    Fikkers, B.G.; Staatsen, M; Lardenoije, S.G.; Hoogen, F.J.A. van den; Hoeven, J.G. van der

    2004-01-01

    INTRODUCTION: To evaluate and compare the peri-operative and postoperative complications of the two most frequently used percutaneous tracheostomy techniques, namely guide wire dilating forceps (GWDF) and Ciaglia Blue Rhino (CBR). METHODS: A sequential cohort study with comparison of short-term and

  14. Regional cerebral blood flow measurements by a noninvasive microsphere method using 123I-IMP. Comparison with the modified fractional uptake method and the continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Nakano, Seigo; Matsuda, Hiroshi; Tanizaki, Hiroshi; Ogawa, Masafumi; Miyazaki, Yoshiharu; Yonekura, Yoshiharu

    1998-01-01

    A noninvasive microsphere method using N-isopropyl-p-[123I]iodoamphetamine (123I-IMP), developed by Yonekura et al., was performed in 10 patients with neurological diseases to quantify regional cerebral blood flow (rCBF). Regional CBF values obtained by this method were compared with rCBF values simultaneously estimated from both the modified fractional uptake (FU) method using cardiac output, developed by Miyazaki et al., and the conventional method with continuous arterial blood sampling. For the comparison, we designated the factor which converts raw SPECT voxel counts to rCBF values as the CBF factor. A highly significant correlation (r=0.962, p<0.001) was obtained in the CBF factors between the present method and the continuous arterial blood sampling method. The CBF factors by the present method were only 2.7% higher on average than those by the continuous arterial blood sampling method. There were significant correlations (r=0.811 and r=0.798, p<0.001) in the CBF factors between the modified FU method (threshold for estimating total brain SPECT counts: 10% and 30%, respectively) and the continuous arterial blood sampling method. However, the CBF factors of the modified FU method were 31.4% and 62.3% higher on average (threshold: 10% and 30%, respectively) than those by the continuous arterial blood sampling method. In conclusion, this newly developed method for rCBF measurement was considered to be useful for routine clinical studies without any blood sampling. (author)

  15. Comparison of the direct enzyme assay method with the membrane ...

    African Journals Online (AJOL)

    Comparison of the direct enzyme assay method with the membrane filtration technique in the quantification and monitoring of microbial indicator organisms – seasonal variations in the activities of coliforms and E. coli, temperature and pH.

  16. Comparison of breeding methods for forage yield in red clover

    Directory of Open Access Journals (Sweden)

    Libor Jalůvka

    2009-01-01

    Full Text Available Three methods of red clover (Trifolium pratense L.) breeding for forage yield were compared over two harvest years at locations in Bredelokke (Denmark), Hladké Životice (Czech Republic) and Les Alleuds (France). Three types of 46 candivars were compared, developed by (A) recurrent selection in subsequent generations (37 candivars, divided into early and late groups), (B) polycross progenies (4 candivars) and (C) geno-phenotypic selection (5 candivars). The trials were sown in 2005 and cut three times in 2006 and 2007; their evaluation is based primarily on the total yield of dry matter. The candivars developed by polycross and geno-phenotypic selection gave significantly higher yields than the candivars from recurrent selection. However, the candivars developed by methods B and C did not differ significantly. The candivars developed by these progressive methods were suitable for the higher-yielding and drier environment in Hladké Životice (which had the highest yield level even though average annual precipitation was lower by 73 and 113 mm in comparison to the other locations, respectively); there, the average yield was higher by 19 and 13% for methods B and C in comparison to method A. A highly significant interaction of the candivars with locations was found. It can be concluded that varieties specifically aimed at different locations should be bred by methods B and C; the parental entries should also be selected there.

  17. A COMPARISON OF METHODS FOR DETERMINING THE MOLECULAR CONTENT OF MODEL GALAXIES

    International Nuclear Information System (INIS)

    Krumholz, Mark R.; Gnedin, Nickolay Y.

    2011-01-01

    Recent observations indicate that star formation occurs only in the molecular phase of a galaxy's interstellar medium. A realistic treatment of star formation in simulations and analytic models of galaxies therefore requires that one determine where the transition from the atomic to molecular gas occurs. In this paper, we compare two methods for making this determination in cosmological simulations where the internal structures of molecular clouds are unresolved: a complex time-dependent chemistry network coupled to a radiative transfer calculation of the dissociating ultraviolet (UV) radiation field, and a simple time-independent analytic approximation. We show that these two methods produce excellent agreement at all metallicities ≳10⁻² of the Milky Way value across a very wide range of UV fields. At lower metallicities the agreement is worse, likely because time-dependent effects become important; however, there are no observational calibrations of molecular gas content at such low metallicities, so it is unclear if either method is accurate. The comparison suggests that, in many but not all applications, the analytic approximation provides a viable and nearly cost-free alternative to full time-dependent chemistry and radiative transfer.

  18. Comparison of three retail data communication and transmission methods

    Directory of Open Access Journals (Sweden)

    MA Yue

    2016-04-01

    Full Text Available With the rapid development of retail trade, the type and complexity of data keep increasing, and individual data files vary greatly in size. How to achieve accurate, real-time and efficient data transmission at a fixed cost is an important problem. Addressing the effective transmission of business data files, this article analyses and compares three existing data transmission methods and, weighing requirements such as functionality in an enterprise data communication system, concludes which method best meets the enterprise's daily business development needs while offering good extensibility.

  19. Comparison of three analytical methods for the determination of trace elements in whole blood

    International Nuclear Information System (INIS)

    Ward, N.I.; Stephens, R.; Ryan, D.E.

    1979-01-01

    Three different analytical techniques were compared in a study of the role of trace elements in multiple sclerosis. Data for eight elements (Cd, Co, Cr, Cu, Mg, Mn, Pb, Zn) from neutron activation, flame atomic absorption and electrothermal atomic absorption methods were compared and evaluated statistically. No difference (probability less than 0.001) was observed in the elemental values obtained. Comparison of data between suitably different analytical methods gives increased confidence in the results obtained and is of particular value when standard reference materials are not available. (Auth.)

  20. A comparative study on the graft copolymerization of acrylic acid onto rayon fibre by a ceric ion redox system and a γ-radiation method.

    Science.gov (United States)

    Kaur, Inderjeet; Kumar, Raj; Sharma, Neelam

    2010-10-13

    Functionalization of rayon fibre has been carried out by grafting acrylic acid (AAc) both by a chemical method using a Ce⁴⁺-HNO₃ redox initiator and by a mutual irradiation (γ-rays) method. The reaction conditions affecting the grafting percentage have been optimized for both methods, and the results are compared. The maximum percentage of grafting (50%) by the chemical method was obtained utilizing 18.24 × 10⁻³ mol/L of ceric ammonium nitrate (CAN), 39.68 × 10⁻² mol/L of HNO₃, and 104.08 × 10⁻² mol/L of AAc in 20 mL of water at 45°C for 120 min. For the radiation method, the maximum grafting percentage (60%) was higher, and the product was obtained under milder reaction conditions using a lower concentration of AAc (69.38 × 10⁻² mol/L) in 10 mL of water at an optimum total dose of 0.932 kGy. Swelling studies showed higher swelling for the grafted rayon fibre in water (854.54%) as compared to the pristine fibre (407%), while dye uptake studies revealed poor uptake of the dye (crystal violet) by the grafted fibre in comparison with the pristine fibre. The graft copolymers were characterized by IR, TGA, and scanning electron micrographic methods. Grafted fibre prepared by the radiation-induced method showed better thermal behaviour. Comparison of the two methods revealed that the radiation method of grafting acrylic acid onto rayon fibre is the better method of grafting. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Comparison of continuum and atomistic methods for the analysis of InAs/GaAs quantum dots

    DEFF Research Database (Denmark)

    Barettin, D.; Pecchia, A.; Penazzi, G.

    2011-01-01

    We present a comparison of continuum k · p and atomistic empirical Tight Binding methods for the analysis of the optoelectronic properties of InAs/GaAs quantum dots.

  2. Comparison of two dietary assessment methods by food consumption: results of the German National Nutrition Survey II.

    Science.gov (United States)

    Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid

    2015-04-01

    To further characterise the performance of the diet history method and the 24-h recalls method, both in an updated version, a comparison was conducted. The National Nutrition Survey II, representative for Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80. Besides calculating mean differences, statistical agreement measures encompassed Spearman and intraclass correlation coefficients, ranking of participants in quartiles, and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was assessed higher with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small effect, while there was basically no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and weakest for vegetables. The requirement of the diet history method to remember consumption over the past 4 weeks may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recalls method applied in the presented study offers advantages in approximating food consumption as compared to the diet history method.

  3. Comparison of methods and instruments for 222Rn/220Rn progeny measurement

    International Nuclear Information System (INIS)

    Liu Yanyang; Shang Bing; Wu Yunyun; Zhou Qingzhi

    2012-01-01

    In this paper, comparisons were made among three methods of measurement (grab measurement, continuous measurement and integrating measurement) and among different instruments in a radon/thoron mixed chamber. Taking the optimized five-segment method as the comparison criterion, for the equilibrium-equivalent concentration of ²²²Rn the measured results of BWLM and the 24 h integrating detectors are 31% and 29% higher than the criterion, while the results of WLx are 20% lower; for ²²⁰Rn progeny, the results of Fiji-142, Kf-602D, BWLM and the 24 h integrating detector are 86%, 18%, 28% and 36% higher than the criterion, respectively, except that of WLx, which is 5% lower. For the differences shown, further research is needed. (authors)

  4. The choice of statistical methods for comparisons of dosimetric data in radiotherapy

    International Nuclear Information System (INIS)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-01-01

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method calculates the dose without tissue density correction using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p <0.001). The density correction methods yielded lower doses than PBC, on average by (−5 ± 4.4 SD) for MB and (−4.7 ± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density corrections.
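    The statistical pipeline described in this record (normality and variance checks, then non-parametric paired tests) maps directly onto `scipy.stats`. A minimal sketch on simulated monitor-unit doses follows; all numbers are illustrative, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated doses for 62 fields under three calculation methods
# (illustrative values only, not the study's measurements):
pbc = rng.normal(100.0, 10.0, 62)        # reference, no density correction
mb = pbc - rng.normal(5.0, 4.4, 62)      # 1D Modified Batho correction
etar = pbc - rng.normal(4.7, 5.0, 62)    # 3D ETAR correction

# Normality of paired differences and homogeneity of variance:
_, p_norm = stats.shapiro(mb - pbc)
_, p_var = stats.levene(pbc, mb, etar)

# Non-parametric comparison of the three paired methods:
_, p_friedman = stats.friedmanchisquare(pbc, mb, etar)

# Post-hoc paired comparison and rank correlation:
_, p_wilcoxon = stats.wilcoxon(pbc, mb)
rho, _ = stats.spearmanr(pbc, mb)

print(f"Friedman p = {p_friedman:.3g}, Wilcoxon p = {p_wilcoxon:.3g}, rho = {rho:.2f}")
```

With a systematic shift of this size, Friedman's test flags the calculation method and the rank correlation between methods stays strong, mirroring the structure of the analysis reported above.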

  5. Comparison Among Methods of Retinopathy Assessment (CAMRA) Study: Smartphone, Nonmydriatic, and Mydriatic Photography.

    Science.gov (United States)

    Ryan, Martha E; Rajalakshmi, Ramachandran; Prathiba, Vijayaraghavan; Anjana, Ranjit Mohan; Ranjani, Harish; Narayan, K M Venkat; Olsen, Timothy W; Mohan, Viswanathan; Ward, Laura A; Lynn, Michael J; Hendrick, Andrew M

    2015-10-01

    We compared smartphone fundus photography, nonmydriatic fundus photography, and 7-field mydriatic fundus photography for their abilities to detect and grade diabetic retinopathy (DR). This was a prospective, comparative study of 3 photography modalities. Diabetic patients (n = 300) were recruited at the ophthalmology clinic of a tertiary diabetes care center in Chennai, India. Patients underwent photography by all 3 modalities, and photographs were evaluated by 2 retina specialists. The sensitivity and specificity in the detection of DR for both smartphone and nonmydriatic photography were determined by comparison with the standard method, 7-field mydriatic fundus photography. The sensitivity and specificity of smartphone fundus photography, compared with 7-field mydriatic fundus photography, for the detection of any DR were 50% (95% CI, 43-56) and 94% (95% CI, 92-97), respectively, and of nonmydriatic fundus photography were 81% (95% CI, 75-86) and 94% (95% CI, 92-96), respectively. The sensitivity and specificity of smartphone fundus photography for the detection of vision-threatening DR were 59% (95% CI, 46-72) and 100% (95% CI, 99-100), respectively, and of nonmydriatic fundus photography were 54% (95% CI, 40-67) and 99% (95% CI, 98-100), respectively. Smartphone and nonmydriatic fundus photography are each able to detect DR and sight-threatening disease. However, the nonmydriatic camera is more sensitive at detecting DR than the smartphone. At this time, the benefits of the smartphone (connectivity, portability, and reduced cost) are not offset by the lack of sufficient sensitivity for detection of DR in most clinical circumstances. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  6. A comparison of four methods to evaluate the effect of a utility residential air-conditioner load control program on peak electricity use

    Energy Technology Data Exchange (ETDEWEB)

    Newsham, Guy R., E-mail: guy.newsham@nrc-cnrc.gc.ca [National Research Council Canada-Institute for Research in Construction, Building M24, 1200 Montreal Road, Ottawa, Ontario, K1A 0R6 (Canada); Birt, Benjamin J. [National Research Council Canada-Institute for Research in Construction, Building M24, 1200 Montreal Road, Ottawa, Ontario, K1A 0R6 (Canada); Rowlands, Ian H. [University of Waterloo, Ontario (Canada)

    2011-10-15

    We analyzed the peak load reductions due to a residential direct load control program for air-conditioners in southern Ontario in 2008. In this program, participant thermostat setpoints were raised by 2 °C for four hours on five event days. We used hourly, whole-house data for 195 participant households and 268 non-participant households, and four different methods of analysis ranging from simple spreadsheet-based comparisons of average loads on event days, to complex time-series regression. Average peak load reductions were 0.2-0.9 kWh/h per household, or 10-35%. However, there were large differences between event days and across event hours, and in results for the same event day/hour with different analysis methods. There was also a wide range of load reductions between individual households, and only a minority of households contributed to any given event. Policy makers should be aware of how the choice of an analysis method may affect decisions regarding which demand-side management programs to support, and how they might be incentivized. We recommend greater use of time-series methods, although it might take time to become comfortable with their complexity. Further investigation of what type of households contribute most to aggregate load reductions would also help policy makers better target programs. - Highlights: > We analyzed peak load reductions due to residential a/c load control. > We used four methods, ranging from simple comparisons to time-series regression. > Average peak load reductions were 0.2-0.9 kW per household, varying by method. > We recommend a move towards time-series regression for future studies. > A minority of participant households contributed to a given load control event.
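    The simplest of the four methods compared — a spreadsheet-style comparison of average participant loads against non-participant loads during event hours — can be sketched as follows. The household counts come from the abstract; the load values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly loads (kWh/h) for one 4-hour event window:
# rows = households, columns = event hours (illustrative data only).
participants = rng.normal(2.0, 0.8, size=(195, 4))      # controlled a/c
non_participants = rng.normal(2.6, 0.8, size=(268, 4))  # baseline proxy

# The non-participant mean serves as the counterfactual baseline:
baseline = non_participants.mean(axis=0)   # per event hour
observed = participants.mean(axis=0)
reduction = baseline - observed            # kWh/h per household
pct = 100.0 * reduction / baseline

for hour, (r, p) in enumerate(zip(reduction, pct), start=1):
    print(f"event hour {hour}: reduction {r:.2f} kWh/h ({p:.0f}%)")
```

The paper's point is that this simple comparison and a time-series regression can disagree for the same event hour, because the non-participant baseline absorbs weather and behaviour differences that regression can control for explicitly.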

  7. A comparison of multivariate genome-wide association methods

    DEFF Research Database (Denmark)

    Galesloot, Tessel E; Van Steen, Kristel; Kiemeney, Lambertus A L M

    2014-01-01

    Joint association analysis of multiple traits in a genome-wide association study (GWAS), i.e. a multivariate GWAS, offers several advantages over analyzing each trait in a separate GWAS. In this study we directly compared a number of multivariate GWAS methods using simulated data. We focused on six methods that are implemented in the software packages PLINK, SNPTEST, MultiPhen, BIMBAM, PCHAT and TATES, and also compared them to standard univariate GWAS, analysis of the first principal component of the traits, and meta-analysis of univariate results. We simulated data (N = 1000) for three… for scenarios with an opposite sign of genetic and residual correlation. All multivariate analyses resulted in a higher power than univariate analyses, even when only one of the traits was associated with the QTL. Hence, use of multivariate GWAS methods can be recommended, even when genetic correlations between…

  8. Comparison between two braking control methods integrating energy recovery for a two-wheel front driven electric vehicle

    International Nuclear Information System (INIS)

    Itani, Khaled; De Bernardinis, Alexandre; Khatir, Zoubir; Jammal, Ahmad

    2016-01-01

    Highlights: • Comparison between two braking methods for an EV maximizing the energy recovery. • Wheel slip ratio control based on robust sliding mode and ECE R13 control methods. • Regenerative braking control strategy. • Energy recovery of a HESS with respect to road surface type and road condition. - Abstract: This paper presents the comparison between two braking methods for a two-wheel front-driven Electric Vehicle maximizing the energy recovery on the Hybrid Energy Storage System. The first method consists in controlling the wheel slip ratio while braking using a robust sliding mode controller. The second method is based on the ECE R13H constraints for an M1 passenger vehicle. The vehicle model used for simulation is a simplified five-degrees-of-freedom model. It is driven by two 30 kW permanent magnet synchronous motors (PMSM) recovering energy during braking phases. Several simulations for extreme braking conditions were performed and compared on various road surface types using Matlab/Simulink®. For an initial speed of 80 km/h, simulation results demonstrate that the difference in energy recovery efficiency between the two control braking methods is in favour of the ECE-constraints control method and can vary from 3.7% for a high-friction road type to 11.2% for a medium-friction road type. For a low-friction road type, the difference reaches 6.6%, for reasons treated in the paper. Deceleration stability is also discussed in detail.

  9. A Method for the Comparison of Item Selection Rules in Computerized Adaptive Testing

    Science.gov (United States)

    Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose

    2010-01-01

    In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…

  10. A comparison of two methods for assessing awareness of antitobacco television advertisements.

    Science.gov (United States)

    Luxenberg, Michael G; Greenseid, Lija O; Depue, Jacob; Mowery, Andrea; Dreher, Marietta; Larsen, Lindsay S; Schillo, Barbara

    2016-05-01

    This study uses an online survey panel to compare two approaches for assessing ad awareness. The first uses a screenshot of a television ad and the second shows participants a full-length video of the ad. We randomly assigned 1034 Minnesota respondents to view a screenshot or a streaming video from two antitobacco ads. The study used one ad from ClearWay Minnesota's We All Pay the Price campaign, and one from the Centers for Disease Control Tips campaign. The key measure used to assess ad awareness was aided ad recall. Multivariate analyses of recall with cessation behaviour and attitudinal beliefs assessed the validity of these approaches. The respondents who saw the video reported significantly higher recall than those who saw the screenshot. Associations of recall with cessation behaviour and attitudinal beliefs were stronger and in the anticipated direction using the screenshot method. Over 20% of the respondents assigned to the video group could not see the ad. Respondents under 45 years old, those with incomes greater than $35,000, and women were reportedly less able to access the video. The methodology used to assess recall matters. Campaigns may exaggerate the successes or failures of their media campaigns, depending on the approach they employ and how they compare it to other media campaign evaluations. When incorporating streaming video, researchers should consider accessibility and report possible response bias. Researchers should fully define the measures they use, specify any viewing accessibility issues, and make ad comparisons only when using comparable methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  11. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim

    2010-09-17

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.
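    The relative sizes of these polynomial spaces drive the cost comparison: for the same level w, Total Degree (TD) and Hyperbolic Cross (HC) index sets are far smaller than Tensor Product (TP) sets. A quick count, using one common definition of each index set (an illustration, not the paper's exact construction):

```python
from itertools import product
from math import comb

def index_set_size(d, w, rule):
    """Count multi-indices i in {0..w}^d admitted by the given rule."""
    count = 0
    for idx in product(range(w + 1), repeat=d):
        if rule == "TP":                 # max_j idx_j <= w
            ok = True                    # guaranteed by range(w + 1)
        elif rule == "TD":               # sum_j idx_j <= w
            ok = sum(idx) <= w
        elif rule == "HC":               # prod_j (idx_j + 1) <= w + 1
            prod_val = 1
            for i in idx:
                prod_val *= i + 1
            ok = prod_val <= w + 1
        if ok:
            count += 1
    return count

# Growth in d = 5 dimensions for levels w = 1, 2, 3:
for rule in ("TP", "TD", "HC"):
    print(rule, [index_set_size(5, w, rule) for w in (1, 2, 3)])
```

TP grows as (w+1)^d while TD grows as C(w+d, d) and HC more slowly still, which is why sparse constructions remain tractable as the number of random variables increases.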

  12. A Modified Image Comparison Algorithm Using Histogram Features

    OpenAIRE

    Al-Oraiqat, Anas M.; Kostyukova, Natalya S.

    2018-01-01

    This article discusses the problem of color image content comparison. In particular, methods of image content comparison are analysed, the restrictions of color histograms are described, and a modified method of image content comparison is proposed. This method uses color histograms and considers color locations. Testing and analysis of the base and modified algorithms are performed. The modified method shows 97% average precision for a collection containing about 700 images without loss of the adv...
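    The abstract does not reproduce the modified algorithm itself, but the baseline it extends — comparing normalized color histograms, here via histogram intersection — can be sketched as follows (synthetic images, all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def color_histogram(img, bins=8):
    """Normalized per-channel histogram of an RGB image array (H, W, 3)."""
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical histograms."""
    return float(np.minimum(h1, h2).sum())

# A random image, a slightly perturbed near-duplicate, and a darker
# image with a clearly different color distribution:
img_a = rng.integers(0, 256, size=(64, 64, 3))
img_dup = np.clip(img_a + rng.integers(-5, 6, size=img_a.shape), 0, 255)
img_dark = rng.integers(0, 128, size=(64, 64, 3))

sim_dup = histogram_intersection(color_histogram(img_a), color_histogram(img_dup))
sim_dark = histogram_intersection(color_histogram(img_a), color_histogram(img_dark))
print(f"near-duplicate: {sim_dup:.3f}, different image: {sim_dark:.3f}")
```

A pure histogram match like this ignores where colors occur in the image; the modified method's contribution is precisely to add that location information.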

  13. Procedures, analysis, and comparison of groundwater velocity measurement methods for unconfined aquifers

    International Nuclear Information System (INIS)

    Kearl, P.M.; Dexter, J.J.; Price, J.E.

    1988-09-01

    Six methods for determining the average linear velocity of groundwater were tested at two separate field sites. The methods tested include bail tests, pumping tests, wave propagation, tracer tests, the Geoflo Meter®, and borehole dilution. This report presents procedures for performing the field tests and compares the results of each method on the basis of application, cost, and accuracy. Comparisons of the methods used to determine the ground-water velocity at the two field sites show that certain methods yield similar results while other methods measure significantly different values. The literature clearly supports the reliability of pumping tests for determining hydraulic conductivity. Results of this investigation support this finding. Pumping tests, however, are limited because they measure an average hydraulic conductivity which is only representative of the aquifer within the radius of influence. Bail tests are easy and inexpensive to perform. If the tests are conducted on the majority of wells at a hazardous waste site, then the heterogeneity of the site aquifer can be assessed. However, comparisons of bail-test results with pumping-test and tracer-test results indicate that the accuracy of the method is questionable. Consequently, the principal recommendation of this investigation, based on cost and reliability of the ground-water velocity measurement methods, is that bail tests should be performed on all or a majority of monitoring wells at a site to determine the 'relative' hydraulic conductivities.

  14. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR), particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the numbers of signals generated from the conventional disproportionality measures before and after the application of the rFDR adjustment method were compared. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Of these, one resulted in a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
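    The rFDR estimator itself is not given in the abstract; for orientation, the classic Benjamini-Hochberg step-up procedure — the standard FDR-control baseline that robust variants build on — can be sketched as follows (all counts and p-values hypothetical):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries at FDR level q.

    Classic step-up procedure; the paper's rFDR estimator is a robust
    variant and is not reproduced here.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# Hypothetical p-values for 311 drug-event pairs: mostly null,
# a handful of strong true signals (illustrative numbers only):
rng = np.random.default_rng(7)
pvals = np.concatenate([rng.uniform(0, 1, 300), rng.uniform(0, 1e-4, 11)])
print(benjamini_hochberg(pvals).sum(), "signals retained after FDR control")
```

Mostly-uniform null p-values are discarded while the strong signals survive, which is the behaviour the study probes when it asks whether such corrections help in a small, partly-known set of drug-outcome pairs.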

  15. A method comparison of photovoice and content analysis: research examining challenges and supports of family caregivers.

    Science.gov (United States)

    Faucher, Mary Ann; Garner, Shelby L

    2015-11-01

    The purpose of this manuscript is to compare the methods and thematic representations of the challenges and supports of family caregivers identified with photovoice methodology contrasted with content analysis, a more traditional qualitative approach. Results from a photovoice study utilizing a participatory action research framework were compared to an analysis of the audio transcripts from that study utilizing content analysis methodology. Major similarities between the results are identified, with some notable differences. Content analysis provides a more in-depth and abstract elucidation of the nature of the challenges and supports of the family caregiver. The comparison provides evidence to support the trustworthiness of photovoice methodology, with limitations identified. The enhanced elaboration of themes and categories with content analysis may have some advantages relevant to the utilization of this knowledge by health care professionals. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. A Comparison of Card-sorting Analysis Methods

    DEFF Research Database (Denmark)

    Nawaz, Ather

    2012-01-01

    This study investigates how the choice of analysis method for card sorting studies affects the suggested information structure for websites. In the card sorting technique, a variety of methods are used to analyse the resulting data. The analysis of card sorting data helps user experience (UX) designers to discover patterns in how users make classifications and thus to develop an optimal, user-centred website structure. During analysis, the recurrence of patterns of classification between users influences the resulting website structure. However, the algorithm used in the analysis influences the recurrent patterns found and thus has consequences for the resulting website design. This paper draws attention to the choice of card sorting analysis techniques and shows how it impacts the results. The research focuses on how the same card sorting data can lead to different website structures.

  17. Comparison of Witczak NCHRP 1-40D & Hirsh dynamic modulus models based on different binder characterization methods: a case study

    Directory of Open Access Journals (Sweden)

    Khattab Ahmed M.

    2017-01-01

    Full Text Available The Pavement ME Design method considers the hot mix asphalt (HMA) dynamic modulus (E*) as the main mechanistic property that affects pavement performance. For HMA, E* can be determined directly by laboratory testing (level 1) or estimated using predictive equations (levels 2 and 3). Pavement ME Design introduced the NCHRP 1-40D model as the latest model for predicting E* when level 2 or 3 HMA inputs are used. This study focused on utilizing laboratory-measured E* data to compare the NCHRP 1-40D model with the Hirsh model. This comparison included the evaluation of the binder characterization level as per Pavement ME Design and its influence on the performance of these models. E* tests were conducted in the laboratory on 25 local mixes representing different road construction projects in the Kingdom of Saudi Arabia. The main tests for the mix binders were the Dynamic Shear Rheometer (DSR) and the Brookfield Rotational Viscometer (RV). Results showed that both models with level 3 binder data produced very similar accuracy. The highest accuracy and lowest bias for both models occurred with level 3 binder data. Finally, the accuracy of prediction and level of bias for both models were found to be a function of the binder input level.

  18. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    … integrated approach. Due to the high number of variables in data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related. Variable selection (or removal of irrelevant … different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility of biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior … The aim of the metabolomics study was to investigate the metabolic profile of pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when …

  19. Comparison of several methods of sires evaluation for total milk ...

    African Journals Online (AJOL)

    2015-01-24

    Comparison of several methods of sires evaluation for total milk yield in a herd of Holstein cows in Yemen. F.R. Al-Samarai1,*, Y.K. Abdulrahman1, F.A. Mohammed2, F.H. Al-Zaidi2 and N.N. Al-Anbari3. 1Department of Veterinary Public Health/College of Veterinary Medicine, University of Baghdad, Iraq.

  20. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique consisted of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
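    The epidemiologic interpretation the SL method was modified to reproduce is the odds ratio from logistic regression. A minimal sketch on simulated data — the exposure setup and all numbers are hypothetical, not the paper's simulation — showing that for a single binary exposure the fitted LR odds ratio matches the 2×2-table odds ratio:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Simulated case-control-style data with one binary exposure whose
# true odds ratio is exp(0.8):
n = 2000
exposure = rng.integers(0, 2, n).astype(bool)
logit = -1.0 + 0.8 * exposure
outcome = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

def neg_log_likelihood(beta, x, y):
    """Negative log-likelihood of a logistic model: y ~ intercept + x."""
    eta = beta[0] + beta[1] * x
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

fit = minimize(neg_log_likelihood, x0=np.zeros(2),
               args=(exposure.astype(float), outcome.astype(float)))
or_lr = np.exp(fit.x[1])

# The same quantity straight from the 2x2 table:
a = np.sum(exposure & outcome);  b = np.sum(exposure & ~outcome)
c = np.sum(~exposure & outcome); d = np.sum(~exposure & ~outcome)
or_table = (a * d) / (b * c)
print(f"LR odds ratio {or_lr:.2f}, table odds ratio {or_table:.2f}")
```

For a single binary covariate the two estimates coincide analytically; a kernel-based SL classifier has no such coefficient to exponentiate, which is exactly the interpretability gap the paper's modification addresses.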

  1. Comparison of non-parametric methods for ungrouping coarsely aggregated data

    DEFF Research Database (Denmark)

    Rizzi, Silvia; Thinggaard, Mikael; Engholm, Gerda

    2016-01-01

    group at the highest ages. When histogram intervals are too coarse, information is lost and comparison between histograms with different boundaries is arduous. In these cases it is useful to estimate detailed distributions from grouped data. Methods From an extensive literature search we identify five...

  2. Comparison of three methods of restoration of cosmic radio source profiles

    International Nuclear Information System (INIS)

    Malov, I.F.; Frolov, V.A.

    1986-01-01

    The effectiveness of three methods for restoring the radio brightness distribution over a source - the main solution, fitting, and the minimal-phase method (MPM) - was compared on the basis of data on the modulus and phase of the luminosity function (LF) of 15 cosmic radio sources. It is concluded that the MPM can successfully compete with the other known methods. Its obvious advantage in comparison with the fitting method is that it gives an unambiguous and direct restoration; its main advantage as compared with the main solution is the feasibility of restoration in the absence of data on the LF phase, which reduces restoration errors

  3. A comparison of moving object detection methods for real-time moving object detection

    Science.gov (United States)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one which works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time detection, and most of those that do provide real-time operation are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on an evaluation of these four methods using two different cameras and two different scenes. The methods were implemented in MATLAB and the results are compared based on completeness of detected objects, noise, sensitivity to light change, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
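The simplest of the four method families compared in the record is background subtraction. A minimal sketch using a running-average background model in NumPy; the update rate, threshold and synthetic frames are illustrative choices, not parameters from the paper.

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=30):
    """Running-average background subtraction.
    frames: iterable of 2-D uint8 grayscale arrays."""
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()
        # Foreground = pixels far from the slowly updated background model.
        mask = np.abs(f - background) > thresh
        background = (1 - alpha) * background + alpha * f
        masks.append(mask)
    return masks

# Tiny synthetic example: a bright 2x2 "object" appears in a dark scene.
static = np.zeros((8, 8), dtype=np.uint8)
moving = static.copy()
moving[2:4, 2:4] = 255
masks = detect_moving([static, static, moving])
print(masks[2].sum())  # 4 foreground pixels
```

The Gaussian mixture model generalizes this by keeping several Gaussians per pixel instead of one running mean, which is what makes it more robust to light changes at a higher processing cost.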

  4. Comparison of 3 in vivo methods for assessment of alcohol-based hand rubs.

    Science.gov (United States)

    Edmonds-Wilson, Sarah; Campbell, Esther; Fox, Kyle; Macinga, David

    2015-05-01

    Alcohol-based hand rubs (ABHRs) are the primary method of hand hygiene in health-care settings. ICPs are increasingly assessing ABHR product efficacy data as improved products and test methods are developed. As a result, ICPs need better tools and recommendations for how to assess and compare ABHRs. Two ABHRs (70% ethanol) were tested according to 3 in vivo methods approved by ASTM International: E1174, E2755, and E2784. Log10 reductions were measured after a single test product use and after 10 consecutive uses at an application volume of 2 mL. The test method used had a significant influence on ABHR efficacy; however, in this study the test product (gel or foam) did not significantly influence efficacy. In addition, for all test methods, log10 reductions obtained after a single application were not predictive of results after 10 applications. The choice of test method can significantly influence efficacy results. Therefore, when assessing antimicrobial efficacy data of hand hygiene products, ICPs should pay close attention to the test method used and ensure that product comparisons are made head-to-head in the same study using the same test methodology. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  5. A real case study on transportation scenario comparison

    Directory of Open Access Journals (Sweden)

    Tsoukiás A.

    2002-01-01

    This paper presents a real case study dealing with the comparison of transport scenarios. The study was conducted within a larger project concerning the establishment of maritime traffic policy in Greece. The paper presents the problem situation and an appropriate problem formulation. Moreover, a detailed version of the evaluation model is presented. The model consists of a complex hierarchy of evaluation models, enabling the multiple dimensions and points of view of the actors involved in the evaluations to be taken into account.

  6. Transgingival Probing and Ultrasonographic Methods for Determination of Gingival Thickness - A Comparative Study

    Directory of Open Access Journals (Sweden)

    Rakhi Issrani

    2013-01-01

    Results: It was observed that the younger age group had significantly thicker gingiva than the older age group. The gingiva was found to be thinner in females than in males, and in the mandibular arch than in the maxilla. Within the limits of the present study it was demonstrated that the thickness of gingiva varies with the tooth site, i.e., midbuccally and in the interdental papillary region, and also with the morphology of the crown. Conclusions: In the present study it was concluded that GTH varies according to site, age, gender, tooth and dental arch. In comparison to the TGP method, the USG method assesses GTH more accurately, rapidly and atraumatically.

  7. An investigation of methods for free-field comparison calibration of measurement microphones

    DEFF Research Database (Denmark)

    Barrera-Figueroa, Salvador; Moreno Pescador, Guillermo; Jacobsen, Finn

    2010-01-01

    Free-field comparison calibration of measurement microphones requires that a calibrated reference microphone and a test microphone are exposed to the same sound pressure in a free field. The output voltages of the microphones can be measured either sequentially or simultaneously. The sequential...... method requires the sound field to have good temporal stability. The simultaneous method requires instead that the sound pressure is the same in the positions where the microphones are placed. In this paper the results of the application of the two methods are compared. A third combined method...

  8. A Comparison Study of Different Kernel Functions for SVM-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework based on Support Vector Machines (SVM) is developed for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions, which map the input space to a higher-dimensional Hilbert space, are employed and compared in this study. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison with a linear kernel function, and by up to 1% in comparison with a 3rd-degree polynomial kernel function.
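The three kernels compared in the record have standard closed forms. A minimal NumPy sketch computing the corresponding Gram matrices; the hyperparameters (degree, coef0, gamma) and the toy feature vectors are illustrative, not values from the paper.

```python
import numpy as np

def linear_kernel(X):
    return X @ X.T

def poly_kernel(X, degree=3, coef0=1.0):
    return (X @ X.T + coef0) ** degree

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared Euclidean distances, then the Gaussian envelope.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * (X @ X.T)
    return np.exp(-gamma * d2)

# Rows are feature vectors, e.g. temporal polarimetric alpha features.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
K = rbf_kernel(X)
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))  # True True
```

All three produce symmetric positive semi-definite Gram matrices, which is what the SVM optimizer requires; the RBF kernel additionally normalizes each sample's self-similarity to 1.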

  9. Comparison of some effects of modification of a polylactide surface layer by chemical, plasma, and laser methods

    Energy Technology Data Exchange (ETDEWEB)

    Moraczewski, Krzysztof, E-mail: kmm@ukw.edu.pl [Department of Materials Engineering, Kazimierz Wielki University, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Rytlewski, Piotr [Department of Materials Engineering, Kazimierz Wielki University, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Malinowski, Rafał [Institute for Engineering of Polymer Materials and Dyes, ul. M. Skłodowskiej-Curie 55, 87-100 Toruń (Poland); Żenkiewicz, Marian [Department of Materials Engineering, Kazimierz Wielki University, ul. Chodkiewicza 30, 85-064 Bydgoszcz (Poland)]

    2015-08-15

    Highlights: • We modified the polylactide surface layer with chemical, plasma or laser methods. • We tested selected properties and the surface structure of the modified samples. • We found that the plasma treatment appears to be the most beneficial. - Abstract: The article presents the results of studies and a comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in an air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with a fluence of 60 mJ/cm² was applied. Polylactide samples were examined using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of the surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of the chemical composition of the PLA surface layer were performed as well. Based on these examinations, it was found that the best modification results are obtained using the plasma method.

  10. Comparison between two methods of methanol production from carbon dioxide

    International Nuclear Information System (INIS)

    Anicic, B.; Trop, P.; Goricanec, D.

    2014-01-01

    Over recent years there has been a significant increase in the amount of technology contributing to lower emissions of carbon dioxide. The aim of this paper is to provide a comparison between two technologies for methanol production, both of which use carbon dioxide and hydrogen as the initial raw materials. The first methanol production technology is the direct synthesis of methanol from CO2; the second has two steps: during the first step CO2 is converted into CO via the RWGS (reverse water-gas shift) reaction, and methanol is produced during the second step. The two methods were compared on economic and energy-efficiency bases. The price of electricity had the greatest impact from the economic point of view, as hydrogen is produced via the electrolysis of water. Furthermore, both the cost of CO2 capture and the amount of carbon taxes were taken into consideration. The energy-efficiency comparison is based on cold gas efficiency, while economic feasibility is compared using net present value. Even though the two processes are similar, it was shown that direct methanol synthesis has higher energy and economic efficiency. - Highlights: • We compared two methods for methanol production. • Process schemes for both direct synthesis and two-step synthesis are described. • Direct synthesis has higher economic and energy efficiency
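The economic comparison in the record rests on net present value (NPV). A minimal sketch of the standard NPV calculation; the cash flows and discount rate below are hypothetical, not figures from the paper.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 investment
    (usually negative), later entries are annual net cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical plant: 10 MEUR investment, 2 MEUR net revenue for 8 years.
flows = [-10.0] + [2.0] * 8
print(round(npv(0.08, flows), 2))  # 1.49 at an 8% discount rate
```

A positive NPV at the chosen discount rate marks the process as economically feasible, which is how the two synthesis routes are ranked in the record.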

  11. A study on asymptomatic bacteriuria in pregnancy: prevalence, etiology and comparison of screening methods

    OpenAIRE

    Kheya Mukherjee; Saroj Golia; Vasudha CL; Babita; Debojyoti Bhattacharjee; Goutam Chakroborti

    2014-01-01

    Background: Asymptomatic bacteriuria is common in women, with a prevalence of 4-7% in pregnancy. The traditional reference test for bacteriuria is quantitative culture of urine, which is relatively expensive, time consuming and laborious. The aim of this study was to determine the prevalence of asymptomatic bacteriuria in pregnancy, to identify the pathogens and their antibiotic susceptibility patterns, and to devise a single or combined rapid screening method as an acceptable alternative to urine culture. ...
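Rapid screening tests like those studied in the record are judged against urine culture as the reference standard. A minimal sketch of the usual screening-test summary measures; the 2x2 counts below are hypothetical, not data from the study.

```python
def screening_summary(tp, fp, fn, tn):
    """Summary measures of a screening test vs. a reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical table: rapid screen vs. culture in 1000 pregnant women.
sens, spec, ppv, npv = screening_summary(tp=45, fp=30, fn=5, tn=920)
print(round(sens, 2), round(spec, 2))  # 0.9 0.97
```

For a replacement screening test, high sensitivity matters most: a missed bacteriuria case (false negative) carries the clinical risk, while false positives only trigger a confirmatory culture.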

  12. Comments on the paper 'Optical properties of bovine muscle tissue in vitro; a comparison of methods'

    International Nuclear Information System (INIS)

    Marchesini, R.

    1999-01-01

    In reply to R. Marchesini's comments that optical values derived by himself and other authors were incorrectly cited in the paper entitled 'Optical properties of bovine muscle tissue in vitro; a comparison of methods', the author, J.R. Zijp, apologizes for this mistake and explains the reasons for the misinterpretation. Letter-to-the-editor

  13. Comparison of the surface friction model with the time-dependent Hartree-Fock method

    International Nuclear Information System (INIS)

    Froebrich, P.

    1984-01-01

    A comparison is made between the classical phenomenological surface friction model and a time-dependent Hartree-Fock study by Dhar for the system ²⁰⁸Pb + ⁷⁴Ge at E_lab(Pb) = 1600 MeV. The general trends for energy loss, mean values of charge and mass, interaction times and energy-angle correlations turn out to be fairly similar in both methods. However, contrary to Dhar, the events close to capture are interpreted as normal deep-inelastic ones, i.e., not as fast fission processes

  14. A Comparison Study on the Assessment of Ride Comfort for LRT Passengers

    Science.gov (United States)

    Tengku Munawir, Tengku Imran; Abqari Abu Samah, Ahmad; Afiq Akmal Rosle, Muhammad; Azlis-Sani, Jalil; Hasnan, Khalid; Sabri, S. M.; Ismail, S. M.; Yunos, Muhammad Nur Annuar Mohd; Yen Bin, Teo

    2017-08-01

    Ride comfort in railway transportation is quite complex, as it depends on various dynamic performance criteria as well as on subjective observations from the train passengers. Vibration discomfort arising from elements such as vehicle condition, track condition and operating condition can lead to poor ride comfort, yet there is no universally applicable standard for analysing ride comfort; several factors are involved, including local, vehicle and track conditions. In this work, the level of ride comfort experienced by passengers of the Adtranz-Walker light rapid transit (LRT) vehicles on the Ampang line was analysed. A comparison was made between two methods: BS EN 12299 (2009) and Sperling's Ride Index equation. The BS EN 12299 standard is used to measure and evaluate the ride comfort of seated (Nva) and standing (Nvd) train passengers on three different routes. Sperling's ride comfort equation is then used for validation and comparison of the obtained data. The results indicate a higher level of vibration in the vertical axis, which dominates the overall result. The standing position shows higher vibration exposure on all three tested routes. Comparison of the ride comfort assessments of passengers in the sitting and standing positions for both methods indicates that all the track sections exceed the 'pronounced but not unpleasant (medium)' limit range, except for the seating position at track section AU, which did not exceed the limit and stayed in the comfortable zone. The highest discomfort levels for the seating position are 3.34 m/s² for Nva and 2.63 for the Sperling index, respectively, at route C uptrack, from Chan Sow Lin station to Sri Petaling station. Meanwhile, the highest discomfort levels for standing are 3.80 m/s² for Nvd and 2.88 for Wz, respectively, at the uptrack section from Sri Petaling station to Chan Sow Lin station. Thus, the highest

  15. Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.

    Science.gov (United States)

    Gremba, Allison; Weinberg, Seth M

    2018-05-09

    We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) to capture digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods) suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
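One of the two agreement statistics reported in the record, the technical error of measurement (TEM), has a simple closed form for paired repeat measurements. A minimal sketch; the digit-length values (in mm) are hypothetical, not data from the study.

```python
import math

def tem(pairs):
    """Technical error of measurement for paired repeat measurements:
    TEM = sqrt( sum(d_i^2) / (2n) ), d_i = difference within pair i."""
    n = len(pairs)
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

# Hypothetical fourth-digit lengths (mm) measured twice by one observer.
trial1 = [68.2, 70.1, 65.4, 72.3]
trial2 = [68.5, 69.8, 65.9, 72.0]
print(round(tem(list(zip(trial1, trial2))), 3))  # 0.255
```

TEM is expressed in the measurement's own units (here mm), which is why the record can state absolute thresholds such as "TEM values below 1 mm" alongside the unitless ICC.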

  16. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2008-01-01

    In general, there are two ways to calculate effective doses. The first is to use deterministic methods such as the point-kernel method, which is implemented in Visiplan or Microshield. These calculations are very fast, but in terms of result precision they are not very suitable for complex geometries with shielding composed of more than one material. Nevertheless, these programs are sufficient for ALARA optimisation calculations. On the other side, there are Monte Carlo methods. This way of calculation is quite precise in comparison with reality, but the calculation time is usually very long. Point-kernel programs have one disadvantage: usually there is an option to choose a buildup factor (BUF) for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas for multilayer BUF approximation have been proposed. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled - a point source behind single- and double-slab shielding. For the buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it gives lower deviations in comparison with Taylor fitting. (authors)
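The Geometric Progression (GP) buildup-factor form mentioned in the record follows the ANSI/ANS-6.4.3 parameterization. A minimal sketch; the coefficients b, c, a, Xk, d are energy- and material-dependent fitted values, and the numbers below are purely illustrative, not tabulated data.

```python
import math

def gp_buildup(x, b, c, a, Xk, d):
    """GP buildup factor; x is the shield thickness in mean free paths."""
    K = c * x**a + d * (math.tanh(x / Xk - 2) - math.tanh(-2)) / (1 - math.tanh(-2))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x
    return 1.0 + (b - 1.0) * (K**x - 1.0) / (K - 1.0)

# By construction b is the buildup factor at 1 mean free path.
print(gp_buildup(1.0, b=2.0, c=1.3, a=-0.1, Xk=14.0, d=0.05))  # 2.0
```

For a single material this reproduces the tabulated buildup well; the multilayer problem discussed in the record arises because no single (b, c, a, Xk, d) set describes a stack of different materials, motivating the approximation formulas compared against MCNP.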

  17. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2009-01-01

    In general, there are two ways to calculate effective doses. The first is to use deterministic methods such as the point-kernel method, which is implemented in Visiplan or Microshield. These calculations are very fast, but in terms of result precision they are not very suitable for complex geometries with shielding composed of more than one material. Nevertheless, these programs are sufficient for ALARA optimisation calculations. On the other side, there are Monte Carlo methods. This way of calculation is quite precise in comparison with reality, but the calculation time is usually very long. Point-kernel programs have one disadvantage: usually there is an option to choose a buildup factor (BUF) for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas for multilayer BUF approximation have been proposed. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled - a point source behind single- and double-slab shielding. For the buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it gives lower deviations in comparison with Taylor fitting. (authors)

  18. A comparison of two methods of logMAR visual acuity data scoring for statistical analysis

    Directory of Open Access Journals (Sweden)

    O. A. Oduntan

    2009-12-01

    The purpose of this study was to compare two methods of logMAR visual acuity (VA) scoring. The two methods are referred to as letter scoring (method 1) and line scoring (method 2). The two methods were applied to VA data obtained from one hundred and forty (N=140) children with oculocutaneous albinism. Descriptive, correlation and regression statistics were then used to analyze the data. Also, where applicable, the Bland and Altman analysis was used to compare sets of data from the two methods. The right- and left-eye data were included in the study, but because the findings were similar in both eyes, only the results for the right eyes are presented in this paper. For method 1, the mean unaided VA (mean UAOD1) = 0.39 ± 0.15 logMAR and the mean aided VA (mean ADOD1) = 0.50 ± 0.16 logMAR. For method 2, the mean unaided VA (mean UAOD2) = 0.71 ± 0.15 logMAR, while the mean aided VA (mean ADOD2) = 0.60 ± 0.16 logMAR. The range and mean values of the improvement in VA for both methods were the same. The unaided VAs (UAOD1, UAOD2) and aided VAs (ADOD1, ADOD2) for methods 1 and 2 correlated negatively (unaided, r = -1, p<0.05; aided, r = -1, p<0.05). The improvements in VA (differences between the unaided and aided VA values, DOD1 and DOD2) were positively correlated (r = +1, p<0.05). The Bland and Altman analyses showed that the VA improvements (unaided - aided VA values, DOD1 and DOD2) were similar for the two methods. Findings indicated that only the improvement in VA can be compared when different scoring methods are used. Therefore the scoring method used in any VA research project should be stated in the publication so that appropriate comparisons can be made by other researchers.
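The two scoring conventions compared in the record can be sketched under the usual logMAR chart assumptions (5 letters per line, 0.1 logMAR per line, hence 0.02 logMAR credit per letter); the chart layout and criteria here are assumptions for illustration, not taken from the paper.

```python
def letter_score(last_full_line_logmar, extra_letters):
    """Method 1 style (letter scoring): credit every extra letter read
    beyond the last fully read line with 0.02 logMAR."""
    return last_full_line_logmar - 0.02 * extra_letters

def line_score(last_criterion_line_logmar):
    """Method 2 style (line scoring): score the lowest line on which a
    criterion (e.g. a majority of letters) was read; no per-letter credit."""
    return last_criterion_line_logmar

# Reader completes the 0.4 line and reads 3 letters of the 0.3 line:
print(round(letter_score(0.4, 3), 2), line_score(0.3))  # 0.34 0.3
```

Because the two conventions anchor the score differently, absolute VA values diverge (as the record's 0.39 vs 0.71 logMAR means show), while the unaided-minus-aided improvement cancels the offset, which is why only the improvement is comparable across methods.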

  19. Reduction in the Number of Comparisons Required to Create the Matrix of Expert Judgment in the COMET Method

    Directory of Open Access Journals (Sweden)

    Sałabun Wojciech

    2014-09-01

    Multi-criteria decision-making (MCDM) methods are associated with the ranking of alternatives based on expert judgments made using a number of criteria. In the MCDM field, the distance-based approach is one popular way of obtaining a final ranking. One of the newest MCDM methods using the distance-based approach is the Characteristic Objects Method (COMET). In this method, the preference of each alternative is obtained on the basis of its distance from the nearest characteristic objects and their values. For this purpose, the domain and the sets of fuzzy numbers for all the considered criteria are determined. The characteristic objects are obtained as the combinations of the crisp values of all the fuzzy numbers. The preference values of all the characteristic objects are determined using the tournament method and the principle of indifference. Finally, the fuzzy model is constructed and used to calculate the preference values of the alternatives. In this way, a multi-criteria model is created that is free of the rank reversal phenomenon. In this approach, a matrix of expert judgments must be created; to this end, an expert has to compare all the characteristic objects with each other, and the number of necessary comparisons grows quadratically with the number of objects. This study proposes an improvement of the COMET method that uses the transitivity of pairwise comparisons. Three numerical examples are used to illustrate the efficiency of the proposed improvement with respect to results from the original approach. The proposed improvement significantly reduces the number of comparisons necessary to create the matrix of expert judgment.
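The combinatorial point in the record can be illustrated directly: a full matrix of expert judgments needs n(n-1)/2 pairwise comparisons of characteristic objects, while exploiting transitivity (A > B and B > C implies A > C) lets many of them be inferred. The binary-insertion scheme below is a generic sketch of that idea, not the exact COMET procedure.

```python
import math

def full_comparisons(n):
    """Comparisons to fill the complete pairwise judgment matrix."""
    return n * (n - 1) // 2

def comparisons_with_transitivity(n):
    """Worst-case expert queries to rank n objects by inserting each new
    object into an already-sorted list with binary search; all remaining
    matrix entries follow from transitivity."""
    return sum(math.ceil(math.log2(k)) for k in range(2, n + 1))

print(full_comparisons(10), comparisons_with_transitivity(10))  # 45 25
```

The gap widens quickly: the full matrix grows as O(n²) while transitivity-based elicitation grows as O(n log n), which is the source of the reduction the record reports.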

  20. A Comparison of Distillery Stillage Disposal Methods

    Directory of Open Access Journals (Sweden)

    V. Sajbrt

    2010-01-01

    This paper compares the main stillage disposal methods from the points of view of technology, economics and energetics. Attention is paid to the disposal of both the solid and the liquid phase. Specifically, the following methods are considered: (a) livestock feeding, (b) combustion of granulated stillage, (c) fertilizer production, (d) anaerobic digestion with biogas production, and (e) chemical pretreatment with subsequent secondary treatment. Other disposal techniques mentioned in the literature (electro-Fenton reaction, electrocoagulation and reverse osmosis) have not been considered, due to their high costs and technological requirements. Energy and economic calculations were carried out for a planned production of 120 m³ of stillage per day in a given distillery. Only the specific treatment operating costs (per 1 m³ of stillage) were compared, including operational costs for energy, transport and chemicals. These values were determined as of January 31st, 2009. The resulting sequence of cost effectiveness was: 1. chemical pretreatment, 2. combustion of granulated stillage, 3. transportation of stillage to a biogas station, 4. fertilizer production, 5. livestock feeding. This study found that chemical pretreatment of stillage with secondary treatment (a method developed at the Department of Process Engineering, CTU) was more suitable than the other methods, and it also has some important technical advantages. Using this method, the total operating costs are approximately 1 150 €/day, i.e., about 9.5 €/m³ of stillage. The price of chemicals is the most important item in these costs, representing about 85% of the total operating costs.

  1. From a tree to a stand in Finnish boreal forests - biomass estimation and comparison of methods

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chunjiang

    2009-07-01

    There is an increasing need to compare the results obtained with different methods of estimating tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) stand (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)-Scots pine stand (Mature-Stand) located in southern Finland (61°50′ N, 24°22′ E). In particular, a comparison of the results of different estimation methods was conducted to assess the reliability and suitability of their application. For the trees in Mature-Stand, the annual stem biomass increment fluctuated following a sigmoid equation, and the fitted curves reached a maximum level (from about 1 kg yr⁻¹ for understorey spruce to 7 kg yr⁻¹ for dominant pine) when the trees were 100 years old. Tree biomass was estimated to be about 70 Mg ha⁻¹ in Young-Stand and about 220 Mg ha⁻¹ in Mature-Stand. In the region surrounding the study stands (58.00-62.13° N, 14-34° E, ≤300 m a.s.l.), tree biomass accumulation in Norway spruce and Scots pine stands followed a sigmoid equation with stand age, with a maximum of 230 Mg ha⁻¹ at the age of 140 years. In Mature-Stand, lichen biomass on the trees was 1.63 Mg ha⁻¹, with more than half of the biomass occurring on dead branches, and the standing crop of litter lichen on the ground was about 0.09 Mg ha⁻¹. There were substantial differences among the results estimated by the different methods in these stands. These results imply that a possible estimation error should be taken into account when calculating the tree biomass of a stand with an indirect approach. (orig.)
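The sigmoid biomass-age relationship described in the record can be sketched with a logistic form B(t) = Bmax / (1 + exp(-k(t - t0))). Bmax = 230 Mg/ha comes from the abstract; the midpoint t0 and rate k below are illustrative choices made so that the curve saturates near age 140, not fitted parameters from the study.

```python
import math

def biomass(age, b_max=230.0, t0=60.0, k=0.06):
    """Logistic stand-biomass curve (Mg/ha) as a function of stand age (yr)."""
    return b_max / (1.0 + math.exp(-k * (age - t0)))

for age in (30, 130, 140):
    print(age, round(biomass(age), 1))
```

The same functional family is what makes the comparison meaningful: once a sigmoid is fitted, stands of different ages (here 30 vs 130 years) can be placed on one accumulation curve and their estimates cross-checked across methods.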

  2. Formation of oiliness and sebum output--comparison of a lipid-absorbant and occlusive-tape method with photometry.

    Science.gov (United States)

    Serup, J

    1991-07-01

    Sebum output and the development of oiliness were studied in 24 acne-prone volunteers. Sebutape and photometric measurement with the Sebumeter were compared. Tapes were taken 1, 2 and 3 h after degreasing; they were scored, and their light transmission was measured with a special densitometric device. Sebumeter recordings were performed after 3 h. Densitometric evaluation of the tapes was accurate, with a high correlation to scoring. Sebutape and Sebumeter assessments correlated; however, in some individuals the tape method overestimated the sebum output. Right-left comparison indicated that the tape method was less reproducible, particularly after longer sampling periods. It is suggested that Sebutape, owing to water occlusion and temperature insulation during the sampling period, interferes with sebum droplet formation and spreading. Such systematic interference may nevertheless be advantageous, since sweating and heat are important clinical prerequisites for the formation of oiliness. Thus, Sebutape can be regarded as a specialized method for determining 'oiliness', the very last phase of sebum output, in which sebum droplets spread over the skin surface. The tape is therefore not directly comparable to other methods for determining sebum output from the follicular reservoir.

  3. Comparison of methods used to estimate coral cover in the Hawaiian Islands

    Directory of Open Access Journals (Sweden)

    Paul L. Jokiel

    2015-05-01

    Nine coral survey methods were compared at ten sites in various reef habitats with different levels of coral cover in Kāne‘ohe Bay, O’ahu, Hawaiʻi. Mean estimated coverage at the different sites ranged from less than 10% cover to greater than 90% cover. The methods evaluated include line transects, various visual and photographic belt transects, video transects and visual estimates. At each site 25 m transect lines were laid out and secured. Observers skilled in each method measured coral cover at each site. The time required to run each transect, time required to process data and time to record the results were documented. Cost of hardware and software for each method was also tabulated. Results of this investigation indicate that all of the methods used provide a good first estimate of coral cover on a reef. However, there were differences between the methods in detecting the number of coral species. For example, the classic “quadrat” method allows close examination of small and cryptic coral species that are not detected by other methods such as the “towboard” surveys. The time, effort and cost involved with each method varied widely, and the suitability of each method for answering particular research questions in various environments was evaluated. Results of this study support the finding of three other comparison method studies conducted at various geographic locations throughout the world. Thus, coral cover measured by different methods can be legitimately combined or compared in many situations. The success of a recent modeling effort based on coral cover data consisting of observations taken in Hawai‘i using the different methods supports this conclusion.

  4. Comparison of methods used to estimate coral cover in the Hawaiian Islands.

    Science.gov (United States)

    Jokiel, Paul L; Rodgers, Kuʻulei S; Brown, Eric K; Kenyon, Jean C; Aeby, Greta; Smith, William R; Farrell, Fred

    2015-01-01

    Nine coral survey methods were compared at ten sites in various reef habitats with different levels of coral cover in Kāne'ohe Bay, O'ahu, Hawai'i. Mean estimated coverage at the different sites ranged from less than 10% cover to greater than 90% cover. The methods evaluated include line transects, various visual and photographic belt transects, video transects and visual estimates. At each site 25 m transect lines were laid out and secured. Observers skilled in each method measured coral cover at each site. The time required to run each transect, time required to process data and time to record the results were documented. Cost of hardware and software for each method was also tabulated. Results of this investigation indicate that all of the methods used provide a good first estimate of coral cover on a reef. However, there were differences between the methods in detecting the number of coral species. For example, the classic "quadrat" method allows close examination of small and cryptic coral species that are not detected by other methods such as the "towboard" surveys. The time, effort and cost involved with each method varied widely, and the suitability of each method for answering particular research questions in various environments was evaluated. Results of this study support the finding of three other comparison method studies conducted at various geographic locations throughout the world. Thus, coral cover measured by different methods can be legitimately combined or compared in many situations. The success of a recent modeling effort based on coral cover data consisting of observations taken in Hawai'i using the different methods supports this conclusion.
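The transect-based methods compared above all reduce to estimating the fraction of sampled points or segments occupied by coral. As a purely illustrative sketch (not code from the study), a point-intercept estimate of percent cover along a transect can be computed as:

```python
# Illustrative sketch: percent coral cover from point-intercept records
# along a transect line. The data and sampling interval are invented.

def percent_cover(observations, target="coral"):
    """Fraction of sample points whose substrate matches `target`, as a percentage."""
    if not observations:
        raise ValueError("no observations")
    hits = sum(1 for obs in observations if obs == target)
    return 100.0 * hits / len(observations)

# e.g., a 25 m transect sampled every 0.5 m -> 50 points
points = ["coral"] * 12 + ["sand"] * 30 + ["algae"] * 8
print(percent_cover(points))  # -> 24.0
```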

  5. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated into various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  6. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated into various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
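A common building block of the multi-particle space charge methods in such codes is depositing macro-particle charge onto a grid before solving for the fields. As a hedged, generic sketch (not the algorithm of ORBIT++, ORBIT or SIMPSONS specifically), a 2D cloud-in-cell (CIC) deposition step looks like:

```python
# Minimal 2D cloud-in-cell (CIC) charge deposition, a generic PIC-style
# gridding step. Grid spacing, particle positions and unit charges are
# invented for illustration.

def deposit_cic(particles, nx, ny, dx, dy):
    """Spread each unit charge over the four surrounding grid nodes."""
    rho = [[0.0] * ny for _ in range(nx)]
    for x, y in particles:
        i, j = int(x / dx), int(y / dy)
        fx, fy = x / dx - i, y / dy - j      # fractional position in the cell
        rho[i][j]         += (1 - fx) * (1 - fy)
        rho[i + 1][j]     += fx * (1 - fy)
        rho[i][j + 1]     += (1 - fx) * fy
        rho[i + 1][j + 1] += fx * fy
    return rho

rho = deposit_cic([(0.5, 0.5)], nx=4, ny=4, dx=1.0, dy=1.0)
# a particle at a cell centre shares its charge equally among four nodes
```

Total charge is conserved by construction, which is one property benchmark comparisons between codes typically check first.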

  7. Comparison of Chemical Extraction Methods for Determination of Soil Potassium in Different Soil Types

    Science.gov (United States)

    Zebec, V.; Rastija, D.; Lončarić, Z.; Bensa, A.; Popović, B.; Ivezić, V.

    2017-12-01

    Determining the potassium supply of soil plays an important role in intensive crop production, since it is the basis for balancing nutrients and issuing fertilizer recommendations to achieve high and stable yields within economic feasibility. The aim of this study was to compare different extraction methods for soil potassium from the arable horizon of different soil types against the ammonium lactate method (KAL), which is frequently used as an analytical method for determining nutrient availability and is a common basis for fertilizer recommendations in many European countries. In addition to the ammonium lactate method (KAL, pH 3.75), potassium was extracted with ammonium acetate (KAA, pH 7), ammonium acetate ethylenediaminetetraacetic acid (KAAEDTA, pH 4.6), Bray (KBRAY, pH 2.6) and barium chloride (KBaCl2, pH 8.1). The analyzed soils were extremely heterogeneous, with a wide range of determined values: soil pH (in H2O) ranged from 4.77 to 8.75, organic matter content from 1.87 to 4.94% and clay content from 8.03 to 37.07%. Relative to the KAL standard method, the KBaCl2 method extracts on average 12.9% more soil potassium, while KAA extracts 5.3%, KAAEDTA 10.3% and KBRAY 27.5% less. The comparison of the analyzed potassium extraction methods is of high precision; the most reliable comparison with the KAL method was KAAEDTA, followed by KAA, KBaCl2 and KBRAY. The highly significant statistical correlations between the different extraction methods indicate that any of the methods can be used to accurately predict the concentration of potassium in the soil, and that the research carried out can be used to create a prediction model for potassium concentration based on the different extraction methods.
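The prediction model the abstract alludes to amounts to a calibration line between two extraction methods. As an illustrative sketch with synthetic numbers (not the study's data or fitted coefficients), a least-squares conversion from a Bray-type measurement to a KAL-equivalent value could look like:

```python
# Sketch of cross-method calibration: fit a least-squares line converting
# potassium measured by one extractant into a KAL-equivalent value.
# All concentrations below are synthetic.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

k_bray = [80.0, 120.0, 160.0, 200.0]    # mg/kg by a Bray-type method
k_al   = [110.0, 165.0, 220.0, 275.0]   # mg/kg by the KAL method
slope, intercept = fit_line(k_bray, k_al)

def predict_kal(v):
    return slope * v + intercept

print(predict_kal(140.0))  # -> 192.5
```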

  8. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    Science.gov (United States)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporate different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. The study includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
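The Metropolis-Hastings algorithm underlying the standard Bayesian methods above can be sketched in a few lines. This is the textbook random-walk sampler on a toy 1D target, not the WASMOD-specific likelihoods of the study:

```python
# Minimal random-walk Metropolis-Hastings sampler on an unnormalized
# 1D log-density. Target, step size and chain length are illustrative.
import math
import random

def metropolis_hastings(log_target, x0, steps, step_size, seed=1):
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step_size)
        lp_cand = log_target(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept with prob min(1, ratio)
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# toy target: standard normal, log-density up to an additive constant
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, 1.0)
mean = sum(draws) / len(draws)  # should be near 0
```

The modularization idea in the abstract changes what enters `log_target` (the suspect high-flow observations are kept out of the parameter inference), not the sampler itself.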

  9. Modified Right Heart Contrast Echocardiography Versus Traditional Method in Diagnosis of Right-to-Left Shunt: A Comparative Study

    OpenAIRE

    Wang, Yi; Zeng, Jie; Yin, Lixue; Zhang, Mei; Hou, Dailun

    2016-01-01

    BACKGROUND: The purpose of this study was to evaluate the reliability, effectiveness, and safety of modified right heart contrast transthoracic echocardiography (cTTE) in comparison with the traditional method. MATERIAL AND METHODS: We performed modified right heart cTTE using saline mixed with a small sample of the patient's own blood. Samples were agitated with varying intensity. The study protocol involved microscopic analysis and patient evaluation. 1. Microscopic analysis: After two contr...

  10. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work compares the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image, either by the method of image moments or by a method based on the Hough transform. The comparison uses the following classification algorithms: decision tree, support vector machine, AdaBoost and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
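The preprocessing chain above starts with speckle-noise filtering. As an illustrative stand-in (the paper's actual filter is not specified in this abstract), a simple 3x3 median filter removes isolated speckle spikes while preserving flat regions:

```python
# Illustrative speckle suppression: 3x3 median filter over a 2D image
# stored as nested lists. Border pixels are left unchanged.
import statistics

def median_filter3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[a][b] for a in (i - 1, i, i + 1)
                                for b in (j - 1, j, j + 1)]
            out[i][j] = statistics.median(window)
    return out

noisy = [[10, 10, 10, 10],
         [10, 99, 10, 10],   # single speckle spike
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
clean = median_filter3(noisy)
# the spike at (1, 1) is replaced by the neighbourhood median, 10
```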

  11. Comparison of different methods for determining the size of a focal spot of microfocus X-ray tubes

    International Nuclear Information System (INIS)

    Salamon, M.; Hanke, R.; Krueger, P.; Sukowski, F.; Uhlmann, N.; Voland, V.

    2008-01-01

    EN 12543-5 describes a method for determining the focal spot size of microfocus X-ray tubes down to a minimum spot size of 5 μm. The wide application of X-ray tubes with even smaller focal spot sizes in computed tomography and radioscopy requires the evaluation of existing methods for focal spot sizes below 5 μm. In addition, new methods and conditions for determining submicron focal spot sizes have to be developed. For the evaluation and extension of the present methods to smaller focal spot sizes, different procedures were analyzed and applied in comparison with the existing EN 12543-5, and the results are presented

  12. Application of the neutron gamma method to a study of water seepage under a rice plantation

    International Nuclear Information System (INIS)

    Puard, M.; Couchat, P.; Moutonnet, P.

    1980-01-01

    In order to determine the share of percolation in the pollution by pesticides (particularly lindane) carried down in the drainage water of rice plantations, an application of the neutron gamma method under rice cultivation in the Camargue is suggested. A preliminary laboratory study enabled a comparison between deuterated water (DHO) and tritiated water (THO) used as water tracers in the determination of dispersive phenomena and retention in a column of saturated soil.

  13. Comparison of two split-window methods for retrieving land surface temperature from MODIS data

    Science.gov (United States)

    Zhao, Shaohua; Qin, Qiming; Yang, Yonghui; Xiong, Yujiu; Qiu, Guoyu

    2009-08-01

    Land surface temperature (LST) is a key parameter in environmental and earth science studies, especially for monitoring drought. The objective of this work is a comparison of two split-window methods, the Mao method and the Sobrino method, for retrieving LST from MODIS (Moderate Resolution Imaging Spectroradiometer) data in the North China Plain. The results show that the maximum, minimum and mean errors of the Mao method are 1.33 K, 1.54 K and 0.13 K lower than the standard LST product, respectively, while those of the Sobrino method are 0.73 K, 1.46 K and 1.50 K higher than the standard, respectively. Validation of the two methods against the LST product based on weather stations shows good agreement for the Sobrino method, with an RMSE of 1.17 K, whereas the RMSE of the Mao method is 1.85 K. Finally, the study introduces the Sobmao method, which is based on the Sobrino method but simplifies the estimation of atmospheric water vapour content using the Mao method. The Sobmao method has almost the same accuracy as the Sobrino method. With its high accuracy and simplified water vapour estimation, the Sobmao method is recommended for LST inversion; it performed well in the Ningxia region of northwest China, with a mean error of 0.33 K and an RMSE of 0.91 K.
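Split-window algorithms of this family share the generic form LST = T4 + A(T4 - T5) + B, where T4 and T5 are the brightness temperatures of the two thermal bands and A, B depend on emissivity and atmospheric water vapour. The sketch below uses purely illustrative coefficients (not the Mao or Sobrino values) together with the RMSE statistic used in the validation:

```python
# Generic split-window form with invented coefficients, plus the RMSE
# used to validate retrievals against station-based LST.
import math

def split_window_lst(t4, t5, a=2.0, b=1.0):
    """LST in kelvin from two thermal-band brightness temperatures (illustrative a, b)."""
    return t4 + a * (t4 - t5) + b

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

t4 = [295.0, 300.0, 305.0]          # band-31-like brightness temperatures, K
t5 = [293.5, 298.0, 302.8]          # band-32-like brightness temperatures, K
station = [299.2, 305.3, 310.6]     # synthetic station LST, K
estimates = [split_window_lst(a, b) for a, b in zip(t4, t5)]
print(rmse(estimates, station))
```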

  14. A Study on a Control Method with a Ventilation Requirement of a VAV System in Multi-Zone

    Directory of Open Access Journals (Sweden)

    Hyo-Jun Kim

    2017-11-01

    Full Text Available The objective of this study was to propose a control method incorporating the ventilation requirement of a variable air volume (VAV) system in a multi-zone building. In order to control a VAV system across multiple zones, it is essential to control the terminal unit installed in each zone. A VAV terminal unit with the conventional control method, using a fixed minimum air flow, can cause indoor air quality (IAQ) issues depending on the variation in the number of occupants. This research proposes a control method incorporating the ventilation requirement of the VAV terminal units and the AHU in a multi-zone building. The integrated control method, with an air flow increase model for the VAV terminal unit and an outdoor air intake rate increase model for the AHU, was based on the indoor CO2 concentration. The conventional and proposed control algorithms were compared using the TRNSYS simulation program. The proposed VAV terminal unit control method satisfies all conditions of indoor temperature, IAQ, and stratification. An energy comparison with the conventional control method showed that the proposed method not only satisfies indoor thermal comfort, IAQ, and stratification requirements, but also reduces energy consumption.
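The air-flow increase model described above is a form of demand-controlled ventilation: the terminal unit's set-point rises with zone CO2 instead of sitting at a fixed minimum. A minimal sketch in that spirit, with invented set-points, limits and gain (not the paper's actual model):

```python
# Demand-controlled ventilation sketch: raise a terminal unit's airflow
# set-point linearly once zone CO2 exceeds a limit, capped at the maximum
# flow. All numbers are illustrative.

def terminal_airflow(co2_ppm, min_flow, max_flow, co2_limit=1000.0, gain=0.001):
    """Supply airflow set-point in m3/s for a given zone CO2 concentration."""
    if co2_ppm <= co2_limit:
        return min_flow
    flow = min_flow + gain * (co2_ppm - co2_limit)
    return min(flow, max_flow)

print(terminal_airflow(900.0, 0.1, 0.5))    # below limit -> minimum flow
print(terminal_airflow(1200.0, 0.1, 0.5))   # above limit -> increased flow
print(terminal_airflow(2000.0, 0.1, 0.5))   # far above limit -> capped at max
```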

  15. A study of drying and cleaning methods used in preparation for fluorescent penetrant inspection - Part II

    International Nuclear Information System (INIS)

    Brasche, L.; Lopez, R.; Larson, B.

    2003-01-01

    Fluorescent penetrant inspection is the most widely used method for aerospace components such as critical rotating components of gas turbine engines. Successful use of FPI begins with a clean and dry part, followed by a carefully controlled and applied FPI process, and conscientious inspection by well-trained personnel. A variety of cleaning methods are in use for cleaning of titanium and nickel parts, with selection based on the soils or contamination to be removed. Cleaning methods may include chemical or mechanical methods, with sixteen different types studied as part of this program. Several options also exist for use in drying parts prior to FPI. Samples were generated and exposed to a range of conditions to study the effect of both drying and cleaning methods on the flaw response of FPI. Low cycle fatigue (LCF) cracks were generated in approximately 40 nickel and 40 titanium samples for evaluation of the various cleaning methods. Baseline measurements were made for each of the samples using a photometer to measure sample brightness and a UVA videomicroscope to capture digital images of the FPI indications. Samples were exposed to various contaminants, cleaned and inspected. Brightness measurements and digital images were also taken to compare to the baseline data. A comparison of oven drying to flash drying in preparation for FPI has been completed and is reported in Part I. A comparison of the effectiveness of various cleaning methods for the contaminants is presented in Part II. The cleaning and drying studies were completed in cooperation with Delta Airlines using cleaning, drying and FPI processes typical of engine overhaul processes and equipment. The work was completed as part of the Engine Titanium Consortium and included investigators from Honeywell, General Electric, Pratt and Whitney, and Rolls Royce

  16. A study of drying and cleaning methods used in preparation for fluorescent penetrant inspection - Part I

    International Nuclear Information System (INIS)

    Brasche, L.; Lopez, R.; Larson, B.

    2003-01-01

    Fluorescent penetrant inspection is the most widely used method for aerospace components such as critical rotating components of gas turbine engines. Successful use of FPI begins with a clean and dry part, followed by a carefully controlled and applied FPI process, and conscientious inspection by well-trained personnel. A variety of cleaning methods are in use for cleaning of titanium and nickel parts, with selection based on the soils or contamination to be removed. Cleaning methods may include chemical or mechanical methods, with sixteen different types studied as part of this program. Several options also exist for use in drying parts prior to FPI. Samples were generated and exposed to a range of conditions to study the effect of both drying and cleaning methods on the flaw response of FPI. Low cycle fatigue (LCF) cracks were generated in approximately 40 nickel and 40 titanium samples for evaluation of the various cleaning methods. Baseline measurements were made for each of the samples using a photometer to measure sample brightness and a UVA videomicroscope to capture digital images of the FPI indications. Samples were exposed to various contaminants, cleaned and inspected. Brightness measurements and digital images were also taken to compare to the baseline data. A comparison of oven drying to flash drying in preparation for FPI has been completed and is reported in Part I. A comparison of the effectiveness of various cleaning methods for the contaminants is presented in Part II. The cleaning and drying studies were completed in cooperation with Delta Airlines using cleaning, drying and FPI processes typical of engine overhaul processes and equipment. The work was completed as part of the Engine Titanium Consortium and included investigators from Honeywell, General Electric, Pratt and Whitney, and Rolls Royce

  17. Testing a Preliminary Live with Love Conceptual Framework for cancer couple dyads: A mixed-methods study.

    Science.gov (United States)

    Li, Qiuping; Xu, Yinghua; Zhou, Huiya; Loke, Alice Yuen

    2015-12-01

    The purpose of this study was to test the previously proposed Preliminary Live with Love Conceptual Framework (P-LLCF), which focuses on spousal caregiver-patient couples in their journey of coping with cancer as dyads. A mixed-methods study that included qualitative and quantitative approaches was conducted, applying methods of concept and theory analysis and structural equation modeling (SEM) in testing the P-LLCF. In the qualitative approach to testing the concepts included in the P-LLCF, a comparison was made between the P-LLCF and a preliminary conceptual framework derived from focus group interviews among Chinese couples coping with cancer. The comparison showed that the concepts identified in the P-LLCF are relevant to the phenomenon under scrutiny, and the attributes of the concepts are consistent with those identified among Chinese cancer couple dyads. In the quantitative study, 117 cancer couples were recruited. The findings showed that inter-relationships exist among the components included in the P-LLCF (event situation, dyadic mediators, dyadic appraisal, dyadic coping, and dyadic outcomes), in that the event situation impacts the dyadic outcomes directly or indirectly through the dyadic mediators. The dyadic mediators, dyadic appraisal, and dyadic coping are interrelated and work together to benefit the dyadic outcomes. This study provides evidence supporting the interlinked components and relationships included in the P-LLCF. The findings are important in that they provide healthcare professionals with guidance and direction, according to the P-LLCF, on how to plan supportive programs for couples coping with cancer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Manual versus automatic bladder wall thickness measurements: a method comparison study

    NARCIS (Netherlands)

    Oelke, M.; Mamoulakis, C.; Ubbink, D.T.; de la Rosette, J.J.; Wijkstra, H.

    2009-01-01

    Purpose To compare repeatability and agreement of conventional ultrasound bladder wall thickness (BWT) measurements with automatically obtained BWT measurements by the BVM 6500 device. Methods Adult patients with lower urinary tract symptoms, urinary incontinence, or postvoid residual urine were

  19. Regional Attenuation in Northern California: A Comparison of Five 1-D Q Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ford, S R; Dreger, D S; Mayeda, K; Walter, W R; Malagnini, L; Phillips, W S

    2007-08-03

    The determination of regional attenuation Q^-1 can depend upon the analysis method employed. The discrepancies between methods are due to differing parameterizations (e.g., geometrical spreading rates), employed datasets (e.g., choice of path lengths and sources), and the methodologies themselves (e.g., measurement in the frequency or time domain). Here we apply five different attenuation methodologies to a Northern California dataset. The methods are: (1) coda normalization (CN), (2) two-station (TS), (3) reverse two-station (RTS), (4) source-pair/receiver-pair (SPRP), and (5) coda-source normalization (CS). The methods are used to measure Q of the regional phase Lg (Q_Lg) and its power-law dependence on frequency, of the form Q_0 f^η, with controlled parameterization in the well-studied region of Northern California using a high-quality dataset from the Berkeley Digital Seismic Network. We investigate the difference in power-law Q calculated among the methods by focusing on the San Francisco Bay Area, where knowledge of attenuation is an important part of seismic hazard mitigation. This approximately homogeneous subset of our data lies in a small region along the Franciscan block. All methods return similar power-law parameters, though the range of the joint 95% confidence regions is large (Q_0 = 85 ± 40; η = 0.65 ± 0.35). The RTS and TS methods differ the most from the other methods and from each other. This may be due to the removal of the site term in the RTS method, which is shown to be significant in the San Francisco Bay Area. In order to completely understand the range of power-law Q in a region, it is advisable to use several methods to calculate the model. We also test the sensitivity of each method to changes in geometrical spreading, Lg frequency bandwidth, the distance range of data, and the Lg measurement window. For a given method, there are significant differences in the power-law parameters, Q_0 and η

  20. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, no inverse matrix steps are required. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method, FE(3D). The comparison between the NTM method and the finite element method results shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion in the mode shape.

  1. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.

  2. Comparison of calculational methods for liquid metal reactor shields

    International Nuclear Information System (INIS)

    Carter, L.L.; Moore, F.S.; Morford, R.J.; Mann, F.M.

    1985-09-01

    A one-dimensional comparison is made between Monte Carlo (MCNP), discrete ordinates (ANISN), and diffusion theory (MlDX) calculations of neutron flux and radiation damage from the core of the Fast Flux Test Facility (FFTF) out to the reactor vessel. Diffusion theory was found to be reasonably accurate for the calculation of both total flux and radiation damage. However, at large distances from the core, the calculated flux at very high energies is low by an order of magnitude or more when diffusion theory is used. Particular emphasis was placed in this study on the generation of multitable cross sections for use in discrete ordinates codes that are self-shielded consistently with the self-shielding employed in the generation of cross sections for use with diffusion theory. The Monte Carlo calculation, with a pointwise representation of the cross sections, was used as the benchmark for determining the limitations of the other two calculational methods. 12 refs., 33 figs

  3. Consumer preferences for hearing aid attributes: a comparison of rating and conjoint analysis methods.

    Science.gov (United States)

    Bridges, John F P; Lataille, Angela T; Buttorff, Christine; White, Sharon; Niparko, John K

    2012-03-01

    Low utilization of hearing aids has drawn increased attention to the study of consumer preferences using both simple ratings (e.g., Likert scale) and conjoint analyses, but these two approaches often produce inconsistent results. The study aims to directly compare Likert scales and conjoint analysis in identifying important attributes associated with hearing aids among those with hearing loss. Seven attributes of hearing aids were identified through qualitative research: performance in quiet settings, comfort, feedback, frequency of battery replacement, purchase price, water and sweat resistance, and performance in noisy settings. The preferences of 75 outpatients with hearing loss were measured with both a 5-point Likert scale and with 8 paired-comparison conjoint tasks (the latter being analyzed using OLS [ordinary least squares] and logistic regression). Results were compared by examining implied willingness-to-pay and Pearson's Rho. A total of 56 respondents (75%) provided complete responses. Two thirds of respondents were male, most had sensorineural hearing loss, and most were older than 50; 44% of respondents had never used a hearing aid. Both methods identified improved performance in noisy settings as the most valued attribute. Respondents were twice as likely to buy a hearing aid with better functionality in noisy environments (p < .001), and willingness to pay for this attribute ranged from US$2674 on the Likert to US$9000 in the conjoint analysis. The authors find a high level of concordance between the methods-a result that is in stark contrast with previous research. The authors conclude that their result stems from constraining the levels on the Likert scale.

  4. Comparison of the dose evaluation methods for criticality accident

    International Nuclear Information System (INIS)

    Shimizu, Yoshio; Oka, Tsutomu

    2004-01-01

    Improvement of the dose evaluation method for criticality accidents is important for rationalizing the design of nuclear fuel cycle facilities. The source spectra of neutrons and gamma rays from a criticality accident depend on the condition of the source: its materials, moderation, density and so on. A comparison of dose evaluation methods for a criticality accident is made, and several methods combining criticality and shielding calculations are proposed. Prompt neutron and gamma-ray doses from nuclear criticality of some uranium systems have been evaluated in the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by several calculation methods, each a combination of code and cross-section library, as follows: (a) SAS1X (ENDF/B-IV), (b) MCNP4C (ENDF/B-VI)-ANISN (DLC23E or JSD120), (c) MCNP4C-MCNP4C (ENDF/B-VI). Each consists of a criticality calculation and a shielding calculation. These calculation methods are compared with respect to the tissue absorbed dose and the spectra at 2 m from the source. (author)

  5. German precursor study: methods and results

    International Nuclear Information System (INIS)

    Hoertner, H.; Frey, W.; von Linden, J.; Reichart, G.

    1985-01-01

    This study was prepared by the GRS under contract to the Federal Minister of the Interior. The purpose of the study is to show how the application of system-analytic tools, and especially of probabilistic methods, to Licensee Event Reports (LERs) and other operating experience can support a deeper understanding of the safety-related importance of the events reported in reactor operation, the identification of possible weak points, and further conclusions to be drawn from the events. Additionally, the study aimed at a comparison of its results for the severe core damage frequency with those of the German Risk Study, as far as this is possible and useful. The German Precursor Study is a plant-specific study. The reference plant is the Biblis NPP with its very similar Units A and B, the latter of which was also the reference plant for the German Risk Study

  6. A comparison of earthquake backprojection imaging methods for dense local arrays

    Science.gov (United States)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real-data issues include aliased station spacing, inadequate array aperture, inaccurate velocity models, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term average/long-term average (STA/LTA) and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method at lowering the detection threshold; however, it does not preserve magnitude information. For automatic detection and location of events in a large data set, we
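One of the pre-processing methods compared above, short-term/long-term averaging, is the classic STA/LTA detector: the ratio of a short moving average of signal energy to a long one, which jumps at an arrival. A minimal sketch with arbitrary window lengths and a synthetic trace:

```python
# Minimal STA/LTA sketch: ratio of a short-window mean of signal energy
# to a long-window mean. Window lengths and the trace are illustrative.

def sta_lta(trace, nsta, nlta):
    """Return the STA/LTA ratio at each sample where both windows fit."""
    energy = [x * x for x in trace]
    ratios = []
    for i in range(nlta, len(trace)):
        sta = sum(energy[i - nsta:i]) / nsta
        lta = sum(energy[i - nlta:i]) / nlta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

trace = [0.1] * 50 + [1.0] * 10     # quiet noise, then an arrival
r = sta_lta(trace, nsta=5, nlta=40)
# the ratio sits near 1 in the noise and jumps well above 1 at the arrival
```

Production implementations use recursive updates rather than re-summing each window, but the detection idea is the same.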

  7. Methods to assess intended effects of drug treatment in observational studies are reviewed

    NARCIS (Netherlands)

    Klungel, Olaf H|info:eu-repo/dai/nl/181447649; Martens, Edwin P|info:eu-repo/dai/nl/088859010; Psaty, Bruce M; Grobbee, Diederik E; Sullivan, Sean D; Stricker, Bruno H Ch; Leufkens, Hubert G M|info:eu-repo/dai/nl/075255049; de Boer, A|info:eu-repo/dai/nl/075097346

    2004-01-01

    BACKGROUND AND OBJECTIVE: To review methods that seek to adjust for confounding in observational studies when assessing intended drug effects. METHODS: We reviewed the statistical, economic and medical literature on the development, comparison and use of methods adjusting for confounding. RESULTS:

  8. Comparison of different methods to extract the required coefficient of friction for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon

    2012-01-01

    The required coefficient of friction (RCOF) is an important predictor for slip incidents. Despite the wide use of the RCOF, there is no standardised method for identifying it from ground reaction forces. This article presents a comparison of the outcomes from seven different methods, derived from those reported in the literature, for identifying the RCOF from the same data. While commonly used methods are based on a normal force threshold, a percentage of the stance phase or the time from heel contact, a newly introduced hybrid method is based on a combination of normal force, time and direction of increase in the coefficient of friction. Although no major differences were found among the methods in more than half the strikes, significant differences were found in a substantial portion of strikes. Potential problems with some of these methods were identified and discussed, and they appear to be overcome by the hybrid method. No standard method exists for determining the required coefficient of friction (RCOF), an important predictor for slipping. In this study, RCOF values from a single data set, obtained using various methods from the literature, differed considerably for a substantial portion of strikes. A hybrid method may yield improved results.
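
    All seven methods start from the same quantity: the instantaneous coefficient of friction, i.e. shear force divided by normal force at each sample of the ground reaction force record. A minimal sketch of a normal-force-threshold variant (hypothetical; the 50 N threshold and the function name are illustrative, and published methods further constrain the search window after heel contact):

```python
import math

def required_cof(fx, fy, fz, fz_threshold=50.0):
    """Hypothetical normal-force-threshold method: take the peak ratio of
    resultant shear force to normal force over all samples where the
    normal force exceeds the threshold."""
    rcof = 0.0
    for x, y, z in zip(fx, fy, fz):
        if z > fz_threshold:                      # ignore low-load samples near contact/toe-off
            rcof = max(rcof, math.hypot(x, y) / z)
    return rcof
```

    The choice of threshold, window and peak-selection rule is exactly what differs between the seven methods compared in the study.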

  9. Comparisons of Energy Management Methods for a Parallel Plug-In Hybrid Electric Vehicle between the Convex Optimization and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2018-01-01

    Full Text Available This paper proposes a comparison study of energy management methods for a parallel plug-in hybrid electric vehicle (PHEV). Based on a detailed analysis of the vehicle driveline, quadratic convex functions are presented to describe the nonlinear relationship between engine fuel-rate and battery charging power at different vehicle speeds and driveline power demands. The engine-on power threshold is estimated by the simulated annealing (SA) algorithm, and the battery power command is obtained by convex optimization with the target of improving fuel economy, compared with the dynamic programming (DP) based method and the charge depleting–charge sustaining (CD/CS) method. In addition, the proposed control methods are discussed at different initial battery state of charge (SOC) values to extend the application. Simulation results validate that the proposed strategy based on convex optimization reduces fuel consumption and markedly lowers the computational burden.

  10. Age modelling for Pleistocene lake sediments: A comparison of methods from the Andean Fúquene Basin (Colombia) case study

    NARCIS (Netherlands)

    Groot, M.H.M.; van der Plicht, J.; Hooghiemstra, H.; Lourens, L.J.; Rowe, H.D.

    2014-01-01

    Challenges and pitfalls for developing age models for long lacustrine sedimentary records are discussed and a comparison is made between radiocarbon dating, visual curve matching, and frequency analysis in the depth domain in combination with cyclostratigraphy. A core section of the high resolution

  12. Comparison and transfer testing of multiplex ligation detection methods for GM plants

    Directory of Open Access Journals (Sweden)

    Ujhelyi Gabriella

    2012-01-01

    Full Text Available Abstract Background With the increasing number of GMOs on the global market the maintenance of European GMO regulations is becoming more complex. For the analysis of a single food or feed sample it is necessary to assess the sample for the presence of many GMO-targets simultaneously at a sensitive level. Several methods have been published regarding DNA-based multidetection. Multiplex ligation detection methods have been described that use the same basic approach: (i) hybridisation and ligation of specific probes, (ii) amplification of the ligated probes and (iii) detection and identification of the amplified products. Although they all share this same basis, the published ligation methods differ considerably. The present study investigated with real-time PCR whether these different ligation methods have any influence on the performance of the probes. The sensitivity and specificity of the padlock probes (PLPs) with the best-performing ligation protocol were also tested, and the selected method was initially validated in a laboratory exchange study. Results Of the ligation protocols tested in this study, the best results were obtained with the PPLMD I and PPLMD II protocols, and no consistent differences between these two protocols were observed. Both protocols are based on padlock probe ligation combined with microarray detection. Twenty PLPs were tested for specificity and the best probes were subjected to further evaluation. Up to 13 targets were detected specifically and simultaneously. During the interlaboratory exchange study similar results were achieved by the two participating institutes (NIB, Slovenia, and RIKILT, the Netherlands). Conclusions From the comparison of ligation protocols it can be concluded that two protocols perform equally well on the basis of the selected set of PLPs. Using the most ideal parameters the multiplicity of one of the methods was tested and 13 targets were successfully and specifically detected. In the

  13. Re-analysis as a Comparison of Constructions

    Directory of Open Access Journals (Sweden)

    Jochen Gläser

    2000-12-01

    Full Text Available An interesting methodological aspect of secondary analysis is that it enables comparisons between the constructions that constitute qualitative data analysis. This comparison is even more focused if a reanalysis is conducted, that is, an analysis that re-examines both the primary study's data and the primary study's research question. In this article, a reanalysis is described that used interviews from the archive at the Special Collaborative Centre 186 (Sfb 186). One of the primary study's results was formulated as a hypothesis and subsequently "tested" by conducting a qualitative content analysis of the interviews. A comparison of the primary study and the reanalysis reveals critical decisions which may lead the data analyses to different results. These decisions are usually made implicitly and will show up only if contradictions between results are explained. As a second result of the comparison, typical threats to primary and secondary analyses are discussed. Primary studies seem to suffer from a "closure pressure", that is, a necessity to make sense of the data at all costs. This may stimulate researchers to close gaps in their data by speculation and to neglect contradicting evidence. Secondary analyses are thematically and methodically restricted by the primary study's data collection. Finally, the reanalysis confirmed that it is possible to use interviews from archives: the losses of information due to archiving and anonymisation seemed to have no significant influence on the reanalysis. URN: urn:nbn:de:0114-fqs0003257

  14. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  15. Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods

    Science.gov (United States)

    Adams, G. F.

    1980-01-01

    The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.

  16. Recent amendments of the KTA 2101.2 fire barrier resistance rating method for German NPP and comparison to the Eurocode t-equivalent method

    Energy Technology Data Exchange (ETDEWEB)

    Forell, Burkhard [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany)

    2015-12-15

    The German nuclear standard KTA 2101 on "Fire Protection in Nuclear Power Plants", Part 2: "Fire Protection of Structural Plant Components", includes a simplified method for the fire resistance rating of fire barrier elements based on the t-equivalent approach. The method covers the specific features of compartments in nuclear power plant buildings in terms of the boundary conditions which have to be expected in the event of fire. The method has proven to be relatively simple and straightforward to apply. The paper gives an overview of amendments to the rating method made within the regular review of KTA 2101.2. A comparison to the method of the non-nuclear Eurocode 1 is also provided. The Eurocode method is closely connected to the German standard DIN 18230 on structural fire protection in industrial buildings. Special emphasis in the comparison is given to the ventilation factor, which has a large impact on the required fire resistance.

  17. Diversity Indices as Measures of Functional Annotation Methods in Metagenomics Studies

    KAUST Repository

    Jankovic, Boris R.

    2016-01-26

    Applications of high-throughput techniques in metagenomics studies produce massive amounts of data. Fragments of genomic, transcriptomic and proteomic molecules are all found in metagenomics samples. Laborious and meticulous effort in sequencing and functional annotation are then required to, amongst other objectives, reconstruct a taxonomic map of the environment that metagenomics samples were taken from. In addition to computational challenges faced by metagenomics studies, the analysis is further complicated by the presence of contaminants in the samples, potentially resulting in skewed taxonomic analysis. The functional annotation in metagenomics can utilize all available omics data and therefore different methods that are associated with a particular type of data. For example, protein-coding DNA, non-coding RNA or ribosomal RNA data can be used in such an analysis. These methods would have their advantages and disadvantages and the question of comparison among them naturally arises. There are several criteria that can be used when performing such a comparison. Loosely speaking, methods can be evaluated in terms of computational complexity or in terms of the expected biological accuracy. We propose that the concept of diversity that is used in the ecosystems and species diversity studies can be successfully used in evaluating certain aspects of the methods employed in metagenomics studies. We show that when applying the concept of Hill’s diversity, the analysis of variations in the diversity order provides valuable clues into the robustness of methods used in the taxonomical analysis.
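
    Hill's diversity of order q mentioned above has a simple closed form, D_q = (sum over i of p_i^q)^(1/(1-q)), with the q -> 1 limit given by the exponential of the Shannon entropy. A small sketch (assuming the input proportions sum to 1):

```python
import math

def hill_diversity(proportions, q):
    """Hill number of order q: D_q = (sum p_i**q) ** (1 / (1 - q)).
    q=0 gives species richness, q->1 gives exp(Shannon entropy),
    q=2 gives the inverse Simpson index."""
    ps = [p for p in proportions if p > 0]
    if abs(q - 1.0) < 1e-9:
        # limiting case: exponential of Shannon entropy
        return math.exp(-sum(p * math.log(p) for p in ps))
    return sum(p ** q for p in ps) ** (1.0 / (1.0 - q))
```

    Varying the order q re-weights rare versus abundant taxa; that is what makes the variation of D_q with q informative about the robustness of a taxonomic annotation method.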

  18. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors.

    Science.gov (United States)

    Li, Frédéric; Shirahama, Kimiaki; Nisar, Muhammad Adeel; Köping, Lukas; Grzegorzek, Marcin

    2018-02-24

    Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches, in particular deep-learning-based ones, have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performance faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we first propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the code and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long Short-Term Memory (LSTM) layers to obtain features characterising both short- and long-term time dependencies in the data.

  19. Evaluation of head and neck cancer with 18F-FDG PET: a comparison with conventional methods

    International Nuclear Information System (INIS)

    Kresnik, E.; Mikosch, P.; Gallowitsch, H.J.; Heinisch, M.; Unterweger, O.; Kumnig, G.; Gomez, I.; Lind, P.; Kogler, D.; Wieser, S.; Gruenbacher, G.; Raunik, W.

    2001-01-01

    The aim of this study was to evaluate the usefulness of 18F-FDG PET in the diagnosis and staging of primary and recurrent malignant head and neck tumours in comparison with conventional imaging methods [including ultrasonography, radiography, computed tomography (CT) and magnetic resonance imaging (MRI)], physical examination, panendoscopy and biopsies in clinical routine. A total of 54 patients (13 female, 41 male, age 61.3±12 years) were investigated retrospectively. Three groups were formed. In group I, 18F-FDG PET was performed in 15 patients to detect unknown primary cancers. In group II, 24 studies were obtained for preoperative staging of proven head and neck cancer. In group III, 18F-FDG PET was used in 15 patients to monitor tumour recurrence after radiotherapy and/or chemotherapy. In all patients, imaging was obtained at 70 min after the intravenous administration of 180 MBq 18F-FDG. In 11 of the 15 patients in group I, the primary cancer could be found with 18F-FDG, yielding a detection rate of 73.3%. In 4 of the 15 patients, CT findings were also suggestive of the primary cancer but were nonetheless equivocal. In these patients, 18F-FDG showed increased 18F-FDG uptake by the primary tumour, which was confirmed by histology. One patient had recurrence of breast carcinoma that could not be detected with 18F-FDG PET, but was detected by CT. In three cases, the primary cancer could not be found with any imaging method. Among the 24 patients in group II investigated for staging purposes, 18F-FDG PET detected a total of 13 local and three distant lymph node metastases, whereas the conventional imaging methods detected only nine local and one distant lymph node metastases. The results of 18F-FDG PET led to an upstaging in 5/24 (20.8%) patients. The conventional imaging methods were false positive in 5/24 (20.8%). There was one false positive result using 18F-FDG PET. Among the 15 patients of group III with suspected recurrence after radiotherapy

  20. Analysis and research on Maximum Power Point Tracking of Photovoltaic Array with Fuzzy Logic Control and Three-point Weight Comparison Method

    Institute of Scientific and Technical Information of China (English)

    LIN; Kuang-Jang; LIN; Chii-Ruey

    2010-01-01

    The photovoltaic array has an optimal operating point at which it delivers maximum power. However, this optimal operating point shifts with the strength of solar radiation, the incidence angle, and changes in environment and load. Due to the constant changes in these conditions, it has become very difficult to locate the optimal operating point by following a mathematical model. Therefore, this study focuses on applying Fuzzy Logic Control theory and the Three-point Weight Comparison Method to locate the optimal operating point of the solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power point. Fuzzy Logic Control, on the other hand, can be used to solve problems that cannot be effectively dealt with by calculation rules, such as concepts, contemplation, deductive reasoning, and identification. This paper therefore simulates both methods in turn. The simulation results show that the Three-point Weight Comparison Method is more effective in environments with frequent changes of solar radiation, whereas Fuzzy Logic Control has better tracking efficiency in environments with violent changes of solar radiation.
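
    The three-point weight comparison described above can be sketched as a simple control step (a hypothetical toy implementation; the weighting rule, step size and P-V curve are illustrative, not taken from the paper):

```python
def three_point_mppt_step(p_at, v, dv):
    """Hypothetical three-point weight comparison step: evaluate power at
    v-dv, v and v+dv, assign each pairwise comparison a weight of +1/-1,
    then move toward the higher-power side or hold at the peak."""
    p_minus, p_0, p_plus = p_at(v - dv), p_at(v), p_at(v + dv)
    w1 = 1 if p_plus >= p_0 else -1   # right point vs centre
    w2 = 1 if p_0 > p_minus else -1   # centre vs left point
    if w1 == 1 and w2 == 1:
        return v + dv                 # power still rising: step right
    if w1 == -1 and w2 == -1:
        return v - dv                 # power falling: step left
    return v                          # peak bracketed: hold

# toy P-V curve with its maximum power point at v = 17 V
p = lambda v: -(v - 17.0) ** 2 + 100.0
v = 10.0
for _ in range(30):
    v = three_point_mppt_step(p, v, 0.5)
```

    In a real converter, p_at would be replaced by perturbing the operating voltage and measuring the array power at each of the three points.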

  1. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of nodes that gather integrated information, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  2. Optimization and Comparison of ESI and APCI LC-MS/MS Methods: A Case Study of Irgarol 1051, Diuron, and their Degradation Products in Environmental Samples

    Science.gov (United States)

    Maragou, Niki C.; Thomaidis, Nikolaos S.; Koupparis, Michael A.

    2011-10-01

    A systematic and detailed optimization strategy for the development of atmospheric pressure ionization (API) LC-MS/MS methods for the determination of Irgarol 1051, Diuron, and their degradation products (M1, DCPMU, DCPU, and DCA) in water, sediment, and mussel is described. Experimental design was applied for the optimization of the ion sources parameters. Comparison of ESI and APCI was performed in positive- and negative-ion mode, and the effect of the mobile phase on ionization was studied for both techniques. Special attention was drawn to the ionization of DCA, which presents particular difficulty in API techniques. Satisfactory ionization of this small molecule is achieved only with ESI positive-ion mode using acetonitrile in the mobile phase; the instrumental detection limit is 0.11 ng/mL. Signal suppression was qualitatively estimated by using purified and non-purified samples. The sample preparation for sediments and mussels is direct and simple, comprising only solvent extraction. Mean recoveries ranged from 71% to 110%, and the corresponding (%) RSDs ranged between 4.1 and 14%. The method limits of detection ranged between 0.6 and 3.5 ng/g for sediment and mussel and from 1.3 to 1.8 ng/L for sea water. The method was applied to sea water, marine sediment, and mussels, which were obtained from marinas in Attiki, Greece. Ion ratio confirmation was used for the identification of the compounds.

  3. Robustness of phase retrieval methods in x-ray phase contrast imaging: A comparison

    International Nuclear Information System (INIS)

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2011-01-01

    Purpose: The robustness of phase retrieval methods is of critical importance for limiting and reducing the radiation doses involved in x-ray phase contrast imaging. This work compares the robustness of two phase retrieval methods by analyzing the phase maps retrieved from experimental images of a phantom. Methods: Two phase retrieval methods were compared. One method is based on the transport of intensity equation (TIE) for phase contrast projections; the TIE-based method is the most commonly used method for phase retrieval in the literature. The other is the recently developed attenuation-partition based (AP-based) phase retrieval method. The authors applied these two methods to experimental projection images of an air-bubble wrap phantom for retrieving the phase map of the bubble wrap. The retrieved phase maps obtained by using the two methods are compared. Results: In the wrap's phase map retrieved by using the TIE-based method, no bubble is recognizable; hence, this method failed completely for phase retrieval from these bubble wrap images. Even with the help of Tikhonov regularization, the bubbles are still hardly visible and buried in the cluttered background of the retrieved phase map. The retrieved phase values with this method are grossly erroneous. In contrast, in the wrap's phase map retrieved by using the AP-based method, the bubbles are clearly recovered. The retrieved phase values with the AP-based method are reasonably close to the estimate based on the thickness-based measurement. The authors traced these stark performance differences of the two methods to the different techniques they employ to deal with the singularity problem involved in the phase retrievals. Conclusions: This comparison shows that the conventional TIE-based phase retrieval method, regardless of whether Tikhonov regularization is used, is unstable against the noise in the wrap's projection images, while the AP-based phase retrieval method is shown in these

  4. Comparison Study of Three Common Technologies for Freezing-Thawing Measurement

    Directory of Open Access Journals (Sweden)

    Xinbao Yu

    2010-01-01

    Full Text Available This paper describes a comparison study of three different technologies (i.e., thermocouple, electrical resistivity probe and Time Domain Reflectometry (TDR)) that are commonly used for frost measurement. Specifically, the paper develops an analysis procedure to estimate the freezing-thawing status based on the dielectric properties of freezing soil. Experiments were conducted in which temperature, electrical resistivity, and dielectric constant were simultaneously monitored during the freezing/thawing process. The comparison uncovered the advantages and limitations of these technologies for frost measurement. The experimental results indicated that the TDR-measured soil dielectric constant clearly indicates the different stages of the freezing/thawing process. An analysis method was developed to determine not only the onset of freezing or thawing, but also the extent of their development. This is a major advantage of TDR over the other technologies.
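
    The dielectric-based interpretation works because liquid water dominates a moist soil's apparent dielectric constant while ice behaves more like the dry matrix, so the TDR reading drops as pore water freezes. A hypothetical thresholding rule in that spirit (all constants and the linear mixing assumption are illustrative, not taken from the study):

```python
def freezing_state(dielectric_k, k_unfrozen=20.0, k_frozen=5.0):
    """Hypothetical interpretation of a TDR-measured apparent dielectric
    constant K: above k_unfrozen the soil is taken as fully unfrozen,
    below k_frozen as fully frozen, and in between the unfrozen water
    fraction is estimated by linear interpolation."""
    if dielectric_k >= k_unfrozen:
        return "unfrozen"
    if dielectric_k <= k_frozen:
        return "frozen"
    frac = (dielectric_k - k_frozen) / (k_unfrozen - k_frozen)
    return f"partially frozen ({frac:.0%} unfrozen water remaining)"
```

    The interpolation step corresponds to the paper's point that TDR can track not only the onset of freezing or thawing but also the extent of its development.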

  5. Comparison of the spatial landmark scatter of various 3D digitalization methods.

    Science.gov (United States)

    Boldt, Florian; Weinzierl, Christian; Hertrich, Klaus; Hirschfelder, Ursula

    2009-05-01

    The aim of this study was to compare four different three-dimensional digitalization methods on the basis of the complex anatomical surface of a cleft lip and palate plaster cast, and to ascertain their accuracy when positioning 3D landmarks. A cleft lip and palate plaster cast was digitalized with the SCAN3D photo-optical scanner, the OPTIX 400S laser-optical scanner, the Somatom Sensation 64 computed tomography system and the MicroScribe MLX 3-axis articulated-arm digitizer. First, four examiners appraised by individual visual inspection the surface detail reproduction of the three non-tactile digitalization methods in comparison to the reference plaster cast. The four examiners then localized the landmarks five times at intervals of 2 weeks. This involved simply copying, or spatially tracing, the landmarks from the reference plaster cast to each model digitally reproduced by each digitalization method. Statistical analysis of the landmark distribution specific to each method was performed based on the 3D coordinates of the positioned landmarks. Visual evaluation of surface detail conformity assigned the photo-optical digitalization method an average score of 1.5, the highest subjectively determined conformity (surpassing the computed tomographic and laser-optical methods). The tactile scanning method revealed the lowest degree of 3D landmark scatter, 0.12 mm, and at 1.01 mm the lowest maximum 3D landmark scatter; it was followed by the computed tomographic, photo-optical and laser-optical methods (in that order). This study demonstrates that the landmarks' precision and reproducibility are determined by the complexity of the reference-model surface as well as the digital surface quality and the individual ability of each evaluator to capture 3D spatial relationships. The differences in 3D-landmark scatter and in lowest maximum 3D-landmark scatter between the best and the worst methods were minor.
The measurement results in this study reveal that it

  6. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  7. Is lower IQ in children with epilepsy due to lower parental IQ? A controlled comparison study

    Science.gov (United States)

    Walker, Natalie M; Jackson, Daren C; Dabbs, Kevin; Jones, Jana E; Hsu, David A; Stafstrom, Carl E; Sheth, Raj D; Koehn, Monica A; Seidenberg, Michael; Hermann, Bruce P

    2012-01-01

    Aim The aim of this study was to determine the relationship between parent and child full-scale IQ (FSIQ) in children with epilepsy and in typically developing comparison children, and to examine parent–child IQ differences by epilepsy characteristics. Method The study participants were 97 children (50 males, 47 females; age range 8–18y; mean age 12y 3mo, SD 3y 1mo) with recent-onset epilepsy, including idiopathic generalized (n=43) and idiopathic localization-related epilepsies (n=54); 69 healthy comparison children (38 females, 31 males; age range 8–18y; mean age 12y 8mo, SD 3y 2mo); and one biological parent per child. All participants were administered the Wechsler Abbreviated Intelligence Scale. FSIQ was compared in children with epilepsy and typically developing children; FSIQ was compared in the parents of typically developing children and the parents of participants with epilepsy; and parent–child FSIQ differences were compared between the groups. Results FSIQ was lower in children with epilepsy than in comparison children. The FSIQ of the parents of children with epilepsy did not differ from the FSIQ of the parents of typically developing children. Children with epilepsy had significantly lower FSIQ than their parents, and this parent–child difference was larger in the epilepsy group than in the comparison group (p=0.043). Epilepsy characteristics were not related to the parent–child IQ difference. Interpretation The parent–child IQ difference appears to be a marker of epilepsy impact independent of familial IQ, epilepsy syndrome, and clinical seizure features. This marker is evident early in the course of idiopathic epilepsies and can be tracked over time. PMID:23216381

  8. An empirical comparison of several recent epistatic interaction detection methods.

    Science.gov (United States)

    Wang, Yue; Liu, Guimei; Feng, Mengling; Wong, Limsoon

    2011-11-01

    Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods, TEAM, BOOST, SNPHarvester, SNPRuler and Screen and Clean (SC), are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effect and BOOST performs best on data without main effect. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester. SC does not control the type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64-bit Ubuntu system with an Intel(R) Xeon(R) CPU at 2.66 GHz and 16 GB of memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and its cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effect and 60% of datasets with main effect are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/. wangyue@nus.edu.sg.

  9. A GIS-based comparison of the Mexican national and IUCN methods for determining extinction risk.

    Science.gov (United States)

    Arroyo, Teresa P Feria; Olson, Mark E; García-Mendoza, Abisaí; Solano, Eloy

    2009-10-01

    The national systems used in the evaluation of extinction risk are often touted as more readily applied and somehow more regionally appropriate than the system of the International Union for Conservation of Nature (IUCN). We compared risk assessments of the Mexican national system (method for evaluation of risk of extinction of wild species [MER]) with the IUCN system for the 16 Polianthes taxa (Agavaceae), a genus of plants with marked variation in distribution sizes. We used a novel combination of herbarium data, geographic information systems (GIS), and species distribution models to provide rapid, repeatable estimates of extinction risk. Our GIS method showed that the MER and the IUCN system use similar data. Our comparison illustrates how the IUCN method can be applied even when all desirable data are not available, and that the MER offers no special regional advantage with respect to the IUCN regional system. Instead, our results coincided: both systems identified 14 taxa as being of conservation concern and the remaining two as low risk, largely because both systems use similar information. An obstacle for the application of the MER is that there are no standards for quantifying the criteria of habitat condition and intrinsic biological vulnerability. If these impossible-to-quantify criteria are left out, what are left are geographical distribution and the impact of human activity, essentially the considerations we were able to assess for the IUCN method. Our method has the advantage of making the IUCN criteria easy to apply, and because each step can be standardized between studies, it ensures greater comparability of extinction risk estimates among taxa.

  10. Magnetic Resonance Imaging in the measurement of whole body muscle mass: A comparison of interval gap methods

    International Nuclear Information System (INIS)

    Hellmanns, K.; McBean, K.; Thoirs, K.

    2015-01-01

    Purpose: Magnetic Resonance Imaging (MRI) is commonly used in body composition research to measure whole body skeletal muscle mass (SM). MRI calculation methods of SM can vary by analysing the images at different slice intervals (or interval gaps) along the length of the body. This study compared SM measurements made from MRI images of apparently healthy individuals using different interval gap methods to determine the error associated with each technique. It was anticipated that the results would inform researchers of optimum interval gap measurements to detect a predetermined minimum change in SM. Methods: A method comparison study was used to compare eight interval gap methods (interval gaps of 40, 50, 60, 70, 80, 100, 120 and 140 mm) against a reference 10 mm interval gap method for measuring SM from twenty MRI image sets acquired from apparently healthy participants. Pearson product-moment correlation analysis was used to determine the association between methods. Total error was calculated as the sum of the bias (systematic error) and the random error (limits of agreement) of the mean differences. Percentage error was used to demonstrate proportional error. Results: Pearson product-moment correlation analysis between the reference method and all interval gap methods demonstrated strong and significant associations (r > 0.99, p < 0.0001). The 40 mm interval gap method was comparable with the 10 mm interval reference method and had a low error (total error 0.95 kg, −3.4%). Analysis methods using wider interval gap techniques demonstrated larger errors than reported for dual-energy x-ray absorptiometry (DXA), a technique which is more available, less expensive, and less time consuming than MRI analysis of SM. Conclusions: Researchers using MRI to measure SM can be confident in using a 40 mm interval gap technique when analysing the images to detect minimum changes less than 1 kg. The use of wider intervals will introduce error that is no better than that of DXA.
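
    The error decomposition used here (total error = systematic bias + 95% limits of agreement) is straightforward to compute; a sketch with hypothetical SM values in kg, not the study's measurements:

```python
import statistics

def agreement_stats(reference, test):
    """Bias, 95% limits of agreement, total error (systematic + random),
    and percentage error for a method-comparison study."""
    diffs = [t - r for r, t in zip(reference, test)]
    bias = statistics.mean(diffs)
    loa = 1.96 * statistics.stdev(diffs)   # half-width of the 95% limits of agreement
    total_error = abs(bias) + loa
    pct_error = 100.0 * total_error / statistics.mean(reference)
    return bias, loa, total_error, pct_error

# Hypothetical SM values (kg): reference 10 mm analysis vs a wider-interval analysis
ref = [28.4, 31.2, 25.0, 33.8, 29.5]
wide = [28.9, 31.0, 25.6, 34.5, 29.1]
bias, loa, total, pct = agreement_stats(ref, wide)
print(round(total, 2), round(pct, 1))
```

    With a minimum detectable change of 1 kg in mind, a gap method is acceptable when its total error stays below that threshold, as the 40 mm method does in the study.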

  11. Emulsions: the cutting edge of development in blasting agent technology - a method for economic comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ayat, M.G.; Allen, S.G.

    1988-03-01

    This work examines the history and development of blasting agents, beginning with ANFO in the 1950s and concluding with a specific look at the blasting technology of the 1980s: the emulsion. Properties of emulsions and Emulsion Blend Explosive Systems are compared with those of ANFO, and a method of comparing their costs, useful for comparing any two explosives, is developed. Based on this comparison, the Emulsion Blend Explosive System is found to be superior to ANFO on the basis of cost per unit of overburden broken. 4 refs.

  12. Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows

    Science.gov (United States)

    Norris, Andrew T.; Hsu, Andrew T.

    1994-01-01

    In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However for flows with finite rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2-air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure that a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.

  13. A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.

    Science.gov (United States)

    Brusco, Michael J; Shireman, Emilie; Steinley, Douglas

    2017-09-01

    The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
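
    K-median differs from K-means only in using L1 distances and coordinate-wise medians, which is why it suits 0/1 measures: the median update is effectively a majority vote per variable. A minimal Lloyd-style sketch (the deterministic farthest-point seeding is our simplification, not a procedure from the paper):

```python
import random
from statistics import median

def _init_centers(points, k):
    """Deterministic farthest-point initialisation (L1 distance)."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(
            sum(abs(a - b) for a, b in zip(p, c)) for c in centers))
        centers.append(far)
    return centers

def kmedian(points, k, iters=25):
    """Lloyd-style K-median: assign by L1 distance, update by coordinate-wise median."""
    centers = _init_centers(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum(abs(a - b) for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [tuple(median(col) for col in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Toy dichotomous data: two prototype response profiles with 10% bit flips
rng = random.Random(2)
protos = [(1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1)]
data = [tuple(b if rng.random() > 0.1 else 1 - b for b in protos[i % 2])
        for i in range(60)]
centers, clusters = kmedian(data, 2)
print(sorted(len(c) for c in clusters))
```

    Swapping the L1 sums for squared Euclidean distances and the medians for means recovers K-means, which makes the two methods easy to compare on the same data.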

  14. A surrogate method for comparison analysis of salivary concentrations of Xylitol-containing products

    Directory of Open Access Journals (Sweden)

    Zhou Lingmei

    2008-02-01

    g.min/mL, gummy bears – 55.9 μg.min/mL, and syrup – 59.0 μg.min/mL. Conclusion The comparison method demonstrated high reliability and validity. In both studies other xylitol-containing products had time curves and mean xylitol concentration peaks similar to xylitol pellet gum suggesting this test may be a surrogate for longer studies comparing various products.

  15. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
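
    The simplest interval-style estimate of contingency strength is the difference between the probability of reinforcement given a response and given no response; a sketch on 0/1 observation bins (a deliberate simplification, not the exhaustive event-based algorithms compared in the paper):

```python
def contingency_strength(responses, reinforcers):
    """Bin-based contingency estimate: P(SR | response) - P(SR | no response).
    responses/reinforcers are 0/1 indicators per observation bin."""
    with_r = [s for r, s in zip(responses, reinforcers) if r]
    without_r = [s for r, s in zip(responses, reinforcers) if not r]
    p_given_r = sum(with_r) / len(with_r) if with_r else 0.0
    p_given_no_r = sum(without_r) / len(without_r) if without_r else 0.0
    return p_given_r - p_given_no_r

# Perfectly response-dependent schedule vs response-independent delivery
print(contingency_strength([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 1, 0]))  # → 1.0
print(contingency_strength([1, 0, 1, 0, 1, 0], [1, 1, 1, 1, 1, 1]))  # → 0.0
```

    An estimator's sensitivity to response-independent reinforcement, one of the paper's criteria, can be probed by mixing free deliveries into the reinforcer stream and checking how far the estimate drops toward zero.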

  16. Comparison of a Label-Free Quantitative Proteomic Method Based on Peptide Ion Current Area to the Isotope Coded Affinity Tag Method

    Directory of Open Access Journals (Sweden)

    Young Ah Goo

    2008-01-01

    Full Text Available Recently, several research groups have published methods for the determination of proteomic expression profiling by mass spectrometry without the use of exogenously added stable isotopes or stable isotope dilution theory. These so-called label-free, methods have the advantage of allowing data on each sample to be acquired independently from all other samples to which they can later be compared in silico for the purpose of measuring changes in protein expression between various biological states. We developed label free software based on direct measurement of peptide ion current area (PICA and compared it to two other methods, a simpler label free method known as spectral counting and the isotope coded affinity tag (ICAT method. Data analysis by these methods of a standard mixture containing proteins of known, but varying, concentrations showed that they performed similarly with a mean squared error of 0.09. Additionally, complex bacterial protein mixtures spiked with known concentrations of standard proteins were analyzed using the PICA label-free method. These results indicated that the PICA method detected all levels of standard spiked proteins at the 90% confidence level in this complex biological sample. This finding confirms that label-free methods, based on direct measurement of the area under a single ion current trace, performed as well as the standard ICAT method. Given the fact that the label-free methods provide ease in experimental design well beyond pair-wise comparison, label-free methods such as our PICA method are well suited for proteomic expression profiling of large numbers of samples as is needed in clinical analysis.

  17. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
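
    Once propensity scores have been estimated, the matching step itself is easy to sketch: 1:1 greedy nearest-neighbour matching without replacement, with a caliper. Unit IDs and scores below are hypothetical:

```python
def greedy_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on propensity scores, without
    replacement. treated/controls: lists of (unit_id, score) tuples."""
    available = dict(controls)
    pairs = []
    # Match high-score treated units first; they have the fewest viable controls
    for uid, score in sorted(treated, key=lambda t: t[1], reverse=True):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - score))
        if abs(available[cid] - score) <= caliper:
            pairs.append((uid, cid))
            del available[cid]
    return pairs

# Hypothetical scores from a previously fitted propensity model
treated = [("t1", 0.81), ("t2", 0.33), ("t3", 0.55)]
controls = [("c1", 0.80), ("c2", 0.31), ("c3", 0.54), ("c4", 0.10)]
print(greedy_match(treated, controls))  # → [('t1', 'c1'), ('t3', 'c3'), ('t2', 'c2')]
```

    The combinations discussed in the paper, such as matching followed by differences-in-differences, simply apply the second estimator to the outcomes of the matched pairs this step produces.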

  18. Comparison of methods for generating typical meteorological year using meteorological data from a tropical environment

    Energy Technology Data Exchange (ETDEWEB)

    Janjai, S.; Deeyai, P. [Laboratory of Tropical Atmospheric Physics, Department of Physics, Faculty of Science, Silpakorn University, Nakhon Pathom 73000 (Thailand)

    2009-04-15

    This paper presents a comparison of methods for generating typical meteorological year (TMY) data sets using a 10-year period of meteorological data from four stations in the tropical environment of Thailand. These methods are the Sandia National Laboratories method, the Danish method and the Festa and Ratto method. In investigating their performance, these methods were employed to generate TMYs for each station. For all parameters of the TMYs and the stations, statistical tests indicate that there is no significant difference between the 10-year average values of these parameters and the corresponding average values from the TMY generated by each method. The TMY obtained from each method was also used as input data to simulate two solar water heating systems and two photovoltaic systems of different sizes at the four stations using the TRNSYS simulation program. Solar fractions and electrical output calculated using TMYs are in good agreement with those computed employing the 10-year period of hourly meteorological data. It is concluded that the performances of the three methods do not differ significantly for any station under this investigation. Due to its simplicity, the method of Sandia National Laboratories is recommended for the generation of TMY for this tropical environment. The TMYs developed in this work can be used for solar energy and energy conservation applications at the four locations in Thailand. (author)
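
    Sandia-type TMY selection ranks candidate months by the Finkelstein-Schafer (FS) statistic, the mean absolute distance between a candidate month's empirical CDF and the long-term CDF. A sketch with made-up values for one parameter (a real implementation weights FS across several parameters):

```python
def fs_statistic(candidate, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    empirical CDF of a candidate month and the long-term CDF."""
    cand, pool = sorted(candidate), sorted(long_term)
    def ecdf(xs, x):
        return sum(1 for v in xs if v <= x) / len(xs)
    return sum(abs(ecdf(cand, x) - ecdf(pool, x)) for x in pool) / len(pool)

# Hypothetical daily-mean temperatures for the same month in three years
years = {1999: [24.1, 25.0, 26.3], 2000: [21.0, 22.2, 23.5], 2001: [24.0, 25.2, 26.0]}
long_term = [v for vals in years.values() for v in vals]
best = min(years, key=lambda y: fs_statistic(years[y], long_term))
print(best)  # → 2001
```

    The Danish and Festa-Ratto methods differ mainly in how such distances are weighted and combined across parameters, which is why the three methods can select different candidate months from the same record.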

  19. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    Science.gov (United States)

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.

  20. The improved quasi-static method vs the direct method: a case study for CANDU reactor transients

    International Nuclear Information System (INIS)

    Kaveh, S.; Koclas, J.; Roy, R.

    1999-01-01

    Among the large number of methods for the transient analysis of nuclear reactors, the improved quasi-static procedure is one of the most widely used. In recent years, substantial increases in both computer speed and memory have motivated a rethinking of the limitations of this method. The overall goal of the present work is a systematic comparison between the improved quasi-static method and the direct method (mesh-centered finite difference) for realistic CANDU transient simulations. The emphasis is on the accuracy of the solutions as opposed to the computational speed. Using the computer code NDF, a typical realistic transient of a CANDU reactor has been analyzed. In this transient, the response of the reactor regulating system to a substantial local perturbation (sudden extraction of the five adjuster rods) has been simulated. It is shown that when updating the detector responses is of major importance, it is better to use a well-optimized direct method rather than the improved quasi-static method. (author)

  1. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences between the methods, including their scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review of past benchmarking studies in the areas of psychology and risk assessment was conducted. A number of lessons learned through these studies are presented in order to aid the design of future HRA benchmarking endeavors.

  2. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    Science.gov (United States)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that takes the extreme flows into account. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of the entire flow range and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian inference. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
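
    The MH sampler at the core of such Bayesian calibrations can be sketched in a few lines. Here the target is a toy one-dimensional posterior (normal mean with a flat prior), not WASMOD or any of the paper's likelihood models:

```python
import math
import random
import statistics

def metropolis_hastings(log_post, x0, n_samples, step, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy target: posterior of a normal mean with flat prior and known unit variance
rng = random.Random(42)
data = [rng.gauss(2.0, 1.0) for _ in range(200)]
log_post = lambda m: -0.5 * sum((d - m) ** 2 for d in data)
chain = metropolis_hastings(log_post, x0=0.0, n_samples=4000, step=0.2)
posterior_mean = statistics.mean(chain[1000:])  # discard burn-in
print(round(posterior_mean, 2))  # close to the sample mean of the data (~2)
```

    A modularized scheme in the spirit of the paper would calibrate the hydrological parameters and the error model for extreme flows in separate blocks rather than through one joint likelihood.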

  3. Comparison of 2 electrophoretic methods and a wet-chemistry method in the analysis of canine lipoproteins.

    Science.gov (United States)

    Behling-Kelly, Erica

    2016-03-01

    The evaluation of lipoprotein metabolism in small animal medicine is hindered by the lack of a gold standard method and a paucity of validation data to support the use of the automated chemistry methods available in the typical veterinary clinical pathology laboratory. The physical and chemical differences between canine and human lipoproteins call into question whether the transfer of some of these human methodologies to the study of canine lipoproteins is valid. Validation of methodology must go hand in hand with exploratory studies into the diagnostic or prognostic utility of measuring specific lipoproteins in veterinary medicine. The goal of this study was to compare one commercially available wet-chemistry method to manual and automated lipoprotein electrophoresis in the analysis of canine lipoproteins. Canine lipoproteins from 50 dogs were prospectively analyzed by two electrophoretic methods, one automated and one manual, and one wet-chemistry method. The electrophoretic methods identified a higher proportion of low-density lipoproteins than the wet-chemistry method. Automated electrophoresis occasionally failed to identify very low-density lipoproteins. Wet-chemistry methods designed for the evaluation of human lipoproteins are insensitive to canine low-density lipoproteins and may not be applicable to the study of canine lipoproteins. Automated electrophoretic methods will likely require significant modifications if they are to be used in the analysis of canine lipoproteins. Studies aimed at determining the impact of a disease state on lipoproteins should thoroughly investigate the selected methodology prior to the onset of the study. © 2016 American Society for Veterinary Clinical Pathology.

  4. A comparison of sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces using HS-SPME.

    Science.gov (United States)

    Hough, Rachael; Archer, Debra; Probert, Christopher

    2018-01-01

    Disturbance to the hindgut microbiota can be detrimental to equine health. Metabolomics provides a robust approach to studying the functional aspects of hindgut microorganisms. Sample preparation is an important step towards achieving optimal results in the later stages of analysis, and its design is unique to the technique employed and the sample matrix to be analysed. Gas chromatography mass spectrometry (GCMS) is one of the most widely used platforms for metabolomics, and until now an optimised method had not been developed for equine faeces. The objective was to compare sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces. VOCs were determined by headspace solid-phase microextraction gas chromatography mass spectrometry (HS-SPME-GCMS). The factors investigated were the mass of equine faeces, the type of SPME fibre coating, vial volume and storage conditions. The resultant method differed from those developed for other species. Aliquots of 1000 or 2000 mg in 10 ml or 20 ml of SPME headspace were optimal. Of the fibres tested, extraction of VOCs should ideally be performed using a divinylbenzene-carboxen-polydimethylsiloxane (DVB-CAR-PDMS) SPME fibre. Storage of faeces for up to 12 months at −80 °C shared a greater percentage of VOCs with a fresh sample than the equivalent stored at −20 °C. An optimised method for extracting VOCs from equine faeces using HS-SPME-GCMS has been developed and will act as a standard to enable comparisons between studies. This work has also highlighted storage conditions as an important factor to consider in the experimental design of faecal metabolomics studies.

  5. A comparison of two search methods for determining the scope of systematic reviews and health technology assessments.

    Science.gov (United States)

    Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan

    2012-01-01

    This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA) database. Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent confidence interval {CI}, 6 percent to 72 percent]), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]), and returned more unique hits than searching by means of the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent]). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks, such as being more time-consuming and less user-friendly, it generated more unique hits than the NHS Search Engine, retrieved on average fewer references, and returned fewer irrelevant results.

  6. Comparison of multivariate methods for studying the G×E interaction

    Directory of Open Access Journals (Sweden)

    Deoclécio Domingos Garbuglio

    2015-12-01

    The objective of this work was to evaluate three statistical multivariate methods for analyzing adaptability and environmental stratification simultaneously, using data from maize cultivars indicated for planting in the State of Paraná, Brazil. Under the FGGE and GGE methods, the genotypic effect adjusts the G×E interactions across environments, resulting in a high percentage of explanation associated with a smaller number of axes. Environmental stratification via the FGGE and GGE methods showed similar responses, while the AMMI method did not ensure grouping of environments. The adaptability analysis revealed low divergence among the response patterns obtained through the three methods. Genotypes P30F35, P30F53, P30R50, P30K64 and AS 1570 showed high yields associated with general adaptability. The FGGE method allowed differences in yield responses in specific regions and the impact in locations belonging to the same environmental group (through rE) to be associated with the level of the simple portion of the G×E interaction.

  7. Comparison of Flood Frequency Analysis Methods for Ungauged Catchments in France

    Directory of Open Access Journals (Sweden)

    Jean Odry

    2017-09-01

    The objective of flood frequency analysis (FFA) is to associate flood intensity with a probability of exceedance. Many methods are currently employed for this, ranging from statistical distribution fitting to simulation approaches. In many cases the site of interest is actually ungauged, and a regionalisation scheme has to be associated with the FFA method, leading to a multiplication of the number of possible methods available. This paper presents the results of a wide-range comparison of FFA methods from statistical and simulation families associated with different regionalisation schemes based on regression, or spatial or physical proximity. The methods are applied to a set of 1535 French catchments, and a k-fold cross-validation procedure is used to consider the ungauged configuration. The results suggest that FFA from the statistical family largely relies on the regionalisation step, whereas the simulation-based method is more stable regarding regionalisation. This conclusion emphasises the difficulty of the regionalisation process. The results are also contrasted depending on the type of climate: the Mediterranean catchments tend to aggravate the differences between the methods.

  8. Inter-laboratory comparisons of hexenuronic acid measurements in kraft eucalyptus pulps using a UV-Vis spectroscopic method

    Science.gov (United States)

    J.Y. Zhu; H.F Zhou; Chai X.S.; Donna Johannes; Richard Pope; Cristina Valls; M. Blanca Roncero

    2014-01-01

    An inter-laboratory comparison of a UV-Vis spectroscopic method (TAPPI T 282 om-13, “Hexenuronic acid content of chemical pulp”) for hexenuronic acid measurements was conducted using three eucalyptus kraft pulps. The pulp samples were produced in a laboratory at kappa numbers of approximately 14, 20, and 35. The hexenuronic acid contents of the three pulps were...

  9. Comparison of Potentiometric and Gravimetric Methods for Determination of O/U Ratio

    International Nuclear Information System (INIS)

    Farida; Windaryati, L; Putro Kasino, P

    1998-01-01

    A comparison of O/U ratio determination using potentiometric and gravimetric methods has been carried out. Both methods are simple and economical and offer high precision and accuracy. The O/U ratio of UO2 powder is determined potentiometrically by adopting the Davies-Gray method, a technique based on the redox reactions of uranium species such as U(IV) and U(VI). In the gravimetric method, the UO2 powder sample is calcined at a temperature of 900 °C and the weight of the sample is measured after the calcination process. A Student's t-test shows no significant difference between the results of the two methods. However, for low concentrations in the sample, the potentiometric method has higher precision and accuracy than the gravimetric method. The O/U ratios obtained are 2.00768 ± 0.00170 for the potentiometric method and 2.01089 ± 0.02395 for the gravimetric method.
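
    The t-test comparison of the two methods can be reproduced from summary statistics alone; the replicate count n used below is an assumption, since the abstract does not report it:

```python
import math

def t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from summary statistics (mean, SD, n) of two methods."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# O/U ratios from the abstract; n = 10 replicates per method is an assumption
t = t_from_summary(2.00768, 0.00170, 10, 2.01089, 0.02395, 10)
print(round(t, 2))  # → -0.42, well inside ±2: no significant difference
```

    The tiny |t| comes almost entirely from the gravimetric method's much larger SD, which also explains why the potentiometric method wins on precision at low concentrations.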

  10. Detection of circulating immune complexes by Raji cell assay: comparison of flow cytometric and radiometric methods

    International Nuclear Information System (INIS)

    Kingsmore, S.F.; Crockard, A.D.; Fay, A.C.; McNeill, T.A.; Roberts, S.D.; Thompson, J.M.

    1988-01-01

    Several flow cytometric methods for the measurement of circulating immune complexes (CIC) have recently become available. We report a Raji cell flow cytometric assay (FCMA) that uses aggregated human globulin (AHG) as the primary calibrator. Technical advantages of the Raji cell flow cytometric assay are discussed, and its clinical usefulness is evaluated in a method comparison study with the widely used Raji cell immunoradiometric assay. FCMA is more precise and has greater analytic sensitivity for AHG. Diagnostic sensitivity of the flow cytometric method is superior in systemic lupus erythematosus (SLE), rheumatoid arthritis, and vasculitis patients; however, diagnostic specificity is similar for both assays, although the reference interval of FCMA is narrower. Significant correlations were found between CIC levels obtained with the two methods in SLE, rheumatoid arthritis, and vasculitis patients and in longitudinal studies of two patients with cerebral SLE. The Raji cell FCMA is recommended for the measurement of CIC levels in clinical laboratories with access to a flow cytometer.

  11. Estimation of the specific activity of radioiodinated gonadotrophins: comparison of three methods

    Energy Technology Data Exchange (ETDEWEB)

    Englebienne, P [Centre for Research and Diagnosis in Endocrinology, Kain (Belgium); Slegers, G [Akademisch Ziekenhuis, Ghent (Belgium). Lab. voor Analytische Chemie

    1983-01-14

    The authors compared three methods for estimating the specific activity of radioiodinated gonadotrophins. Two of the methods (column recovery and isotopic dilution) gave similar results, while the third (autodisplacement) gave significantly higher estimates. In the autodisplacement method, B/T ratios obtained when either labelled hormone alone, or labelled and unlabelled hormone, are added to the antibody were compared as estimates of the mass of hormone iodinated. It is likely that immunologically unreactive impurities present in the labelled hormone solution invalidate such comparisons.

  12. Statistical inference methods for two crossing survival curves: a comparison of methods.

    Science.gov (United States)

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
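As a reference point for the tests compared above, the standard two-sample log-rank statistic (the test the cited survey found still used in 70% of studies) can be sketched as follows; the survival data are hypothetical:

```python
def logrank_test(times1, events1, times2, events2):
    """Two-sample log-rank test; returns the chi-square statistic (1 df).

    times*: follow-up times; events*: 1 = event observed, 0 = censored.
    """
    data = [(t, e, 0) for t, e in zip(times1, events1)] + \
           [(t, e, 1) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    O1 = E1 = V = 0.0
    for tj in event_times:
        n1 = sum(1 for t, _, g in data if t >= tj and g == 0)   # at risk, group 1
        n = sum(1 for t, _, _ in data if t >= tj)               # at risk, total
        d = sum(1 for t, e, _ in data if t == tj and e == 1)    # events at tj
        d1 = sum(1 for t, e, g in data if t == tj and e == 1 and g == 0)
        O1 += d1
        E1 += d * n1 / n
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O1 - E1) ** 2 / V

# Hypothetical survival times (months; 1 = death, 0 = censored)
chi2 = logrank_test([2, 4, 5, 7, 9, 12], [1, 1, 1, 1, 0, 1],
                    [8, 10, 14, 15, 18, 20], [1, 0, 1, 1, 1, 0])
print(f"log-rank chi-square = {chi2:.2f}")   # compare with 3.84 (5% point, chi2 with 1 df)
```

This is the baseline the abstract's alternatives (Rényi, Cramér-von Mises, adaptive Neyman smooth tests, two-stage procedure) are compared against when the proportional-hazards assumption fails.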

  13. Comparison of the quantitative dry culture methods with both conventional media and most probable number method for the enumeration of coliforms and Escherichia coli/coliforms in food.

    Science.gov (United States)

    Teramura, H; Sota, K; Iwasaki, M; Ogihara, H

    2017-07-01

    Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, the sensitivity and specificity of both Sanita-kun™ media met the criteria of ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ media for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples; its correlation coefficients were again more than 0·95 for all comparisons. There were no significant differences in any comparison in a one-way analysis of variance (ANOVA). Both Sanita-kun™ media significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliforms. Our results demonstrated that Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and E. coli/coliforms suffer enzymatic coloration caused by the breakdown of chromogenic substrates by food lactase. The novel sheet culture media, which have a film layer to avoid coloration by food lactase, were developed for the enumeration of coliforms and E. coli/coliforms, respectively. In this study, we demonstrated that these media had performance comparable to the reference methods and less interference by food lactase. These media have a possibility not only

  14. Pulsational stabilities of a star in thermal imbalance: comparison between the methods

    International Nuclear Information System (INIS)

    Vemury, S.K.

    1978-01-01

    The stability coefficients for quasi-adiabatic pulsations of a model in thermal imbalance are evaluated using the dynamical energy (DE) approach, the total (kinetic plus potential) energy (TE) approach, and the small amplitude (SA) approaches. From a comparison among the methods, it is found that there can exist two distinct stability coefficients under conditions of thermal imbalance, as pointed out by Demaret. It is shown that both the TE approaches lead to one stability coefficient, while both the SA approaches lead to another. The coefficient obtained through the energy approaches is identified as the one which determines the stability of the velocity amplitudes. For a prenova model with a thin hydrogen-burning shell in thermal imbalance, several radial modes are found to be unstable both for radial displacements and for velocity amplitudes. However, a new kind of pulsational instability also appears: while the radial displacements are unstable, the velocity amplitudes may be stabilized through the thermal imbalance terms.

  15. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce development time and improve forming results. But to take full advantage of the simulations, it must be ensured that the predicted material behaviour is correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are, for example, the outer contour, the occurrence of defects and the fibre paths. Various methods are available to measure these features. Most relevant, and also most difficult to measure, are the emerging fibre orientations; for that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and to select the most promising systems for a comparison survey. Selected were an optical system, an eddy current system and a computer-assisted tomography system, with the focus on measuring the fibre orientations. Various formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use but are limited to the surface plies. With an eddy current system lower plies can also be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  16. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can additionally be specified. A similar problem consists in fitting component models: the optimization variables are then the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted using analytical functions with known minima that are classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program

  17. A comparison of three methods for determining the amount of nitric acid needed to treat HLW sludge at SRS

    International Nuclear Information System (INIS)

    Siegwald, S.F.; Ferrara, D.M.

    1994-01-01

    A comparison was made of three methods for determining the amount of nitric acid which will be needed to treat a sample of high-level waste (HLW) sludge from the Savannah River Site (SRS) Tank Farm. The treatment must ensure that the resulting melter feed will have the necessary rheological and oxidation-reduction properties, reduce mercury and manganese in the sludge, and be performed in a fashion which does not produce a flammable gas mixture. The three methods examined were an empirical method based on pH measurements, a computational method based on known reactions of the species in the sludge, and a titration based on neutralization of carbonate in the solution.

  18. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Science.gov (United States)

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
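The linear route of the comparison above can be sketched with the type 1 linearization t/qt = 1/(k·qe²) + t/qe of the pseudo-second-order model; the rate constant, equilibrium capacity and time points below are hypothetical:

```python
# Pseudo-second-order model: qt = k * qe^2 * t / (1 + k * qe * t)
k_true, qe_true = 0.05, 8.0            # hypothetical generating parameters
ts = [1, 2, 5, 10, 20, 40, 60, 90, 120]
qs = [k_true * qe_true**2 * t / (1 + k_true * qe_true * t) for t in ts]

# Type 1 linearization: regress t/qt on t; slope = 1/qe, intercept = 1/(k*qe^2)
xs, ys = ts, [t / q for t, q in zip(ts, qs)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
qe_lin = 1 / slope                      # equilibrium sorption capacity
k_lin = slope ** 2 / intercept          # pseudo-second-order rate constant

print(f"qe = {qe_lin:.3f}, k = {k_lin:.4f}")   # recovers the generating parameters
```

On noise-free data the linear transform is exact; the abstract's point is that with real (noisy) data the four linearizations weight errors differently, whereas a non-linear least-squares fit of qt directly avoids that distortion.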

  19. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Frédéric Li

    2018-02-01

    Full Text Available Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches—in particular deep-learning based—have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we first propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long Short-Term Memory (LSTM) layers to obtain features characterising both short- and long-term time dependencies in the data.

  20. Reexamining the Dissolution of Spent Fuel: A Comparison of Different Methods for Calculating Rates

    International Nuclear Information System (INIS)

    Hanson, Brady D.; Stout, Ray B.

    2004-01-01

    Dissolution rates for spent fuel have typically been reported in terms of a rate normalized to the surface area of the specimen. Recent evidence has shown that neither the geometric surface area nor that measured with BET accurately predicts the effective surface area of spent fuel. Dissolution rates calculated from results obtained by flow-through tests were reexamined, comparing cumulative releases with surface-area-normalized rates. While the initial surface area is important for comparison of different rates, it appears that normalizing to the surface area introduces unnecessary uncertainty compared with using cumulative or fractional release rates. Discrepancies in past data analyses are mitigated using this alternative method.

  1. Experimental comparison of the different methods for seismic qualification of electrical cabinets

    International Nuclear Information System (INIS)

    Buland, P.; Gauthier, G.; Simon, D.

    1993-01-01

    This paper presents the results of an experimental seismic study performed on a cabinet equipped with 96 acceleration-sensitive relays located in four racks. The aim of this study is to verify the validity of the seismic qualification method proposed in the IEC 980 standard. The cabinet was first tested on a shaking table and the relay chatter was monitored. The interactions between the racks and the cabinet frame induce shocks which were correlated with some of the contact openings. The racks were afterwards tested individually with the accelerations recorded during the cabinet test. A comparison of relay chatter was performed for both test phases. An important reduction of relay chatter was noticed during the rack tests. This is due to the fact that it is not possible to fully represent on a shaking table the complex vibration environment (shocks) sustained by a rack in a cabinet.

  2. A study for lattice comparison for PLS 2 GeV storage ring

    International Nuclear Information System (INIS)

    Yoon, M.

    1991-01-01

    TBA and DBA lattices are compared for a 1.5-2.5 GeV synchrotron light source, with particular attention to the PLS 2 GeV electron storage ring currently being developed in Pohang, Korea. For the comparison study, the optimum electron energy was chosen to be 2 GeV, the circumference of the ring less than 280.56 m, and the natural beam emittance no greater than 13 nm. Results from various linear and nonlinear optics comparison studies are presented.

  3. Comparison of selected methods for the enumeration of fecal coliforms and Escherichia coli in shellfish.

    Science.gov (United States)

    Grabow, W O; De Villiers, J C; Schildhauer, C I

    1992-09-01

    In a comparison of five selected methods for the enumeration of fecal coliforms and Escherichia coli in naturally contaminated and sewage-seeded mussels (Choromytilus spp.) and oysters (Ostrea spp.), a spread-plate procedure with mFC agar without rosolic acid and preincubation proved the method of choice for routine quality assessment.

  4. A global multicenter study on reference values: 1. Assessment of methods for derivation and comparison of reference intervals.

    Science.gov (United States)

    Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K

    2017-04-01

    The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  5. A Comparison of Platelet Count and Enrichment Percentages in the Platelet Rich Plasma (PRP) Obtained Following Preparation by Three Different Methods.

    Science.gov (United States)

    Sabarish, Ram; Lavu, Vamsi; Rao, Suresh Ranga

    2015-02-01

    Platelet rich plasma (PRP) represents an easily accessible and rich source of autologous growth factors. Different manual methods for the preparation of PRP have been suggested, but a lacuna in knowledge exists about their relative efficacy. This study was performed to determine the effects of centrifugation rate (revolutions per minute, RPM) and time on the platelet counts and enrichment percentages in the concentrates obtained following three different manual methods of PRP preparation. In this in vitro experimental study, platelet concentration was assessed in PRP prepared by three different protocols, as suggested by Marx R (method 1), Okuda K (method 2) and Landesberg R (method 3). A total of 60 peripheral blood samples (n=20 per method) were obtained from healthy volunteers. The baseline platelet count was assessed for all subjects, following which PRP was prepared. The platelet count in the PRP was determined using a Coulter counter (Sysmex XT 2000i). The mean platelet counts and their enrichment percentages were calculated and an intergroup comparison was done (Tukey's HSD test). The number of platelets and the enrichment percentage in PRP prepared by method 1 were higher than with methods 2 and 3; this difference in platelet concentrates was statistically significant (p < 0.05). The centrifugation rate and time appear to be important parameters which influence the platelet yield. Method 1, which had a lower centrifugation rate and time, yielded a greater platelet count and enrichment percentage.
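The enrichment percentage used in the intergroup comparison can be sketched as below, taking enrichment as the percentage increase of the PRP platelet count over the whole-blood baseline; the counts are hypothetical:

```python
def enrichment_percentage(baseline_count, prp_count):
    """Percentage increase of the PRP platelet count over the whole-blood baseline."""
    return (prp_count - baseline_count) / baseline_count * 100

# Hypothetical counts (platelets per microlitre)
baseline = 250_000
prp_counts = {"method 1": 1_050_000, "method 2": 820_000, "method 3": 690_000}

for method, count in prp_counts.items():
    print(f"{method}: {enrichment_percentage(baseline, count):.0f}% enrichment")
```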

  6. Evaluation of Acute Aortic Dissection Type A Factors and Comparison of the Postoperative Clinical Outcomes between Two Surgical Methods

    Directory of Open Access Journals (Sweden)

    Hasan Shemirani

    2017-01-01

    Full Text Available Background: Although aortic dissection is a rare disease, it causes a high level of mortality. If the ascending aorta is involved, the dissection is classified as type A. Given the small number of studies on this disease in Iran, this study was conducted to identify the factors related to acute aortic dissection type A, its surgical outcomes and the factors affecting them. Materials and Methods: In this historical cohort study, all patients with acute aortic dissection type A referred to Chamran Hospital from 2006 to 2012 were studied. The impact of two surgical methods, antegrade cerebral perfusion (ACP) and retrograde cerebral perfusion (RCP), on surgical and long-term mortality and on recurrence of dissection was determined. The relation of the mortality rate to hemodynamic instability before surgery, age over 70 years, ejection fraction lower than 50%, prolonged cardiopulmonary bypass pump (CPBP) time and excessive blood transfusion was assessed. Results: Surgical mortality, long-term mortality and recurrence of dissection were 35.3%, 30.8% and 30.4%, respectively. Surgical and long-term death in patients operated on by the ACP method was lower than in those operated on by RCP (P < 0.001). Excessive blood transfusion and unstable hemodynamic condition had a significant effect on surgical mortality (P = 0.014 and 0.030, respectively). CPBP time and unstable hemodynamic condition significantly affected long-term mortality (P = 0.002). Conclusion: ACP is preferable to RCP with respect to surgical and long-term mortality.

  7. Assessment of Three Flood Hazard Mapping Methods: A Case Study of Perlis

    Science.gov (United States)

    Azizat, Nazirah; Omar, Wan Mohd Sabki Wan

    2018-03-01

    Flooding is a common natural disaster that affects every state in Malaysia. According to the Drainage and Irrigation Department (DID), in 2007 about 29,270 km2, or 9 percent of the area of the country, was prone to flooding. Floods can be devastating catastrophes affecting people, the economy and the environment. Flood hazard mapping is an important part of flood assessment, used to delineate high-risk areas prone to flooding. The purposes of this study are to prepare flood hazard maps of Perlis and to evaluate flood hazard using the frequency ratio, statistical index and Poisson methods. Maps of the six factors affecting the occurrence of floods (elevation, distance from the drainage network, rainfall, soil texture, geology and erosion) were created using ArcGIS 10.1 software. The flood location map in this study was generated from areas flooded in 2010, based on DID records. These parameters and the flood location map were analysed to prepare flood hazard maps representing the probability of flooding. The results of the analysis were verified using flood location data from 2013, 2014 and 2015. The comparison showed that the statistical index method predicted flood areas better than the frequency ratio and Poisson methods.
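The frequency ratio method referred to above can be sketched as follows: for each class of a conditioning factor, the ratio of the flood-occurrence percentage to the area percentage is computed, and values above 1 indicate a positive association with flooding. The elevation classes and cell counts below are hypothetical:

```python
def frequency_ratio(flood_cells, area_cells):
    """Frequency ratio per factor class:
    (% of flood cells in the class) / (% of study-area cells in the class)."""
    total_flood = sum(flood_cells.values())
    total_area = sum(area_cells.values())
    return {cls: (flood_cells[cls] / total_flood) / (area_cells[cls] / total_area)
            for cls in area_cells}

# Hypothetical elevation classes: flooded raster cells vs. all cells per class
flood = {"0-10 m": 60, "10-50 m": 30, ">50 m": 10}
area = {"0-10 m": 2000, "10-50 m": 3000, ">50 m": 5000}

for cls, fr in frequency_ratio(flood, area).items():
    print(f"{cls}: FR = {fr:.2f}")
```

Summing the ratios of a cell's classes across all factor maps gives the hazard index that is then classified into the hazard map.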

  8. A comparison of digital gene expression profiling and methyl DNA immunoprecipitation as methods for gene discovery in honeybee (Apis mellifera) behavioural genomic analyses.

    Directory of Open Access Journals (Sweden)

    Cui Guan

    Full Text Available The honey bee has a well-organized system of division of labour among workers. Workers typically progress through a series of discrete behavioural castes as they age, and this has become an important case study for exploring how dynamic changes in gene expression can influence behaviour. Here we applied both digital gene expression analysis and methyl DNA immunoprecipitation analysis to nurse, forager and reverted nurse bees (nurses that have returned to the nursing state after a period spent foraging) from the same colony, in order to compare the outcomes of these different forms of genomic analysis. A total of 874 and 710 significantly differentially expressed genes were identified in forager/nurse and reverted nurse/forager comparisons, respectively. Of these, 229 genes exhibited reversed directions of gene expression differences between the forager/nurse and reverted nurse/forager comparisons. Using methyl-DNA immunoprecipitation combined with high-throughput sequencing (MeDIP-seq), we identified 366 and 442 significantly differentially methylated genes in forager/nurse and reverted nurse/forager comparisons, respectively. Of these, 165 genes were identified as differentially methylated in both comparisons. However, very few genes were identified as both differentially expressed and differentially methylated in our comparisons of nurses and foragers. These findings confirm that changes in both gene expression and DNA methylation are involved in the nurse and forager behavioural castes, but the different analytical methods reveal quite distinct sets of candidate genes.

  9. A comparison between the conventional manual ROI method and an automatic algorithm for semiquantitative analysis of SPECT studies

    International Nuclear Information System (INIS)

    Pagan, L; Novi, B; Guidarelli, G; Tranfaglia, C; Galli, S; Lucchi, G; Fagioli, G

    2011-01-01

    In this study, the performance of a free software package for automatic segmentation of striatal SPECT brain studies (BasGanV2 - www.aimn.it) and a standard manual Region Of Interest (ROI) method were compared. The anthropomorphic Alderson RSD phantom, filled with solutions at different concentrations of 123I-FP-CIT, with caudate-putamen to background ratios between 1 and 8.7 and caudate to putamen ratios between 1 and 2, was imaged on a Philips Irix triple-head gamma camera. Images were reconstructed using filtered back-projection and processed both with BasGanV2, which provides normalized striatal uptake values on volumetric anatomical ROIs, and with a manual method based on average counts per voxel in ROIs drawn on a three-slice section. Caudate-putamen/background and caudate/putamen ratios obtained with the two methods were compared with the true experimental ratios. Good correlation was found for each method; BasGanV2, however, had the higher correlation index (mean R = 0.95, versus 0.89 for the manual method). BasGanV2 is therefore well suited to the semiquantitative analysis of 123I-FP-CIT SPECT data, with the added advantage of an available control-subject database.

  10. Hydrogeochemical methods for studying uranium mineralization in sedimentary rocks

    International Nuclear Information System (INIS)

    Lisitsin, A.K.

    1985-01-01

    The role of hydrogeochemical studies of uranium deposits is considered; such studies permit data to be obtained on the ore-forming role of water solutions. The hydrogeochemistry of ore formation is determined from a physicochemical analysis of mineral paragenesis. Analyses of the content of primary and secondary gaseous-liquid inclusions in the minerals are of great importance. Another way to determine the main features of ore-formation hydrogeochemistry involves the simultaneous analysis of material from a number of deposits of one genetic type but in different periods of their geochemical life: being formed, formed and preserved, and being destroyed. Comparison of the mineralogical-geochemical zonation with the hydrogeochemical zonation in the water-bearing horizon is an efficient method, leading to an objective interpretation of the facts. This comparison is obligatory when determining deposit genesis.

  11. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is an incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Suitable experimental and analytical information therefore has to be selected to specify these uncertainty ranges or distributions. After the closure of the UMS and the issue of its report, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP 5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limits of the uncertainty ranges.

  12. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between the likelihood equations and moment-type equations is also discussed.
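A minimal sketch of the known-variance setting discussed above: in the random effects model x_i = mu + b_i + e_i, with b_i ~ N(0, tau²) and e_i ~ N(0, s_i²) where the s_i² are known, the common mean mu and between-method variance tau² can be estimated by maximizing the log-likelihood over a grid of tau² values (for each tau² the optimal mu is the inverse-variance weighted mean). The measurement values are hypothetical:

```python
import math

def ml_common_mean(x, s2, tau2_grid):
    """Maximum likelihood for mu and tau^2 in the model x_i ~ N(mu, s2_i + tau2),
    scanning tau^2 over a grid; for each candidate tau^2 the optimal mu is the
    inverse-variance weighted mean."""
    best = None
    for tau2 in tau2_grid:
        w = [1.0 / (s + tau2) for s in s2]
        mu = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        loglik = -0.5 * sum(math.log(s + tau2) + (xi - mu) ** 2 / (s + tau2)
                            for xi, s in zip(x, s2))
        if best is None or loglik > best[0]:
            best = (loglik, mu, tau2)
    return best[1], best[2]

# Hypothetical inter-method data: one result and known variance per method
x = [10.2, 9.8, 10.9, 10.4]          # method means
s2 = [0.04, 0.09, 0.04, 0.16]        # known within-method variances
mu, tau2 = ml_common_mean(x, s2, [i * 0.005 for i in range(200)])
print(f"mu = {mu:.3f}, tau2 = {tau2:.3f}")
```

This grid search stands in for the polynomial / Groebner-basis machinery of the paper, which characterizes the stationary points of the same likelihood exactly.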

  13. Treatment of transverse patellar fractures: a comparison between metallic and non-metallic implants.

    Science.gov (United States)

    Heusinkveld, Maarten H G; den Hamer, Anniek; Traa, Willeke A; Oomen, Pim J A; Maffulli, Nicola

    2013-01-01

    Several methods of transverse patellar fracture fixation have been described. This study compares the clinical outcomes and the occurrence of complications of various fixation methods. The databases PubMed, Web of Science, Science Direct, Google Scholar and Google were searched. A direct comparison between fixation techniques using mixed or non-metallic implants and metallic K-wire and tension band fixation shows no significant difference in clinical outcome between the two groups. Additionally, studies reporting novel operation techniques show good clinical results. Studies describing the treatment of patients using non-metallic or mixed implants are fewer than those using metallic fixation. A large variety of clinical scoring systems was used for assessing the results of treatment, which makes direct comparison difficult. More data on fracture treatment using non-metallic or mixed implants are needed to achieve a more balanced comparison.

  14. An Enzymatic Clinical Chemistry Laboratory Experiment Incorporating an Introduction to Mathematical Method Comparison Techniques

    Science.gov (United States)

    Duxbury, Mark

    2004-01-01

    An enzymatic laboratory experiment based on the analysis of serum is described that is suitable for students of clinical chemistry. The experiment incorporates an introduction to mathematical method-comparison techniques in which three different clinical glucose analysis methods are compared using linear regression and Bland-Altman difference…
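
    The Bland-Altman difference analysis mentioned above can be sketched in a few lines: plot-free, it reduces to the mean difference (bias) and the 95% limits of agreement. The glucose values below are hypothetical, not taken from the experiment.

    ```python
    import statistics

    # paired glucose measurements from two methods (hypothetical, mmol/L)
    method_a = [4.1, 5.6, 6.2, 7.8, 9.0, 10.5, 12.1]
    method_b = [4.3, 5.4, 6.5, 7.5, 9.4, 10.2, 12.6]

    diffs = [a - b for a, b in zip(method_a, method_b)]
    means = [(a + b) / 2 for a, b in zip(method_a, method_b)]  # x-axis of the plot

    bias = statistics.mean(diffs)               # mean difference between methods
    sd = statistics.stdev(diffs)                # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

    print(f"bias = {bias:.3f}, limits of agreement = [{loa[0]:.3f}, {loa[1]:.3f}]")
    ```

    In the classroom exercise the same quantities, together with a linear regression of one method on the other, let students judge whether two assays agree well enough to be used interchangeably.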

  15. Antimicrobial activity evaluation and comparison of methods of susceptibility for Klebsiella pneumoniae carbapenemase (KPC)-producing Enterobacter spp. isolates.

    Science.gov (United States)

    Rechenchoski, Daniele Zendrini; Dambrozio, Angélica Marim Lopes; Vivan, Ana Carolina Polano; Schuroff, Paulo Alfonso; Burgos, Tatiane das Neves; Pelisson, Marsileni; Perugini, Marcia Regina Eches; Vespero, Eliana Carolina

    The production of KPC (Klebsiella pneumoniae carbapenemase) is the major mechanism of resistance to carbapenem agents in Enterobacteriaceae. In this context, forty KPC-producing Enterobacter spp. clinical isolates were studied. The activity of the antimicrobial agents polymyxin B, tigecycline, ertapenem, imipenem and meropenem was evaluated, and the methodologies used to determine susceptibility were compared: broth microdilution, Etest® (bioMérieux), the Vitek 2® automated system (bioMérieux) and disc diffusion. The minimum inhibitory concentration (MIC) was calculated for each antimicrobial, and polymyxin B showed the lowest concentrations by broth microdilution. Errors were also calculated among the techniques; tigecycline and ertapenem were the antibiotics with the largest and the smallest numbers of discrepancies, respectively. Moreover, the Vitek 2® automated system was the method most similar to broth microdilution. Therefore, it is important to evaluate the performance of new methods in comparison to the reference method, broth microdilution. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  16. EPA (Environmental Protection Agency) Method Study 12, cyanide in water. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Winter, J.; Britton, P.; Kroner, R.

    1984-05-01

    EPA Method Study 12, Cyanide in Water reports the results of a study by EMSL-Cincinnati for the parameters Total Cyanide and Cyanides Amenable to Chlorination, present in water at microgram-per-liter levels. Four methods (pyridine-pyrazolone, pyridine-barbituric acid, electrode and Roberts-Jackson) were used by 112 laboratories in Federal and State agencies, municipalities, universities, and the private/industrial sector. Sample concentrates were prepared in pairs with similar concentrations at each of three levels. Analysts diluted the samples to volume with distilled and natural waters and analyzed them. Precision, accuracy, bias and the natural-water interference were evaluated for each analytical method, and comparisons were made between the four methods.

  17. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for the construction of CIs for the CCC, but a comprehensive comparison of them has not been attempted. The methods consisted of the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with a conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
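
    A minimal sketch of the CCC and the jackknife CI with Fisher's Z-transformation (the JZ method discussed above): the two-rater data below are hypothetical, and the moment-based (1/n) CCC estimator used here is one common variant, not necessarily the paper's exact formulation.

    ```python
    import math, statistics

    def ccc(x, y):
        """Lin's concordance correlation coefficient (moment estimator, 1/n)."""
        n = len(x)
        mx, my = statistics.fmean(x), statistics.fmean(y)
        sx = sum((v - mx) ** 2 for v in x) / n
        sy = sum((v - my) ** 2 for v in y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
        return 2 * sxy / (sx + sy + (mx - my) ** 2)

    def jackknife_z_ci(x, y, z_crit=1.96):
        """95% jackknife CI on Fisher's Z scale, back-transformed (JZ method)."""
        n = len(x)
        z_full = math.atanh(ccc(x, y))
        # pseudovalues from leave-one-out estimates
        pseudo = []
        for i in range(n):
            z_i = math.atanh(ccc(x[:i] + x[i+1:], y[:i] + y[i+1:]))
            pseudo.append(n * z_full - (n - 1) * z_i)
        z_bar = statistics.fmean(pseudo)
        se = statistics.stdev(pseudo) / math.sqrt(n)
        return math.tanh(z_bar - z_crit * se), math.tanh(z_bar + z_crit * se)

    rater1 = [10.2, 11.5, 12.3, 13.8, 15.1, 16.0, 17.4, 18.9]  # hypothetical
    rater2 = [10.5, 11.2, 12.9, 13.5, 15.6, 15.8, 17.9, 18.4]
    lo, hi = jackknife_z_ci(rater1, rater2)
    print(f"CCC = {ccc(rater1, rater2):.3f}, 95% JZ CI = [{lo:.3f}, {hi:.3f}]")
    ```

    Back-transforming with tanh keeps the interval inside (-1, 1), which is the practical motivation for working on the Z scale.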

  18. Comparison of different concentration methods for the detection of hepatitis A virus and calicivirus from bottled natural mineral waters

    DEFF Research Database (Denmark)

    Di Pasquale, S.; Paniconi, M; Auricchio, B

    2010-01-01

    To evaluate the efficiency of different recovery methods of viral RNA from bottled water, a comparison was made of 2 positively and 2 negatively charged membranes that were used for adsorbing and releasing HAV virus particles during the filtration of viral-spiked bottled water. All the 4 membrane...

  19. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, namely Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; development of a crash-costs-based approach for comparing crash severity prediction methods; and investigation of the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performance, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost the exact opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
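
    The idea of a crash-costs-based accuracy measure can be illustrated with a toy calculation: instead of counting correct predictions, weight each crash by its cost so that misclassifying a fatal crash hurts far more than misclassifying a property-damage-only one. The severity labels and relative costs below are assumptions for illustration, not the paper's actual figures or metric.

    ```python
    # relative crash costs by severity level (hypothetical values)
    costs = {"PDO": 1, "injury": 10, "fatal": 100}

    def cost_weighted_accuracy(y_true, y_pred):
        """Fraction of total crash cost attached to correctly predicted crashes."""
        total = sum(costs[y] for y in y_true)
        correct = sum(costs[y] for y, p in zip(y_true, y_pred) if y == p)
        return correct / total

    y_true = ["PDO", "PDO", "injury", "fatal", "injury"]
    y_pred = ["PDO", "injury", "injury", "fatal", "PDO"]
    # 3 of 5 crashes correct (60%), but those carry 111 of 122 cost units
    print(f"cost-weighted accuracy = {cost_weighted_accuracy(y_true, y_pred):.3f}")
    ```

    On this toy data the plain correct-prediction rate is 0.6 while the cost-weighted rate is about 0.91, showing how the two measures can rank classifiers differently.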

  20. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    Science.gov (United States)

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter-estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter-estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
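
    Fitting the power law α(f) = α0·f^β reduces to ordinary least squares after a log-log transform: log α = log α0 + β·log f. The sketch below uses noiseless synthetic attenuation values (not the paper's phantom data), so the fit recovers the generating parameters exactly.

    ```python
    import math

    # synthetic attenuation measurements: alpha in dB/cm at frequencies in MHz,
    # generated from alpha0 = 0.5, beta = 1.1 (illustrative values)
    freqs = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
    alphas = [0.5 * f ** 1.1 for f in freqs]

    # linearize: log(alpha) = log(alpha0) + beta*log(f), then least squares
    lx = [math.log(f) for f in freqs]
    ly = [math.log(a) for a in alphas]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
            / sum((x - mx) ** 2 for x in lx))
    alpha0 = math.exp(my - beta * mx)
    print(f"alpha0 = {alpha0:.3f} dB/cm/MHz^beta, beta = {beta:.3f}")
    ```

    With real PSD-derived attenuation estimates the same fit would carry the bias and variance of the chosen spectral estimator, which is exactly what the study quantifies.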

  1. A comparison of U.S. and European methods for accident scenario identification, selection and quantification

    International Nuclear Information System (INIS)

    Cadwallader, L.C.; Djerassi, H.; Lampin, I.

    1989-10-01

    This paper presents a comparison of the varying methods used to identify and select accident-initiating events for safety analysis and probabilistic risk assessment (PRA). Initiating events are important in that they define the extent of a given safety analysis or PRA. Comprehensiveness in the identification and selection of initiating events is necessary to ensure that a thorough analysis is being performed. While total completeness can never be realized, inclusion of all safety-significant events can be attained. The European approach to initiating event identification and selection arises from within a newly developed Safety Analysis methodology framework. This is a functional approach, with accident initiators based on events that will cause a system or facility loss of function. The US method divides accident initiators into two groups: internal and external events. Since traditional US PRA techniques are applied to fusion facilities, the recommended PRA-based approach is a review of historical safety documents coupled with a facility-level Master Logic Diagram. The US and European methods are described, and both are applied to a proposed International Thermonuclear Experimental Reactor (ITER) Magnet System in a sample problem. Contrasts between the US and European methods are discussed. Within their respective frameworks, each method can provide the comprehensiveness of safety-significant events needed for a thorough analysis. 4 refs., 8 figs., 11 tabs

  2. Comparison of three-dimensional poisson solution methods for particle-based simulation and inhomogeneous dielectrics.

    Science.gov (United States)

    Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio

    2012-07-01

    Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. 
On the other hand, when the dielectric boundary is discretized with curved surface elements, the

  3. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon incompletely formulated assumptions or value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probably diminishing returns for large generic comparisons [fr

  4. Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods.

    Directory of Open Access Journals (Sweden)

    Nawar Shara

    Kidney and cardiovascular disease are widespread among populations with a high prevalence of diabetes, such as the American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing-at-random models and one non-missing-at-random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.
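
    Two of the simpler single-imputation strategies compared in the study, the mean of serial measures and the adjacent value, can be sketched as follows. The serial values are hypothetical (not SHS data), and `None` marks a missed exam.

    ```python
    def mean_of_serial(series):
        """Replace each missing value with the mean of the subject's observed values."""
        observed = [v for v in series if v is not None]
        m = sum(observed) / len(observed)
        return [m if v is None else v for v in series]

    def adjacent_value(series):
        """Carry the nearest earlier observation forward (fall back to the next one)."""
        out = list(series)
        for i, v in enumerate(out):
            if v is None:
                prev = next((out[j] for j in range(i - 1, -1, -1)
                             if out[j] is not None), None)
                nxt = next((out[j] for j in range(i + 1, len(out))
                            if out[j] is not None), None)
                out[i] = prev if prev is not None else nxt
        return out

    exams = [1.1, None, 1.4]        # exam 1, exam 2 (missing), exam 3
    print(mean_of_serial(exams))    # second value imputed as the subject mean
    print(adjacent_value(exams))    # second value carried forward from exam 1
    ```

    Both methods fill in plausible values but, unlike multiple imputation or pattern-mixture models, they ignore the uncertainty of the imputation and the missingness mechanism, which is the study's point.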

  5. Comparison of multimedia system and conventional method in patients’ selecting prosthetic treatment

    Directory of Open Access Journals (Sweden)

    Baghai R

    2010-12-01

    Background and Aims: Selecting an appropriate treatment plan is one of the most critical aspects of dental treatment. The purpose of this study was to compare a multimedia system and a conventional method in patients' selection of prosthetic treatment, and the time consumed. Materials and Methods: Ninety patients were randomly divided into three groups. Patients in group A were instructed once using the conventional method of the dental office and once using the multimedia system, and the time was measured in seconds from the beginning of the instruction until the patient had come to a decision. The patients were asked about their satisfaction with the method used. In group B, patients were instructed using only the conventional method, whereas patients in group C were only exposed to the software. The data were analyzed with the paired t-test (in group A) and the t-test and Mann-Whitney test (in groups B and C). Results: There was a significant difference between the multimedia system and the conventional method in group A and also between groups B and C (P<0.001). In group A and between groups B and C, patients' satisfaction with the multimedia system was better. However, in the comparison between groups B and C, the multimedia system did not have a significant effect on the treatment selection score (P=0.08). Conclusion: Using a multimedia system is recommended due to its high ability to answer a large number of patients' questions, as well as in terms of marketing.

  6. Comparison of methods for measuring flux gradients in type II superconductors

    International Nuclear Information System (INIS)

    Kroeger, D.M.; Koch, C.C.; Charlesworth, J.P.

    1975-01-01

    A comparison has been made of four methods of measuring the critical current density Jc in hysteretic type II superconductors, having a wide range of K and Jc values, in magnetic fields up to 70 kOe. Two of the methods, (a) resistive measurements and (b) magnetization measurements, were carried out in static magnetic fields. The other two methods involved analysis of the response of the sample to a small alternating field superimposed on the static field. The response was analyzed either (c) by measuring the third-harmonic content or (d) by integration of the waveform to obtain a measure of flux penetration. The results are discussed with reference to the agreement between the different techniques and the consistency of the critical-state hypothesis on which all these techniques are based. It is concluded that flux-penetration measurements by method (d) provide the most detailed information about Jc, but that one must be wary of minor failures of the critical-state hypothesis. Best results are likely to be obtained by using more than one method. (U.S.)
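
    For magnetization measurements of this kind, Jc is commonly extracted from the width of the hysteresis loop via the Bean critical-state model; whether this paper uses exactly this formula is an assumption, and the numbers below are purely illustrative.

    ```python
    # Bean critical-state estimate of Jc from a magnetization hysteresis loop.
    # For a cylinder of diameter d (CGS units): Jc = 20 * dM / d,
    # with dM in emu/cm^3, d in cm, Jc in A/cm^2.
    def bean_jc(delta_m_emu_cm3, diameter_cm):
        return 20.0 * delta_m_emu_cm3 / diameter_cm

    delta_m = 150.0   # width of the hysteresis loop, emu/cm^3 (hypothetical)
    d = 0.1           # sample diameter, cm (hypothetical)
    print(f"Jc = {bean_jc(delta_m, d):.0f} A/cm^2")
    ```

    The formula assumes the critical-state hypothesis holds exactly, which is precisely where the abstract warns that minor failures can bias the result.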

  7. A Comparison Study for DNA Motif Modeling on Protein Binding Microarray

    KAUST Repository

    Wong, Ka-Chun; Li, Yue; Peng, Chengbin; Wong, Hau-San

    2015-01-01

    Transcription Factor Binding Sites (TFBSs) are relatively short (5-15 bp) and degenerate. Identifying them is a computationally challenging task. In particular, Protein Binding Microarray (PBM) is a high-throughput platform that can measure the DNA binding preference of a protein in a comprehensive and unbiased manner; for instance, a typical PBM experiment can measure binding signal intensities of a protein to all possible DNA k-mers (k = 8-10). Since proteins can often bind to DNA with different binding intensities, one of the major challenges is to build motif models which can fully capture the quantitative binding affinity data. To learn DNA motif models from the non-convex objective function landscape, several optimization methods are compared and applied to the PBM motif model building problem. In particular, representative methods from different optimization paradigms have been chosen for modeling performance comparison on hundreds of PBM datasets. The results suggest that the multimodal optimization methods are very effective for capturing the binding preference information from PBM data. In particular, we observe a general performance improvement using di-nucleotide modeling over mono-nucleotide modeling. In addition, the models learned by the best-performing method are applied to two independent applications: PBM probe rotation testing and ChIP-Seq peak sequence prediction, demonstrating its biological applicability.

  9. Quantification of [18F]FDOPA and [18F]-3-OMFD in pig serum - a new TLC method in comparison with HPLC

    International Nuclear Information System (INIS)

    Pawelke, B.; Fuechtner, F.; Bergmann, R.; Brust, P.

    2002-01-01

    A novel TLC method for convenient quantification of [18F]FDOPA and [18F]-3-OMFD in routine operation was developed and the results assessed in comparison with an HPLC analysis. The two methods were found to correlate well. [18F]fluoride, which resisted determination on HPLC RP-18 columns, was also quantified by TLC. (orig.)

  10. COMPARISON OF ULTRASOUND IMAGE FILTERING METHODS BY MEANS OF MULTIVARIABLE KURTOSIS

    Directory of Open Access Journals (Sweden)

    Mariusz Nieniewski

    2017-06-01

    Comparison of the quality of despeckled US medical images is complicated because there is no image of a human body that would be free of speckles and could serve as a reference. A number of image metrics are currently used for comparison of filtering methods; however, they do not satisfactorily represent the visual quality of images or the medical expert's satisfaction with them. This paper proposes an innovative use of relative multivariate kurtosis for the evaluation of the most important edges in an image. Multivariate kurtosis allows one to introduce an order among the filtered images and can be used as one of the metrics for image quality evaluation. At present there is no method which would jointly consider the individual metrics. Furthermore, these metrics are typically defined by comparing the noisy original and filtered images, which is incorrect since the noisy original cannot serve as a gold standard. In contrast, the proposed kurtosis is an absolute measure, calculated independently of any reference image, and it agrees with the medical expert's satisfaction to a large extent. The paper presents a numerical procedure for calculating kurtosis and describes the results of such calculations for a computer-generated noisy image, images of a general-purpose phantom and a cyst phantom, as well as real-life images of the thyroid and carotid artery obtained with a SonixTouch ultrasound machine. 16 different methods of image despeckling are compared via kurtosis. The paper shows that visually more satisfactory despeckling results are associated with higher kurtosis, and to a certain degree kurtosis can be used as a single metric for evaluation of image quality.
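
    Mardia's multivariate kurtosis is the usual sample statistic behind such an evaluation: the mean of the squared Mahalanobis distances of the samples from their centroid. Whether this matches the paper's exact "relative multivariate kurtosis" definition is an assumption; here it is normalized by the Gaussian reference value p(p+2), which is 8 for p = 2. A pure-Python 2-D sketch on hypothetical points:

    ```python
    def mardia_kurtosis_2d(pts):
        """Mardia's multivariate kurtosis b2 for a list of 2-D points."""
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        # biased sample covariance entries
        sxx = sum((p[0] - mx) ** 2 for p in pts) / n
        syy = sum((p[1] - my) ** 2 for p in pts) / n
        sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        det = sxx * syy - sxy * sxy
        # inverse of the 2x2 covariance matrix
        ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
        total = 0.0
        for x, y in pts:
            dx, dy = x - mx, y - my
            # squared Mahalanobis distance of this point
            d2 = dx * (ixx * dx + ixy * dy) + dy * (ixy * dx + iyy * dy)
            total += d2 * d2
        return total / n

    pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2), (-1, -1), (0.5, 0.3), (1.5, 0.7)]
    b2 = mardia_kurtosis_2d(pts)
    print(f"Mardia kurtosis = {b2:.3f}, relative = {b2 / 8:.3f}")
    ```

    Because the statistic needs no reference image, it can rank despeckling filters directly, which is the property the paper exploits.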

  11. Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements

    Science.gov (United States)

    Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura

    2017-10-01

    This paper aims to perform volumetric calculations for different mineral aggregates using different methods of analysis and to compare the results. For these comparative studies, two licensed software packages were chosen, TopoLT 11.2 and Surfer 13. TopoLT is a program dedicated to the development of topographic and cadastral plans: 3D terrain models, level curves, calculation of cut and fill volumes, and georeferencing of images. Surfer 13, produced by Golden Software in 1983, is used mainly in fields such as agriculture, construction, geophysics, geotechnical engineering, GIS and water resources. It can also generate GRID terrain models, produce density maps using the isoline method, perform volumetric calculations and build 3D maps, and it can read different file types, including SHP, DXF and XLSX. This paper presents a comparison of volumetric calculations performed with TopoLT by two methods: one in which a 3D model is chosen for both the bottom and the top surface, and one in which a 3D terrain model is chosen for the bottom surface and a separate 3D model for the top surface. The two variants are compared with data obtained from volumetric calculations performed with Surfer 13 generating a GRID terrain model. The topographical measurements were performed with Leica GPS 1200 Series equipment. Measurements were made using ROMPOS, the Romanian position determination system, which ensures accurate positioning through the National Network of GNSS Permanent Stations. GPS data processing was performed with the Leica Geo Office Combined program. For the volumetric calculations, the GPS points used are in the 1970 stereographic projection system, with altitudes referred to the 1975 Black Sea system.
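
    The GRID-based volume computation performed by such packages essentially sums prisms over grid cells: each cell contributes its mean height difference times the cell area. A minimal sketch under that assumption (the elevation grids below are hypothetical, not the paper's survey data):

    ```python
    def cut_fill_volume(bottom, top, cell_size):
        """Return (cut, fill) volumes between two equally sampled elevation grids."""
        cut = fill = 0.0
        rows, cols = len(bottom), len(bottom[0])
        for i in range(rows - 1):
            for j in range(cols - 1):
                # mean elevation difference over the four corners of the cell
                dz = sum(top[r][c] - bottom[r][c]
                         for r in (i, i + 1) for c in (j, j + 1)) / 4.0
                v = dz * cell_size * cell_size
                if v >= 0:
                    fill += v
                else:
                    cut -= v
        return cut, fill

    bottom = [[100.0, 100.2], [100.1, 100.3]]   # ground surface, m
    top    = [[101.0, 101.2], [101.1, 101.3]]   # stockpile surface, 1 m above
    cut, fill = cut_fill_volume(bottom, top, cell_size=5.0)
    print(f"cut = {cut:.2f} m^3, fill = {fill:.2f} m^3")
    ```

    Real packages refine this with triangulated cells and boundary clipping, which is one source of the differences the paper measures between programs.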

  12. Comparison of conventional and nuclear-medical examination methods in the field of nephro-urology

    International Nuclear Information System (INIS)

    Pfannenstiel, P.

    1975-07-01

    The main part of this study is a further simplification of the 131I-hippuran total body clearance, a method first described by Oberhausen (1968). As a result of the study, the clearance can now be calculated with only one scintillation detector after i.v. injection of 131I-hippuran. Essential for this simplification is the exact localisation of the detector at the right or left shoulder region of the patient. The unilateral clearance is calculated from the total body clearance in conjunction with the simultaneously registered isotope renogram. This method was routinely applied to 4,138 patients between November 15th, 1971 and March 31st, 1975. In special groups of patients, sequential renal studies with the gamma camera, renal rectilinear scans, xenon-133 washout curves from the kidneys and radioimmunoassays of angiotensin were performed for comparison. All data were analysed in view of other clinical and radiological data. Recommendations for rational procedures in the diagnosis and control of therapy of the most common nephro-urological disorders were derived from a joint study of 16 hospitals analysing 1,979 cases. (orig.) [de

  13. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in low-energy shielding calculations, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 was used as the deterministic code, while MCNP was used as the Monte Carlo code. Point and cylindrical sources with a slab shield were defined, allowing comparison of the capabilities of the Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.

  14. Comparison of deterministic and Monte Carlo methods in shielding design

    International Nuclear Information System (INIS)

    Oliveira, A. D.; Oliveira, C.

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in low-energy shielding calculations, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 was used as the deterministic code, while MCNP was used as the Monte Carlo code. Point and cylindrical sources with a slab shield were defined, allowing comparison of the capabilities of the Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions. (authors)
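
    The deterministic point-kernel estimate at the heart of such comparisons combines narrow-beam exponential attenuation with a build-up factor B to account for scattered photons. A sketch with assumed, illustrative values for the attenuation coefficient and build-up factor (in practice both come from tabulations, as in MicroShield):

    ```python
    import math

    def shielded_intensity(i0, mu_cm, thickness_cm, buildup):
        """Narrow-beam exponential attenuation scaled by a build-up factor B."""
        return i0 * buildup * math.exp(-mu_cm * thickness_cm)

    i0 = 1000.0   # unshielded intensity (arbitrary units)
    mu = 0.46     # linear attenuation coefficient of the slab, 1/cm (assumed)
    x = 10.0      # slab thickness, cm
    B = 3.2       # build-up factor for this mu*x (assumed; tabulated in practice)
    transmitted = shielded_intensity(i0, mu, x, B)
    print(f"transmitted = {transmitted:.2f}")
    ```

    The abstract's caveat is visible here: if B, tabulated for high energies, is extrapolated to low energies or to a different geometry, the multiplicative error passes straight through to the result.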

  15. A new method for the recovery and evidential comparison of footwear impressions using 3D structured light scanning.

    Science.gov (United States)

    Thompson, T J U; Norris, P

    2018-05-01

    Footwear impressions are one of the most common forms of evidence to be found at a crime scene, and can potentially offer the investigator a wealth of intelligence. Our aim is to highlight a new and improved technique for the recovery of footwear impressions, using three-dimensional structured light scanning. Results from this preliminary study demonstrate that this new approach is non-destructive, safe to use and is fast, reliable and accurate. Further, since this is a digital method, there is also the option of digital comparison between items of footwear and footwear impressions, and an increased ability to share recovered footwear impressions between forensic staff thus speeding up the investigation. Copyright © 2018 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  16. Comparison of template registration methods for multi-site meta-analysis of brain morphometry

    Science.gov (United States)

    Faskowitz, Joshua; de Zubicaray, Greig I.; McMahon, Katie L.; Wright, Margaret J.; Thompson, Paul M.; Jahanshad, Neda

    2016-03-01

    Neuroimaging consortia such as ENIGMA can significantly improve power to discover factors that affect the human brain by pooling statistical inferences across cohorts to draw generalized conclusions from populations around the world. Voxelwise analyses such as tensor-based morphometry also allow an unbiased search for effects throughout the brain. Even so, such consortium-based analyses are limited by a lack of high-powered methods to harmonize voxelwise information across study populations and scanners. While the simplest approach may be to map all images to a single standard space, the benefits of cohort-specific templates have long been established. Here we studied methods to pool voxel-wise data across sites using templates customized for each cohort but providing a meaningful common space across all studies for voxelwise comparisons. As non-linear 3D MRI registrations represent mappings between images at millimeter resolution, we need to consider the reliability of these mappings. To evaluate these mappings, we calculated test-retest statistics on the volumetric maps of expansion and contraction. Further, we created study-specific brain templates for ten T1-weighted MRI datasets, and a common space from four study-specific templates. We evaluated the efficacy of using a two-step registration framework versus a single standard space. We found that the two-step framework more reliably mapped subjects to a common space.

  17. A Comparison of Case Study and Traditional Teaching Methods for Improvement of Oral Communication and Critical-Thinking Skills

    Science.gov (United States)

    Noblitt, Lynnette; Vance, Diane E.; Smith, Michelle L. DePoy

    2010-01-01

    This study compares a traditional paper presentation approach and a case study method for the development and improvement of oral communication skills and critical-thinking skills in a class of junior forensic science majors. A rubric for rating performance in these skills was designed on the basis of the oral communication competencies developed…

  18. A practical comparison of methods to assess sum-of-products

    International Nuclear Information System (INIS)

    Rauzy, A.; Chatelet, E.; Dutuit, Y.; Berenguer, C.

    2003-01-01

    Many methods have been proposed in the literature to assess the probability of a sum-of-products. This problem has been shown to be computationally hard (namely, #P-hard). Therefore, algorithms can be compared only from a practical point of view. In this article, we first propose an efficient implementation of the pivotal decomposition method. This kind of algorithm is widely used in the Artificial Intelligence framework, but it is unfortunately almost never considered in the reliability engineering framework, except as a pedagogical tool. We report experimental results showing that this method is in general much more efficient than classical methods that rewrite the sum-of-products under study into an equivalent sum of disjoint products. We then derive from our method a factorization algorithm to be used as a preprocessing step for binary decision diagrams. We show by means of experimental results that this latter approach outperforms the former.
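The pivotal (Shannon) decomposition the abstract refers to can be sketched for the probability of a sum-of-products over independent Boolean variables: pick a pivot variable, condition the formula on it being true and false, and recurse. This is a naive recursion without the caching and variable-ordering heuristics an efficient implementation would need; the literal encoding (signed integers) is an illustrative choice.

```python
def prob_sop(products, probs):
    """Probability that a sum-of-products (DNF) formula is true.

    products : iterable of products, each a set of literals; a positive
               int v means variable v, a negative int -v means NOT v.
    probs    : dict mapping variable v to P(v is true), variables independent.
    """
    prods0 = frozenset(frozenset(p) for p in products)

    def rec(prods):
        if not prods:
            return 0.0                # empty sum is false
        if frozenset() in prods:
            return 1.0                # an empty product is true
        # pivot on some variable occurring in the formula
        v = abs(next(iter(next(iter(prods)))))

        def cond(val):
            # simplify the formula under v = val
            lit, neg = (v, -v) if val else (-v, v)
            out = set()
            for p in prods:
                if lit in p:
                    out.add(p - {lit})        # literal satisfied, drop it
                elif neg not in p:
                    out.add(p)                # product unaffected by v
                # products containing the falsified literal are removed
            return frozenset(out)

        return probs[v] * rec(cond(True)) + (1 - probs[v]) * rec(cond(False))

    return rec(prods0)
```

For example, for x1 OR x2 with both probabilities 0.5 the recursion yields 0.75, matching inclusion-exclusion.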

  19. Transperineal Prostate Core Needle Biopsy: A Comparison of Coaxial Versus Noncoaxial Method in a Randomised Trial

    International Nuclear Information System (INIS)

    Babaei Jandaghi, Ali; Habibzadeh, Habib; Falahatkar, Siavash; Heidarzadeh, Abtin; Pourghorban, Ramin

    2016-01-01

    Purpose: To compare the procedural time and complication rate of the coaxial technique with those of the noncoaxial technique in transperineal prostate biopsy. Materials and Methods: Transperineal prostate biopsy with coaxial (first group, n = 120) and noncoaxial (second group, n = 120) methods was performed randomly in 240 patients. The procedural time was recorded, the level of pain experienced during the procedure was assessed on a visual analogue scale (VAS), and the rate of complications was evaluated for the two methods. Results: The procedural time was significantly shorter in the first group (p < 0.001). In the first group, pain occurred less frequently (p = 0.002), with a significantly lower VAS score (p < 0.002). No patient had post-procedural fever. Haematuria (p = 0.029) and haemorrhage from the biopsy site (p < 0.001) were seen less frequently in the first group. There was no significant difference in the rate of urethral haemorrhage between the two groups (p = 0.059). Urinary retention occurred less commonly in the first group (p = 0.029). No significant difference was seen in the rate of dysuria between the two groups (p = 0.078). Conclusions: Transperineal prostate biopsy using a coaxial needle is a faster and less painful method with a lower rate of complications than the conventional noncoaxial technique.

  20. Transperineal Prostate Core Needle Biopsy: A Comparison of Coaxial Versus Noncoaxial Method in a Randomised Trial

    Energy Technology Data Exchange (ETDEWEB)

    Babaei Jandaghi, Ali [Guilan University of Medical Sciences, Department of Radiology, Poursina Hospital (Iran, Islamic Republic of); Habibzadeh, Habib; Falahatkar, Siavash [Guilan University of Medical Sciences, Urology Research Center, Razi Hospital (Iran, Islamic Republic of); Heidarzadeh, Abtin [Guilan University of Medical Sciences, Department of Community Medicine (Iran, Islamic Republic of); Pourghorban, Ramin, E-mail: ramin-p2005@yahoo.com [Shahid Beheshti University of Medical Sciences, Department of Radiology, Modarres Hospital (Iran, Islamic Republic of)

    2016-12-15

    Purpose: To compare the procedural time and complication rate of the coaxial technique with those of the noncoaxial technique in transperineal prostate biopsy. Materials and Methods: Transperineal prostate biopsy with coaxial (first group, n = 120) and noncoaxial (second group, n = 120) methods was performed randomly in 240 patients. The procedural time was recorded, the level of pain experienced during the procedure was assessed on a visual analogue scale (VAS), and the rate of complications was evaluated for the two methods. Results: The procedural time was significantly shorter in the first group (p < 0.001). In the first group, pain occurred less frequently (p = 0.002), with a significantly lower VAS score (p < 0.002). No patient had post-procedural fever. Haematuria (p = 0.029) and haemorrhage from the biopsy site (p < 0.001) were seen less frequently in the first group. There was no significant difference in the rate of urethral haemorrhage between the two groups (p = 0.059). Urinary retention occurred less commonly in the first group (p = 0.029). No significant difference was seen in the rate of dysuria between the two groups (p = 0.078). Conclusions: Transperineal prostate biopsy using a coaxial needle is a faster and less painful method with a lower rate of complications than the conventional noncoaxial technique.

  1. Undergraduate prosthetics and orthotics teaching methods: A baseline for international comparison.

    Science.gov (United States)

    Aminian, Gholamreza; O'Toole, John M; Mehraban, Afsoon Hassani

    2015-08-01

    Prosthetics and orthotics education is a relatively recent professional program. While there has been some work on teaching methods and strategies in international medical education, little has been published within prosthetics and orthotics. The aim was to identify the teaching and learning methods used in Bachelor-level prosthetics and orthotics programs that are given the highest priority by expert prosthetics and orthotics instructors from regions spanning a range of economic development. The study design was mixed method. The study partly documented by this article used a mixed-method approach (qualitative and quantitative methods) in which each phase provided data for the other phases. It began with an analysis of prosthetics and orthotics curriculum documents, followed by a broad survey of instructors in the field and then a modified Delphi process. The expert instructors who participated in this study gave high priority to student-centred, small-group methods that encourage critical thinking and may lead to lifelong learning. Instructors from more developed nations placed higher priority on students' independent acquisition of prosthetics and orthotics knowledge, particularly in clinical training. Student-centred approaches to prosthetics and orthotics programs may be preferred by many experts, but there appeared to be regional differences in the priority given to different teaching methods. The results of this study identify the teaching methods preferred by expert prosthetics and orthotics instructors from a variety of regions. This treatment of current instructional techniques may inform instructors' choice of teaching methods, which affects the quality of education and the professional skills of students. © The International Society for Prosthetics and Orthotics 2014.

  2. A Comparative Study on Evaluation Methods of Fluid Forces on Cartesian Grids

    Directory of Open Access Journals (Sweden)

    Taku Nonomura

    2017-01-01

    We investigate the accuracy and the computational efficiency of the numerical schemes for evaluating fluid forces in Cartesian grid systems. A comparison is made between two different types of schemes, namely, polygon-based methods and mesh-based methods, which differ in the discretization of the surface of the object. The present assessment is intended to investigate the effects of the Reynolds number, the object motion, and the complexity of the object surface. The results show that the mesh-based methods work as well as the polygon-based methods, even if the object surface is discretized in a staircase manner. In addition, the results also show that the accuracy of the mesh-based methods is strongly dependent on the evaluation of shear stresses, and thus they must be evaluated by using a reliable method, such as the ghost-cell or ghost-fluid method.

  3. Comparison between Different Extraction Methods for Determination of Primary Aromatic Amines in Food Simulant

    Directory of Open Access Journals (Sweden)

    Morteza Shahrestani

    2018-01-01

    Primary aromatic amines (PAAs) are food contaminants that may occur in packaged food. Polyurethane (PU) adhesives used in flexible packaging are the main source of PAAs: unreacted diisocyanates migrate into the foodstuff and then hydrolyze to PAAs. These PAAs include toluenediamines (TDAs) and methylenedianilines (MDAs); the selected PAAs were 2,4-TDA, 2,6-TDA, 4,4′-MDA, 2,4′-MDA, and 2,2′-MDA. PAAs have genotoxic, carcinogenic, and allergenic effects. In this study, extraction methods were applied to 3% acetic acid as a food simulant spiked with the PAAs under study. The extraction methods were liquid-liquid extraction (LLE), dispersive liquid-liquid microextraction (DLLME), and solid-phase extraction (SPE) with C18 ec (octadecyl), HR-P (styrene/divinylbenzene), and SCX (strong cationic exchange) cartridges. Extracted samples were detected and analyzed by HPLC-UV. In the comparison between methods, the SCX cartridge showed the best adsorption, with recoveries up to 91% for the polar PAAs (TDAs and MDAs). The PAAs of interest are polar and relatively soluble in water, so a cartridge with cationic exchange properties has the best adsorption and consequently the best recoveries.

  4. Experimental studies for the development of a new method for stroke volume measuring using X-ray videodensitometry

    International Nuclear Information System (INIS)

    Odenthal, H.J.

    1982-01-01

    Quantitative videodensitometry was studied with a view to its possible application as a new, non-invasive method of measuring cardiac stroke volume. To begin with, the accuracy of roentgen volumetric measurements was determined. After this, blood volume variations were measured by densitometry in five animal experiments. The findings were compared with the volumes measured by a flowmeter in the pulmonary artery. The total stroke volume was found to be proportional to the difference between the maximum and mean densitometric volume. A comparison between videodensitometry and other non-invasive methods showed that, in a stable circulatory system, the results of videodensitometry are equally reliable as, or even more reliable than, those of the conventional methods. (orig./MG) [de

  5. Development of the method for realization of spectral irradiance scale featuring system of spectral comparisons

    International Nuclear Information System (INIS)

    Skerovic, V; Zarubica, V; Aleksic, M; Zekovic, L; Belca, I

    2010-01-01

    Realization of the scale of spectral responsivity of the detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with the detector spectral responsivity calibrations by means of a primary spectrophotometric system. The linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of aperture diameter and calibration of transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  6. Development of the method for realization of spectral irradiance scale featuring system of spectral comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Skerovic, V; Zarubica, V; Aleksic, M [Directorate of measures and precious metals, Optical radiation Metrology department, Mike Alasa 14, 11000 Belgrade (Serbia); Zekovic, L; Belca, I, E-mail: vladanskerovic@dmdm.r [Faculty of Physics, Department for Applied physics and metrology, Studentski trg 12-16, 11000 Belgrade (Serbia)

    2010-10-15

    Realization of the scale of spectral responsivity of the detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with the detector spectral responsivity calibrations by means of a primary spectrophotometric system. The linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of aperture diameter and calibration of transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  7. Structural identifiability of systems biology models: a critical comparison of methods.

    Directory of Open Access Journals (Sweden)

    Oana-Teodora Chis

    Analysing the properties of a biological system through in silico experimentation requires a satisfactory mathematical representation of the system including accurate values of the model parameters. Fortunately, modern experimental techniques allow obtaining time-series data of appropriate quality which may then be used to estimate unknown parameters. However, in many cases, a subset of those parameters may not be uniquely estimated, independently of the experimental data available or the numerical techniques used for estimation. This lack of identifiability is related to the structure of the model, i.e. the system dynamics plus the observation function. Despite the interest in knowing a priori whether there is any chance of uniquely estimating all model unknown parameters, the structural identifiability analysis for general non-linear dynamic models is still an open question. There is no method amenable to every model, thus at some point we have to face the selection of one of the possibilities. This work presents a critical comparison of the currently available techniques. To this end, we perform the structural identifiability analysis of a collection of biological models. The results reveal that the generating series approach, in combination with identifiability tableaus, offers the most advantageous compromise among range of applicability, computational complexity and information provided.

  8. Scattering of atoms by a stationary sinusoidal hard wall: Rigorous treatment in (n+1) dimensions and comparison with the Rayleigh method

    International Nuclear Information System (INIS)

    Goodman, F.O.

    1977-01-01

    A rigorous treatment of the scattering of atoms by a stationary sinusoidal hard wall in (n+1) dimensions is presented, a previous treatment by Masel, Merrill, and Miller for n=1 being contained as a special case. Numerical comparisons are made with the GR method of Garcia, which incorporates the Rayleigh hypothesis. Advantages and disadvantages of both methods are discussed, and it is concluded that the Rayleigh GR method, if handled properly, will probably work satisfactorily in physically realistic cases

  9. A COMPARISON OF STUDY RESULTS OF BUSINESS ENGLISH STUDENTS IN E-LEARNING AND FACE-TO-FACE COURSES

    Directory of Open Access Journals (Sweden)

    Petr Kučera

    2012-09-01

    The paper deals with the comparison of results of students in the lessons of a Business English e-learning course with face-to-face teaching at the Faculty of Economics and Management of the CULS in Prague. E-learning as a method of instruction refers to learning using technology, such as the Internet, CD-ROMs and portable devices. A current trend in university teaching is a particular focus on e-learning methods of study, enhancing the quality and effectiveness of studies and self-study. In the paper we have analysed the current state of English for Specific Purposes (ESP) e-learning research, pointed out the results of a pilot ESP e-learning course in testing a control and an experimental group of students, and results of questionnaires on students' views of e-learning. The paper focuses on the experimental verification of the influence of e-learning on the results of both groups of students. Online study material supports an interactive form of teaching by means of a multimedia application. It could be used not only for full-time students but also for distance students and centres of lifelong learning.

  10. Realist explanatory theory building method for social epidemiology: a protocol for a mixed method multilevel study of neighbourhood context and postnatal depression.

    Science.gov (United States)

    Eastwood, John G; Jalaludin, Bin B; Kemp, Lynn A

    2014-01-01

    A recent criticism of social epidemiological studies, and multilevel studies in particular, has been a paucity of theory. We present here the protocol for a study that aims to build a theory of the social epidemiology of maternal depression. We use a critical realist approach, which is trans-disciplinary, encompassing both quantitative and qualitative traditions, and which assumes both ontological and hierarchical stratification of reality. We describe a critical realist Explanatory Theory Building Method comprising: 1) an emergent phase, 2) a construction phase, and 3) a confirmatory phase. A concurrent triangulated mixed-method multilevel cross-sectional study design is described. The Emergent Phase uses interviews, focus groups, exploratory data analysis, exploratory factor analysis, regression, and multilevel Bayesian spatial data analysis to detect and describe phenomena. Abductive and retroductive reasoning will be applied to categorical principal component analysis, exploratory factor analysis, regression, coding of concepts and categories, constant comparative analysis, drawing of conceptual networks, and situational analysis to generate theoretical concepts. The Theory Construction Phase will include: 1) defining stratified levels; 2) analytic resolution; 3) abductive reasoning; 4) comparative analysis (triangulation); 5) retroduction; 6) postulate and proposition development; 7) comparison and assessment of theories; and 8) conceptual framework and model development. The strength of the critical realist methodology described is the extent to which this paradigm is able to support the epistemological, ontological, axiological, methodological and rhetorical positions of both quantitative and qualitative research in social epidemiology. The extensive multilevel Bayesian studies, intensive qualitative studies, latent variable theory, abductive triangulation, and Inference to Best Explanation provide a strong foundation for Theory

  11. [Risk stratification of patients with diabetes mellitus undergoing coronary artery bypass grafting--a comparison of statistical methods].

    Science.gov (United States)

    Arnrich, B; Albert, A; Walter, J

    2006-01-01

    Among the coronary bypass patients in our Datamart database, we found a prevalence of 29.6% of diagnosed diabetics. 5.2% of the patients without a diagnosis of diabetes mellitus but with a fasting plasma glucose level > 125 mg/dl were defined as undiagnosed diabetics. The objective of this paper was to compare univariate methods and risk stratification techniques to determine whether undiagnosed diabetes is per se a risk factor for increased ventilation time and length of ICU stay, and for increased prevalence of resuscitation, reintubation and 30-day mortality in diabetics undergoing heart surgery. Univariate comparisons reveal that undiagnosed diabetics needed resuscitation significantly more often and had an increased ventilation time, while their length of ICU stay was significantly reduced. The significantly different distribution between the diabetic groups of 11 of the 32 attributes examined demands the use of risk stratification methods. Both risk-adjusted methods, regression and matching, confirm that undiagnosed diabetics had an increased ventilation time and an increased prevalence of resuscitation, while the length of ICU stay was not significantly reduced. A homogeneous distribution of patient characteristics across the two diabetic groups could be achieved through a statistical matching method using the propensity score. In contrast to the regression analysis, a significantly increased prevalence of reintubation in undiagnosed diabetics was found. Based on this example of undiagnosed diabetics in heart surgery, the study shows the necessity and the possibilities of risk stratification techniques in retrospective analyses, and how the potential of data collected in daily clinical practice can be used effectively.
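Propensity-score matching of the kind described above can be sketched in two steps: fit a model of treatment (here, undiagnosed diabetes) given covariates, then greedily pair each treated unit with the nearest unused control on the estimated score. The toy logistic regression, the greedy matching rule, and the caliper value below are illustrative choices, not the study's actual procedure.

```python
import math

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Tiny gradient-ascent logistic regression (the propensity model)."""
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)                      # intercept + coefficients
    for _ in range(iters):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += yi - p
            for j, xj in enumerate(xi):
                grad[j + 1] += (yi - p) * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

def propensity(w, xi):
    """Estimated probability of group membership for one unit."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

def match(scores, treated, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.

    Each treated unit is paired with the closest not-yet-used control
    whose score differs by at most `caliper`; unmatched units are dropped.
    """
    controls = [i for i, t in enumerate(treated) if not t]
    pairs, used = [], set()
    for i, t in enumerate(treated):
        if not t:
            continue
        best, best_d = None, caliper
        for j in controls:
            if j in used:
                continue
            d = abs(scores[i] - scores[j])
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

After matching, outcomes (ventilation time, reintubation, and so on) would be compared within the matched pairs rather than across the raw groups.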

  12. A new method for quantitative assessment of resilience engineering by PCA and NT approach: A case study in a process industry

    International Nuclear Information System (INIS)

    Shirali, Gh.A.; Mohammadfam, I.; Ebrahimipour, V.

    2013-01-01

    In recent years, resilience engineering (RE) has attracted widespread interest from industry as well as academia because it presents a new way of thinking about safety and accidents. Although the concept of RE has been defined in various areas, only a few studies specifically focus on how to measure it, so there is a gap in assessing resilience with quantitative methods. This research aimed at presenting a new method for the quantitative assessment of RE using a questionnaire and principal component analysis. Six resilience indicators, i.e., top management commitment, just culture, learning culture, awareness and opacity, preparedness, and flexibility, were chosen, and data on these indicators were gathered by questionnaire in 11 units of a process industry. The data were analyzed using the principal component analysis (PCA) approach. The analysis also yields scores for the resilience indicators and the process units, and the units were ranked using these scores. Consequently, the prescribed approach can identify weak indicators and process units. This is the first study to conduct a quantitative assessment in the RE area through PCA. Implementation of the proposed method would enable managers to recognize the current weaknesses of and challenges to the resilience of their system. -- Highlights: •We quantitatively measure the potential of resilience. •The results are more tangible to understand and interpret. •The method facilitates comparison of resilience state among various process units. •The method facilitates comparison of units' resilience state with the best practice
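A PCA-based composite score of the kind the study describes can be sketched as follows: standardize the indicator columns, extract the leading principal component by power iteration, and project each unit onto it. The full analysis would retain several components weighted by explained variance; keeping one component is an illustrative simplification, and the orientation rule is one common convention, not necessarily the authors'.

```python
def pca_scores(X):
    """Score units by their projection on the first principal component
    of standardized indicator scores.

    X: list of rows, one per unit, each a list of indicator values.
    Assumes every indicator varies across units (non-zero std. dev.).
    """
    n, d = len(X), len(X[0])
    # standardize each indicator (column)
    means = [sum(row[j] for row in X) / n for j in range(d)]
    sds = [(sum((row[j] - means[j]) ** 2 for row in X) / (n - 1)) ** 0.5
           for j in range(d)]
    Z = [[(row[j] - means[j]) / sds[j] for j in range(d)] for row in X]
    # sample covariance matrix of the standardized data
    C = [[sum(Z[k][i] * Z[k][j] for k in range(n)) / (n - 1)
          for j in range(d)] for i in range(d)]
    # leading eigenvector by power iteration
    v = [1.0] * d
    for _ in range(200):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # orient so the largest-magnitude loading is positive (sign convention)
    if v[max(range(d), key=lambda i: abs(v[i]))] < 0:
        v = [-x for x in v]
    return [sum(z * vi for z, vi in zip(row, v)) for row in Z]
```

Ranking the units then amounts to sorting them by the returned scores, with weak indicators identified from the component loadings.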

  13. A photon dominated region code comparison study

    NARCIS (Netherlands)

    Roellig, M.; Abel, N. P.; Bell, T.; Bensch, F.; Black, J.; Ferland, G. J.; Jonkheid, B.; Kamp, I.; Kaufman, M. J.; Le Bourlot, J.; Le Petit, F.; Meijerink, R.; Morata, O.; Ossenkopf, Volker; Roueff, E.; Shaw, G.; Spaans, M.; Sternberg, A.; Stutzki, J.; Thi, W.-F.; van Dishoeck, E. F.; van Hoof, P. A. M.; Viti, S.; Wolfire, M. G.

    Aims. We present a comparison between independent computer codes, modeling the physics and chemistry of interstellar photon dominated regions (PDRs). Our goal was to understand the mutual differences in the PDR codes and their effects on the physical and chemical structure of the model clouds, and

  14. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

    We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
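The "embedded pair" idea underlying codes like DOP8/DOP853 (take one step with two methods of different order and use their difference as a local error estimate to control the step size) can be illustrated with the simplest such pair: Heun's order-2 method with an Euler order-1 estimator. This is a toy scalar solver for illustration only, not the authors' eighth-order code; the step-size controller constants are conventional choices.

```python
import math

def heun_euler_adaptive(f, t0, y0, t_end, tol=1e-6):
    """Integrate y' = f(t, y) with a minimal embedded Runge-Kutta pair.

    The order-2 (Heun) and order-1 (Euler) solutions share the stage k1;
    their difference estimates the local error and drives step control.
    """
    t, y, h = t0, y0, (t_end - t0) / 100
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + h * (k1 + k2) / 2     # order-2 (Heun) solution
        y_low = y + h * k1                 # order-1 (Euler) solution
        err = abs(y_high - y_low)          # embedded local error estimate
        if err <= tol * max(1.0, abs(y)):
            t, y = t + h, y_high           # accept the step
        # standard controller: grow/shrink h toward the error target
        h *= min(2.0, max(0.1, 0.9 * math.sqrt(tol / (err + 1e-30))))
    return y
```

Production pairs such as DOP853 apply exactly this accept/reject logic, but with many more stages, higher orders, and vector-valued states.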

  15. Comparison of the auxiliary function method and the discrete-ordinate method for solving the radiative transfer equation for light scattering.

    Science.gov (United States)

    da Silva, Anabela; Elias, Mady; Andraud, Christine; Lafait, Jacques

    2003-12-01

    Two methods for solving the radiative transfer equation are compared with the aim of computing the angular distribution of the light scattered by a heterogeneous scattering medium composed of a single flat layer or a multilayer. The first method [auxiliary function method (AFM)], recently developed, uses an auxiliary function and leads to an exact solution; the second [discrete-ordinate method (DOM)] is based on the channel concept and needs an angular discretization. The comparison is applied to two different media presenting two typical and extreme scattering behaviors: Rayleigh and Mie scattering with smooth or very anisotropic phase functions, respectively. A very good agreement between the predictions of the two methods is observed in both cases. The larger the number of channels used in the DOM, the better the agreement. The principal advantages and limitations of each method are also listed.

  16. Scaling Professional Problems of Teachers in Turkey with Paired Comparison Method

    Directory of Open Access Journals (Sweden)

    Yasemin Duygu ESEN

    2017-03-01

    In this study, teachers' professional problems were investigated and their relative importance was scaled with the paired comparison method. The study was carried out in a survey model. The study group consisted of 484 teachers working in public schools accredited by the Ministry of National Education (MEB) in Turkey. "The Teacher Professional Problems Survey", developed by the researchers, was used as the data collection tool. In the data analysis, the scaling method based on the third conditional equation of Thurstone's law of comparative judgement was used. According to the results, teachers' professional problems comprise teacher training and teacher quality, employee rights and financial problems, the decline of professional reputation, problems with MEB policies, problems with union activities, workload, problems with school administration, physical conditions and lack of infrastructure, problems with parents, and problems with students. According to the teachers, the most significant problem is MEB educational policy. This is followed by the decline of professional reputation; physical conditions and lack of infrastructure; problems with students; employee rights and financial problems; problems with school administration; teacher training and teacher quality; problems with parents; workload; and problems with union activities. When the professional problems were analyzed by the seniority variable, there was little difference in scale values: while teachers with 0-10 years of experience consider the decline of professional reputation the most important problem, teachers with 11-45 years of experience put problems with MEB policies first.
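The Case V form of Thurstone's law of comparative judgement, the family of equations applied in the study above, can be sketched in a few lines: convert each pairwise preference proportion to a standard-normal z-score and average down the columns to obtain interval-scale values. The 0.01/0.99 clipping bound is an illustrative guard against unanimous comparisons (whose z-scores are infinite), not the authors' choice.

```python
from statistics import NormalDist

def thurstone_case5(P):
    """Thurstone Case V scale values from a paired-comparison matrix.

    P[i][j] = proportion of judges preferring item j over item i
    (so P[j][i] = 1 - P[i][j] and the diagonal is 0.5).
    Returns scale values shifted so the smallest is 0.
    """
    n = len(P)
    nd = NormalDist()
    # z-score matrix, clipping proportions away from 0 and 1
    z = [[nd.inv_cdf(min(max(P[i][j], 0.01), 0.99)) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    # Case V: the scale value of item j is the column mean of z
    scale = [sum(z[i][j] for i in range(n)) / n for j in range(n)]
    m = min(scale)
    return [s - m for s in scale]
```

Ranking the scaled items then reproduces the kind of ordering the study reports, with the distances between scale values indicating how much more serious one problem is judged than another.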

  17. Comparison of methods for calculating the health costs of endocrine disrupters: a case study on triclosan.

    Science.gov (United States)

    Prichystalova, Radka; Fini, Jean-Baptiste; Trasande, Leonardo; Bellanger, Martine; Demeneix, Barbara; Maxim, Laura

    2017-06-09

    Socio-economic analysis is currently used in the European Union as part of the regulatory process under the Registration, Evaluation and Authorisation of Chemicals (REACH) Regulation, with the aim of assessing and managing risks from dangerous chemicals. The political impact of socio-economic analysis is potentially high in the authorisation and restriction procedures; however, current socio-economic analysis dossiers submitted under REACH are very heterogeneous in methodology and quality. Furthermore, the economic literature is not very helpful for regulatory purposes: most published calculations of health costs associated with chemical exposures use epidemiological studies as input data, but such studies are rarely available for most substances. Almost all of the data used in REACH dossiers come from toxicological studies. This paper assesses the use of integrated probabilistic risk assessment, based on toxicological data, for calculating the health costs associated with the endocrine-disrupting effects of triclosan. The results are compared with those obtained using the population attributable fraction, based on epidemiological data. The results based on the integrated probabilistic risk assessment indicated that 4894 men could have reproductive deficits based on the decreased vas deferens weights observed in rats, 0 cases of changed T3 levels, and 0 cases of girls with early pubertal development. The results obtained with the population attributable fraction method showed 7,199,228 cases of obesity per year, 281,923 girls per year with early pubertal development, and 88,957 to 303,759 cases per year with increased total T3 hormone levels. The economic costs associated with increased BMI due to TCS exposure could be calculated. Direct health costs were estimated at €5.8 billion per year. The two methods give very different results for the same effects. The choice of a toxicological-based or an epidemiological-based method in the

  18. How does social comparison within a self-help group influence adjustment to chronic illness? A longitudinal study.

    Science.gov (United States)

    Dibb, Bridget; Yardley, Lucy

    2006-09-01

    Despite the growing popularity of self-help groups for people with chronic illness, there has been surprisingly little research into how these may support adjustment to illness. This study investigated the role that social comparison, occurring within a self-help group, may play in adjustment to chronic illness. A model of adjustment based on control process theory and response shift theory was tested to determine whether social comparisons predicted adjustment after controlling for the catalyst for adjustment (disease severity) and antecedents (demographic and psychological factors). A sample of 301 people with Ménière's disease who were members of the Ménière's Society UK completed questionnaires at baseline and 10-month follow-up assessing adjustment, defined for this study as functional and goal-oriented quality of life. At baseline, they also completed measures of the predictor variables i.e. the antecedents (age, sex, living circumstances, duration of self-help group membership, self-esteem, optimism and perceived control over illness), the catalyst (severity of vertigo, tinnitus, hearing loss and fullness in the ear) and mechanisms of social comparison within the self-help group. The social comparison variables included the extent to which self-help group resources were used, and whether reading about other members' experiences induced positive or negative feelings. Cross-sectional results showed that positive social comparison was indeed associated with better adjustment after controlling for all the other baseline variables, while negative social comparison was associated with worse adjustment. However, greater levels of social comparison at baseline were associated with a deteriorating quality of life over the 10-month follow-up period. Alternative explanations for these findings are discussed.

  19. Comparison of two solution ways of district heating control: Using analysis methods, using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Balate, J.; Sysala, T. [Technical Univ., Zlin (Czech Republic). Dept. of Automation and Control Technology

    1997-12-31

    District Heating Systems (DHS), also called Centralized Heat Supply Systems (CHSS), are being developed in large cities in step with their growth. The systems are formed by enlarging the networks distributing heat to consumers while gradually interconnecting the heat sources built over time. The heat is distributed to consumers through circular networks supplied by several cooperating heat sources, i.e. by combined power and heating plants and by heating plants. The complicated process of heat production and supply technology requires a system approach when designing the concept of automated control. The paper compares two ways of solving this task: using analysis methods and using artificial intelligence methods. (orig.)

  20. A comparison of cervical smear adequacy using either the cytobrush ...

    African Journals Online (AJOL)

    A comparison of cervical smear adequacy using either the cytobrush or the Ayre spatula: a practice audit. ... The purpose of this study was to compare the adequacy of cervical smears taken with the Ayre spatula as opposed to the cytobrush. Methods: This was a retrospective analytical study. One sampler, an experienced ...

  1. [Comparison of bite marks and teeth features using 2D and 3D methods].

    Science.gov (United States)

    Lorkiewicz-Muszyńska, Dorota; Glapiński, Mariusz; Zaba, Czesław; Łabecka, Marzena

    2011-01-01

    The nature of bite marks is complex. They are found at the scene of crime on different materials and surfaces - not only on the human body and corpse, but also on food products and material objects. Human bites on skin are sometimes difficult to interpret and analyze because of the specific character of skin - elastic and distortable - and because different areas of the human body have different surfaces and curvatures. A bite mark left at the scene of crime can be highly helpful in leading investigators to the perpetrator. The study was performed to establish: 1) whether bite marks exhibit variations in the accuracy of impressions on different materials; 2) whether it is possible to use the 3D method in the process of identifying an individual based on the comparison of bite marks revealed at the scene with 3D scans of dental casts; 3) whether application of the 3D method allows for elimination of secondary photographic distortion of bite marks. The authors carried out experiments on simulated cases. Five volunteers bit various materials with different surfaces. Experimental bite marks were collected with emphasis on the differentiation of materials. Subsequently, dental impressions were taken from the five volunteers in order to prepare five sets of dental casts (the maxilla and the mandible). The biting edges of the teeth were impressed in wax to create an imprint. The samples of dental casts, corresponding wax bite impressions and bite marks from different materials were scanned with 2D and 3D scanners, and photographs were taken. All of these were examined in detail and then compared using the 2D and 3D methods. The findings were: 1) bite marks exhibit variations in the accuracy of impression on different materials, with the most legible reproduction seen on cheese; 2) in the comparison of bite marks, the 3D method and 3D scans of dental casts are highly accurate; 3) the 3D method helps to eliminate secondary photographic distortion of bite marks.

  2. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface whose peaks grow in height as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface with dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method at the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6 and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, whereas for the Weierstrass-Mandelbrot fractal function method the skewness takes both positive and negative values, fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic and has prominent randomness, which makes it suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has
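    The midpoint displacement construction can be sketched in one dimension as follows; the parameterization (Gaussian displacements whose scale shrinks by 2^-H per level, with Hurst exponent H and fractal dimension D = 2 - H for a profile) is a common textbook variant, not necessarily the exact one used in the study:

```python
import random

def midpoint_displacement(n_levels, hurst, seed=None, init_scale=1.0):
    """1-D fractal profile by recursive midpoint displacement.

    hurst in (0, 1); smaller values give rougher profiles.
    Returns 2**n_levels + 1 heights with both endpoints fixed at 0.
    """
    rng = random.Random(seed)
    pts = [0.0, 0.0]            # endpoints of the initial segment
    scale = init_scale
    for _ in range(n_levels):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            nxt.append(a)
            # displace each midpoint by a Gaussian of the current scale
            nxt.append(0.5 * (a + b) + rng.gauss(0.0, scale))
        nxt.append(pts[-1])
        pts = nxt
        scale *= 2.0 ** (-hurst)  # halving the step shrinks the displacement
    return pts

profile = midpoint_displacement(n_levels=8, hurst=0.6, seed=42)
print(len(profile))  # 257
```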

  3. Comparing Diagnostic Accuracy of Cognitive Screening Instruments: A Weighted Comparison Approach

    Directory of Open Access Journals (Sweden)

    A.J. Larner

    2013-03-01

    Background/Aims: There are many cognitive screening instruments available to clinicians when assessing patients' cognitive function, but the best way to compare the diagnostic utility of these tests is uncertain. One method is to undertake a weighted comparison, which takes into account the difference in sensitivity and specificity of two tests, the relative clinical misclassification costs of true- and false-positive diagnoses, and also disease prevalence. Methods: Data were examined from four pragmatic diagnostic accuracy studies from one clinic which compared the Mini-Mental State Examination (MMSE) with the Addenbrooke's Cognitive Examination-Revised (ACE-R), the Montreal Cognitive Assessment (MoCA), the Test Your Memory (TYM) test, and the Mini-Mental Parkinson (MMP), respectively. Results: Weighted comparison calculations suggested a net benefit for the ACE-R, MoCA, and MMP compared to the MMSE, but a net loss for the TYM test compared to the MMSE. Conclusion: Routine incorporation of weighted comparison or other similar net benefit measures into diagnostic accuracy studies merits consideration, to better inform clinicians of the relative value of cognitive screening instruments.
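    The abstract does not give the exact weighted-comparison formula; one common formulation of such a net-benefit measure, combining the sensitivity and specificity differences with prevalence and a relative misclassification cost, might be sketched as follows (all names and example numbers are illustrative):

```python
def weighted_comparison(dsens, dspec, prevalence, relative_cost):
    """Net benefit of test 1 over test 2 (positive favours test 1).

    dsens, dspec  -- differences in sensitivity and specificity (test1 - test2)
    prevalence    -- prior probability of disease in the clinic population
    relative_cost -- cost of a false positive relative to a false negative
    """
    return dsens + relative_cost * ((1.0 - prevalence) / prevalence) * dspec

# Illustrative: a test trading 5% sensitivity for 10% specificity,
# at 25% prevalence, with a false positive costing 0.1 of a false negative:
wc = weighted_comparison(dsens=-0.05, dspec=0.10, prevalence=0.25, relative_cost=0.1)
print(round(wc, 3))  # -0.02
```

    Under these assumed weights the sensitivity loss outweighs the specificity gain, i.e. a net loss, the same qualitative outcome reported for the TYM test.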

  4. Comparison of immersion density and improved microstructural characterization methods for measuring swelling in small irradiated disk specimens

    International Nuclear Information System (INIS)

    Sawai, T.; Suzuki, M.; Hishinuma, A.; Maziasz, P.J.

    1992-01-01

    The procedure of obtaining microstructural data from reactor-irradiated specimens has been carefully checked for accuracy by comparison of swelling data obtained from transmission electron microscopy (TEM) observations of cavities with density-change data measured using the Precision Densitometer at Oak Ridge National Laboratory (ORNL). Comparison of data measured by both methods on duplicate or, in some cases, on the same specimen has shown some appreciable discrepancies for US/Japan collaborative experiments irradiated in the High Flux Isotope Reactor (HFIR). The contamination spot separation (CSS) method was used in the past to determine the thickness of a TEM foil. Recent work has revealed an appreciable error in this method that can result in an overestimation of the foil thickness. This error causes lower swelling values to be measured by TEM microstructural observation relative to the Precision Densitometer. An improved method is proposed for determining the foil thickness by the CSS method, which includes a correction for the systematic overestimation of foil thickness. (orig.)

  5. Consensus building for interlaboratory studies, key comparisons, and meta-analysis

    Science.gov (United States)

    Koepke, Amanda; Lafarge, Thomas; Possolo, Antonio; Toman, Blaza

    2017-06-01

    Interlaboratory studies in measurement science, including key comparisons, and meta-analyses in several fields, including medicine, serve to intercompare measurement results obtained independently, and typically produce a consensus value for the common measurand that blends the values measured by the participants. Since interlaboratory studies and meta-analyses reveal and quantify differences between measured values, regardless of the underlying causes for such differences, they also provide so-called ‘top-down’ evaluations of measurement uncertainty. Measured values are often substantially over-dispersed by comparison with their individual, stated uncertainties, thus suggesting the existence of yet unrecognized sources of uncertainty (dark uncertainty). We contrast two different approaches to take dark uncertainty into account both in the computation of consensus values and in the evaluation of the associated uncertainty, which have traditionally been preferred by different scientific communities. One inflates the stated uncertainties by a multiplicative factor. The other adds laboratory-specific ‘effects’ to the value of the measurand. After distinguishing what we call recipe-based and model-based approaches to data reductions in interlaboratory studies, we state six guiding principles that should inform such reductions. These principles favor model-based approaches that expose and facilitate the critical assessment of validating assumptions, and give preeminence to substantive criteria to determine which measurement results to include, and which to exclude, as opposed to purely statistical considerations, and also how to weigh them. Following an overview of maximum likelihood methods, three general purpose procedures for data reduction are described in detail, including explanations of how the consensus value and degrees of equivalence are computed, and the associated uncertainty evaluated: the DerSimonian-Laird procedure; a hierarchical Bayesian
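    The DerSimonian-Laird procedure named above is a standard random-effects estimator; a compact sketch for a set of laboratory values with stated uncertainties (variable names are illustrative):

```python
def dersimonian_laird(values, variances):
    """Random-effects consensus value via the DerSimonian-Laird estimator.

    values    -- measured values reported by the laboratories/studies
    variances -- squares of their stated standard uncertainties
    Returns (consensus, standard_uncertainty, tau2), where tau2 estimates
    the between-laboratory variance ("dark uncertainty").
    """
    k = len(values)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * xi for wi, xi in zip(w, values)) / sw
    q = sum(wi * (xi - fixed) ** 2 for wi, xi in zip(w, values))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)          # moment estimate, floored at 0
    w_star = [1.0 / (v + tau2) for v in variances]
    consensus = sum(wi * xi for wi, xi in zip(w_star, values)) / sum(w_star)
    return consensus, (1.0 / sum(w_star)) ** 0.5, tau2

consensus, u, tau2 = dersimonian_laird([10.0, 12.0, 11.0], [1.0, 1.0, 1.0])
print(consensus, tau2)  # 11.0 0.0
```

    When the dispersion of the values exceeds what the stated uncertainties explain, tau2 becomes positive and inflates the uncertainty of the consensus value.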

  6. Analysis of radioactive corrosion test specimens by means of ICP-MS. Comparison with earlier methods

    International Nuclear Information System (INIS)

    Forsyth, Roy

    1997-07-01

    In June 1992, an ICP-MS instrument (Inductively Coupled Plasma-Mass Spectrometry) was commissioned for use with radioactive sample solutions at Studsvik Nuclear's Hot Cell Laboratory. For conventional environmental samples the instrument permits the simultaneous analysis of many trace elements, but the software used to evaluate the mass spectra is based on a library of isotopic compositions relevant only for elements in the lithosphere. Fission products and actinides, however, have isotopic compositions which differ significantly from the natural elements and which also vary with the burnup of the nuclear fuel specimen. Consequently, a spreadsheet had to be developed which could evaluate the mass spectra with these isotopic compositions. Following these preparations, a large number of samples (about 200) from SKB's experimental programme for the study of spent fuel corrosion were analyzed by the ICP-MS technique. Many of these samples were archive solutions of samples taken earlier in the programme. This report presents a comparison of the analytical results for uranium, plutonium, cesium, strontium and technetium obtained by the ICP-MS technique and by the previously used analytical methods. For the three fission products, satisfactory agreement between the results from the various methods was obtained, but for uranium and plutonium the ICP-MS method gave results 10-20% higher than the conventional methods. The comparison programme has also shown, not unexpectedly, that significant losses of plutonium from solution had occurred in the archive solutions during storage, by precipitation and/or absorption. Such losses can be expected for the other actinides as well, and consequently all analytical results for actinides in older archive solutions must be treated with great caution. 9 refs

  7. Legal Teaching Methods to Diverse Student Cohorts: A Comparison between the United Kingdom, the United States, Australia and New Zealand

    Science.gov (United States)

    Kraal, Diane

    2017-01-01

    This article makes a comparison across the unique educational settings of law and business schools in the United Kingdom, the United States, Australia and New Zealand to highlight differences in teaching methods necessary for culturally and ethnically mixed student cohorts derived from high migration, student mobility, higher education rankings…

  8. Comparative study on γ-ray spectrum by several filtering method

    International Nuclear Information System (INIS)

    Yuan Xinyu; Liu Liangjun; Zhou Jianliang

    2011-01-01

    A comparative study was conducted on gamma-ray spectra processed with a number of widely used smoothing methods, in order to assess their filtering effect. The results showed that energy-domain filtering widens peaks and increases peak overlap in the γ-ray spectrum, so the filter and its parameters must be chosen carefully in the frequency domain. Wavelet transformation preserves the high-frequency components of the signal well. An improved threshold method was shown by comparison to combine the advantages of the hard and soft threshold methods, making it suitable for the detection of weak peaks. A new filter based on a gravity-model approach was also put forward, with its denoising level assessed by the standard deviation. This method not only preserves the signal and the net peak areas well, but also attains better results with a simple computer program. (authors)
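    The hybrid of hard and soft thresholding mentioned above can be illustrated with a generic semi-soft threshold function; this is a textbook form, not necessarily the authors' exact improved method:

```python
def semisoft_threshold(w, lam, alpha=0.5):
    """Hybrid wavelet threshold between hard (alpha=0) and soft (alpha=1).

    Coefficients with |w| <= lam are zeroed (noise suppression, as in both
    hard and soft thresholding); larger ones are shrunk by only alpha*lam,
    preserving peak amplitude better than pure soft thresholding.
    """
    if abs(w) <= lam:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    return sign * (abs(w) - alpha * lam)

coeffs = [0.2, -0.4, 1.5, -2.0, 3.0]
print([round(semisoft_threshold(c, lam=0.5, alpha=0.5), 2) for c in coeffs])
# [0.0, 0.0, 1.25, -1.75, 2.75]
```

    In a full denoising pipeline this function would be applied to the detail coefficients of a wavelet decomposition before reconstruction.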

  9. Quantitative comparison of analysis methods for spectroscopic optical coherence tomography: reply to comment

    NARCIS (Netherlands)

    Bosschaart, Nienke; van Leeuwen, Ton; Aalders, Maurice C.G.; Faber, Dirk

    2014-01-01

    We reply to the comment by Kraszewski et al on “Quantitative comparison of analysis methods for spectroscopic optical coherence tomography.” We present additional simulations evaluating the proposed window function. We conclude that our simulations show good qualitative agreement with the results of

  10. Quantitative analysis and efficiency study of PSD methods for a LaBr{sub 3}:Ce detector

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Ming; Cang, Jirong [Key Laboratory of Particle & Radiation Imaging(Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Zeng, Zhi, E-mail: zengzhi@tsinghua.edu.cn [Key Laboratory of Particle & Radiation Imaging(Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Yue, Xiaoguang; Cheng, Jianping; Liu, Yinong; Ma, Hao; Li, Junli [Key Laboratory of Particle & Radiation Imaging(Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)

    2016-03-21

    The LaBr{sub 3}:Ce scintillator has been widely studied for nuclear spectroscopy because of its excellent energy resolution (<3% at 662 keV) and time resolution (~300 ps). Despite these promising properties, the intrinsic radiation background of LaBr{sub 3}:Ce is a critical issue, and pulse shape discrimination (PSD) has been shown to be an efficient method to suppress the alpha background from {sup 227}Ac. In this paper, the charge comparison method (CCM) for alpha and gamma discrimination in LaBr{sub 3}:Ce is quantitatively analysed and compared with two other typical PSD methods using digital pulse processing. The algorithm parameters and discrimination efficiency are calculated for each method. Moreover, for the CCM, the correlation between the CCM feature value distribution and the total charge (energy) is studied, and a fitting equation for the correlation is inferred and experimentally verified. Using these equations, an energy-dependent threshold can be chosen to optimize the discrimination efficiency. Additionally, the experimental results show a potential application to low-activity, high-energy γ measurement by suppressing the alpha background.
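    The charge comparison method reduces each digitized pulse to a tail-to-total charge ratio; a minimal sketch with toy pulses (gate positions and pulse shapes are purely illustrative, and which species carries more tail light depends on the scintillator):

```python
def ccm_feature(pulse, gate_split, baseline=0.0):
    """Charge comparison feature: tail charge over total charge.

    pulse      -- digitized samples of one detector pulse
    gate_split -- sample index separating the fast gate from the tail gate
    """
    samples = [s - baseline for s in pulse]
    total = sum(samples)
    tail = sum(samples[gate_split:])
    return tail / total if total else 0.0

# Toy pulses: here the "alpha-like" pulse carries relatively more charge
# in its tail, so the two populations separate on the feature axis.
gamma_like = [0.0, 8.0, 10.0, 5.0, 2.0, 1.0, 0.5, 0.2]
alpha_like = [0.0, 6.0, 9.0, 6.0, 4.0, 3.0, 2.0, 1.0]
print(ccm_feature(gamma_like, 4) < ccm_feature(alpha_like, 4))  # True
```

    An energy-dependent discrimination threshold, as proposed in the paper, would then be a cut on this feature that varies with the total charge.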

  11. A Comparison of Two Methods for Recruiting Children with an Intellectual Disability

    Science.gov (United States)

    Adams, Dawn; Handley, Louise; Heald, Mary; Simkiss, Doug; Jones, Alison; Walls, Emily; Oliver, Chris

    2017-01-01

    Background: Recruitment is a widely cited barrier of representative intellectual disability research, yet it is rarely studied. This study aims to document the rates of recruiting children with intellectual disabilities using two methods and discuss the impact of such methods on sample characteristics. Methods: Questionnaire completion rates are…

  12. Comparison of measurement methods for benzene and toluene

    Science.gov (United States)

    Wideqvist, U.; Vesely, V.; Johansson, C.; Potter, A.; Brorström-Lundén, E.; Sjöberg, K.; Jonsson, T.

    Diffusive sampling and active (pumped) sampling (tubes filled with Tenax TA or Carbopack B) were compared with an automatic BTX instrument (Chrompack, GC/FID) for measurements of benzene and toluene. The measurements were made at differing pollution levels and under different weather conditions at a roof-top site and in a densely trafficked street canyon in Stockholm, Sweden. The BTX instrument was used as the reference method for comparison with the other methods. Considering all data, the Perkin-Elmer diffusive samplers, containing Tenax TA and assuming a constant uptake rate of 0.406 cm3 min-1, showed about 30% higher benzene values compared to the BTX instrument. This discrepancy may be explained by a dose-dependent uptake rate, with higher uptake rates at lower dose, as suggested by laboratory experiments presented in the literature. After correction by applying the relationship between uptake rate and dose suggested by Roche et al. (Atmos. Environ. 33 (1999) 1905), the two methods agreed almost perfectly. For toluene there was much better agreement between the two methods, and no sign of a dose-dependent uptake could be seen. The mean concentrations and 95% confidence intervals of all toluene measurements (67 values) were (10.80±1.6) μg m-3 for diffusive sampling and (11.3±1.6) μg m-3 for the BTX instrument, respectively. The overall ratio between the concentrations obtained using diffusive sampling and the BTX instrument was 0.91±0.07 (95% confidence interval). Tenax TA was found to be equal to Carbopack B for measuring benzene and toluene in this concentration range, although it has been proposed not to be optimal for benzene. There was also good agreement between the active samplers and the BTX instrument.
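    The conversion from collected mass to an air concentration for a diffusive sampler follows directly from the uptake rate; a sketch using the benzene uptake rate quoted above (the collected mass and exposure time are illustrative):

```python
def diffusive_concentration(mass_ng, uptake_cm3_min, minutes):
    """Time-weighted average concentration from a diffusive sampler.

    C = collected mass / (uptake rate x exposure time); with mass in ng and
    uptake in cm3/min the result is in ng/cm3, converted here to ug/m3.
    """
    c_ng_per_cm3 = mass_ng / (uptake_cm3_min * minutes)
    return c_ng_per_cm3 * 1000.0  # 1 ng/cm3 = 1000 ug/m3

# One week (10080 min) at the constant benzene uptake rate of 0.406 cm3/min:
print(round(diffusive_concentration(8.2, 0.406, 10080), 2))  # 2.0
```

    The dose-dependent correction discussed in the abstract would replace the constant uptake rate with a function of the accumulated dose, per Roche et al.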

  13. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    2011-03-01

    Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI: 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI: 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI: 0, 1.3] for Kenya (2007). Incidence trends were similar across all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine the best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR and to guide whether incidence assays can produce reliable estimates of national HIV incidence.

  14. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    Science.gov (United States)

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternative finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the maximum-curvature point in the plot of traction versus γ as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate for reconstructing low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction quality compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEF) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and ∼22%, respectively, as compared to Reg-FTTC. Selecting an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
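    The abstract defines the optimal parameter γ* as the maximum-curvature point of the traction-versus-γ plot; a crude discrete version of that selection rule (not the paper's actual implementation) could look like this:

```python
import math

def max_curvature_gamma(gammas, traction_norms):
    """Pick the regularization parameter at the point of maximum curvature
    of the traction-norm-versus-gamma curve (discrete second differences).

    Assumes gammas are sampled on a roughly uniform logarithmic grid.
    """
    x = [math.log10(g) for g in gammas]
    y = traction_norms
    best_i, best_k = 1, -1.0
    for i in range(1, len(x) - 1):
        d1 = (y[i + 1] - y[i - 1]) / (x[i + 1] - x[i - 1])
        d2 = (y[i + 1] - 2.0 * y[i] + y[i - 1]) / ((x[i + 1] - x[i]) * (x[i] - x[i - 1]))
        k = abs(d2) / (1.0 + d1 * d1) ** 1.5   # curvature of the sampled curve
        if k > best_k:
            best_i, best_k = i, k
    return gammas[best_i]

# Synthetic example: the traction norm drops sharply past the "knee" at 1e-2.
print(max_curvature_gamma([1e-4, 1e-3, 1e-2, 1e-1, 1.0],
                          [10.0, 9.8, 9.5, 5.0, 1.0]))  # 0.01
```

    In practice the curve would be densely sampled and smoothed before the curvature is evaluated, since noise in the norms shifts the detected knee.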

  15. Comparison of Swedish and Norwegian Use of Cone-Beam Computed Tomography: a Questionnaire Study

    Directory of Open Access Journals (Sweden)

    Jerker Edén Strindberg

    2015-12-01

    Objectives: In some countries, cone-beam computed tomography (CBCT) in dentistry may be used by dentists other than specialists in radiology. The number of CBCT units purchased for patient examinations is growing rapidly, so knowledge of how to use them is very important. The aim was to compare the outcome of an investigation on the use of CBCT in Sweden with a previous Norwegian study, specifically regarding technical aspects. Material and Methods: The questionnaire contained 45 questions, including 35 comparable to those sent to Norwegian clinics one year earlier. Results were based on an inter-comparison of the outcomes of the two questionnaire studies. Results: The response rate in Sweden was 71%. In Sweden, most CBCT examinations were performed by dental nurses, while in Norway they were performed by specialists. More than two-thirds of the CBCT units had a scout-image function, regularly used in both Sweden (79%) and Norway (75%). In Sweden 4%, and in Norway 41%, of the respondents did not wait for the report from the radiographic specialist before initiating treatment. Conclusions: The bilateral comparison showed an overall similarity between the two countries. The survey gave explicit and important knowledge of the need for education and training of the whole team, since the radiation dose to the patient can vary considerably for the same kind of radiographic examination. It is essential to establish quality assurance protocols with defined responsibilities in the team in order to maintain high diagnostic accuracy for all examinations when using CBCT for patient examinations.

  16. The Comparison Study of Neutron Activation Analysis and Fission Track Technique for Uranium Determination

    International Nuclear Information System (INIS)

    Sirinuntavid, Alice; Rodthongkom, Chouvana

    2007-08-01

    Comparison between neutron activation analysis (NAA) and the fission track technique for uranium determination in solid samples was studied using standard reference materials: ore, coal fly ash and soil. For NAA, epithermal neutrons were used for the activation irradiation, and either the 74.5 keV gamma line from U-239 or the 277.7 keV gamma line from Np-239 was measured. For samples with high uranium content, NAA with the 74.5 keV gamma measurement gave more precise results than the 277.7 keV gamma measurement. NAA with the 277.7 keV gamma measurement gave higher sensitivity and precision for samples with low uranium content (less than 10 ppm), although this procedure required a longer neutron irradiation and analysis time. Comparing the uranium results obtained by NAA and by the fission track technique, no significant difference was found within the 95% confidence level.

  17. Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.

    Science.gov (United States)

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished within the participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time in IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracing. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than when applying the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracing method was a standard (60)Co solution. The sources were prepared from the mixed (60)Co + (99)Tc solution, and a general extrapolation curve of the type Nβ(Tc-99)/M(Tc-99) = f[1 - ε(Co-60)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  18. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    For the prediction of future events there exist a number of methods usable in managerial practice. The decision on which of them should be used in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. The paper performs a practical application and a consequent comparison of the results of two selected methods: a statistical method and a deductive method. Both methods were used for predicting wine exports and imports in (from) the Czech Republic. The prediction was made in 2003 and related to the economic years 2003/2004, 2004/2005, 2005/2006 and 2006/2007, for which it was compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized; in the authors' opinion, the most important influence was the integration of the Czech Republic into the EU on 1 May 2004. By contrast, the statistical method of time-series analysis did not take the integration into account, which follows from its principle: statistics only calculates from past data and cannot incorporate the influence of irregular future conditions such as the EU integration. Because of this, the prediction based on the deductive method was more optimistic and more precise in terms of its difference from the real development in the given field.
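    The abstract does not specify which time-series model was used; as a minimal illustration of why a purely data-driven extrapolation cannot anticipate a structural break such as EU accession, consider simple exponential smoothing, whose forecast is built only from past values (the series below is illustrative, not the wine trade data):

```python
def ses_forecast(series, alpha):
    """Simple exponential smoothing; the flat forecast for any horizon
    equals the last smoothed level, so it extrapolates the past only."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1.0 - alpha) * level
    return level

history = [100.0, 104.0, 103.0, 108.0, 110.0]
print(round(ses_forecast(history, alpha=0.4), 2))  # 106.7
```

    Whatever happens after the last observation, such as a sudden tariff change on accession, leaves the forecast untouched, which is the limitation the abstract attributes to the statistical method.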

  19. A comparison between self-reported and GIS-based proxies of residential exposure to environmental pollution in a case-control study on lung cancer.

    Science.gov (United States)

    Cordioli, M; Ranzi, A; Freni Sterrantino, A; Erspamer, L; Razzini, G; Ferrari, U; Gatti, M G; Bonora, K; Artioli, F; Goldoni, C A; Lauriola, P

    2014-06-01

    In epidemiological studies both questionnaire results and GIS modeling have been used to assess exposure to environmental risk factors. Nevertheless, few studies have used both these techniques to evaluate the degree of agreement between different exposure assessment methodologies. As part of a case-control study on lung cancer, we present a comparison between self-reported and GIS-derived proxies of residential exposure to environmental pollution. 649 subjects were asked to fill out a questionnaire and give information about residential history and perceived exposure. Using GIS, for each residence we evaluated land use patterns, proximity to major roads and exposure to industrial pollution. We then compared the GIS exposure-index values among groups created on the basis of questionnaire responses. Our results showed a relatively high agreement between the two methods. Although none of these methods is the "exposure gold standard", understanding similarities, weaknesses and strengths of each method is essential to strengthen epidemiological evidence. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Safety impacts of platform tram stops on pedestrians in mixed traffic operation: A comparison group before-after crash study.

    Science.gov (United States)

    Naznin, Farhana; Currie, Graham; Logan, David; Sarvi, Majid

    2016-01-01

    Tram stops in mixed traffic environments present a variety of safety, accessibility and transport efficiency challenges. In Melbourne, Australia, the hundred-year-old electric tram system is progressively being modernized to improve passenger accessibility. Platform stops, incorporating raised platforms for level entry into low-floor trams, are being retrofitted system-wide to replace older design stops. The aim of this study was to investigate the safety impacts of platform stops over older design stops (i.e. Melbourne safety zone tram stops) on pedestrians in the context of mixed traffic tram operation in Melbourne, using an advanced before-after crash analysis approach, the comparison group (CG) method. The CG method evaluates safety impacts by taking into account the general trends in safety and the unobserved factors at treatment and comparison sites that can alter the outcomes of a simple before-after analysis. The results showed that pedestrian-involved all-injury crashes fell by 43% after platform stop installation. This paper also explores a concern that the conventional CG method might underestimate safety impacts as a result of large differences in passenger stop use between treatment and comparison sites, suggesting differences in crash risk exposure. To adjust for this, a modified analysis explored crash rates (crash counts per 10,000 stop passengers) for each site. The adjusted results suggested greater reductions in pedestrian-involved crashes after platform stop installation: an 81% reduction in pedestrian-involved all-injury crashes and an 86% reduction in pedestrian-involved fatal and serious injury (FSI) crashes, both significant at the 95% level. Overall, the results suggest that platform stops have considerable safety benefits for pedestrians. Implications for policy and areas for future research are explored. Copyright © 2015 Elsevier Ltd. All rights reserved.
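The comparison-group before-after logic described above can be sketched in a few lines: the comparison sites supply the background trend, and the treatment sites' "after" count is judged against that trend-adjusted expectation. All crash counts below are invented for illustration, not the study's data.

```python
# Comparison-group (CG) before-after estimate of treatment effectiveness.
def cg_effectiveness_index(treat_before, treat_after, comp_before, comp_after):
    # Expected "after" count at treatment sites if only the background trend applied:
    expected_after = treat_before * (comp_after / comp_before)
    return treat_after / expected_after  # <1 means crashes fell beyond the trend

theta = cg_effectiveness_index(treat_before=40, treat_after=20,
                               comp_before=100, comp_after=88)
reduction_pct = (1 - theta) * 100  # ~43% reduction for these invented counts
```

The exposure adjustment the paper describes amounts to replacing the raw counts with rates (crashes per 10,000 stop passengers) before applying the same formula.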

  1. Effect of preparation conditions on Nickel Zinc Ferrite nanoparticles: A comparison between sol–gel auto combustion and co-precipitation methods

    Directory of Open Access Journals (Sweden)

    Manju Kurian

    2016-09-01

    Full Text Available The experimental conditions used in the preparation of nanocrystalline mixed ferrite materials play an important role in the particle size of the product. In the present work, a comparison is made between the sol–gel auto-combustion and co-precipitation methods by preparing Nickel Zinc Ferrite (Ni0.5Zn0.5Fe2O4) nanoparticles. The prepared ferrite samples were calcined at different temperatures and characterized by using standard methods. X-ray diffraction analysis indicated the formation of single-phase ferrite nanoparticles for samples calcined at 500 °C. The lattice parameter range of 8.32–8.49 Å confirmed the cubic spinel structure. The average crystallite size estimated from the X-ray diffractogram was found to be between 17 and 40 nm. The IR spectra showed two main absorption bands, the high frequency band ν1 around 600 cm−1 and the low frequency band ν2 around 400 cm−1, arising from the tetrahedral (A) and octahedral (B) interstitial sites in the spinel lattice. TEM pictures showed particles in the nanometric range, confirming the XRD data. The studies revealed that the sol–gel auto-combustion method was superior to the co-precipitation method for producing single-phase nanoparticles with smaller crystallite size.
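Crystallite size from X-ray line broadening, as quoted above, is conventionally estimated with the Scherrer equation; the abstract does not state which relation was used, so the sketch below is an assumption for illustration, with invented peak values.

```python
import math

# Scherrer estimate of crystallite size from XRD peak broadening:
#   D = K * lambda / (beta * cos(theta))
def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    beta = math.radians(fwhm_deg)             # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation, hypothetical spinel (311) peak near 2-theta = 35.5 deg:
d = scherrer_size_nm(wavelength_nm=0.15406, fwhm_deg=0.45, two_theta_deg=35.5)
```

For these assumed inputs the estimate lands near the lower end of the 17–40 nm range reported in the abstract.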

  2. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search are determined not only by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score compared with the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
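The ROC evaluation used above reduces, as a single summary number, to the AUC: the probability that a randomly chosen structurally related pair is ranked above a random unrelated pair. A minimal sketch, with invented alignment scores rather than real SSEARCH output:

```python
# AUC via pairwise comparison of scores (Mann-Whitney formulation).
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

related = [310.0, 120.0, 95.0]   # hypothetical scores for structurally related pairs
unrelated = [60.0, 95.0, 40.0]   # hypothetical scores for unrelated pairs
area = auc(related, unrelated)   # 1.0 would mean perfect separation
```

Comparing e-value and Z-score statistics then amounts to computing this area for each ranking over the same benchmark pairs.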

  3. Study of a method based on TLD detectors for in-phantom dosimetry in BNCT

    Energy Technology Data Exchange (ETDEWEB)

    Gambarini, G. [Dept. of Physics of the Univ., Via Celoria 16, 20133 Milan (Italy); INFN, Natl. Inst. of Nuclear Physics, Via Celoria 16, 20133 Milan (Italy); Klamert, V. [Dept. of Nuclear Eng. of Polytechnic, CESNEF, Via Ponzio 34/3, 20133 Milan (Italy); Agosteo, S. [INFN, Natl. Inst. of Nuclear Physics, Via Celoria 16, 20133 Milan (Italy); Dept. of Nuclear Eng. of Polytechnic, CESNEF, Via Ponzio 34/3, 20133 Milan (Italy); Birattari, C.; Gay, S. [Dept. of Physics of the Univ., Via Celoria 16, 20133 Milan (Italy); INFN, Natl. Inst. of Nuclear Physics, Via Celoria 16, 20133 Milan (Italy); Rosi, G. [FIS-ION, ENEA, Casaccia, Via Anguillarese 301, 00060 Santa Maria di Galeria, Rome (Italy); Scolari, L. [Dept. of Physics of the Univ., Via Celoria 16, 20133 Milan (Italy); INFN, Natl. Inst. of Nuclear Physics, Via Celoria 16, 20133 Milan (Italy)

    2004-07-01

    A method has been developed, based on thermoluminescent dosemeters (TLD), aimed at measuring the absorbed dose in tissue-equivalent phantoms exposed to thermal or epithermal neutrons, separating the contributions of various secondary radiation generated by neutrons. The proposed method takes advantage of the very low sensitivity of CaF{sub 2}:Tm (TLD-300) to low energy neutrons and to the different responses to thermal neutrons of LiF:Mg,Ti dosemeters with different {sup 6}Li percentage (TLD-100, TLD-700, TLD-600). The comparison of the results with those obtained by means of gel dosemeters and activation foils has confirmed the reliability of the method. The experimental modalities allowing reliable results have been studied. The glow curves of TLD-300 after gamma or neutron irradiation have been compared; moreover, both internal irradiation effect and energy dependence have been investigated. For TLD-600, TLD-100 and TLD-700, the suitable fluence limits have been determined in order to avoid radiation damage and loss of linearity. (authors)

  4. A comparison theorem for the SOR iterative method

    Science.gov (United States)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than that of the SOR method and the Gauss-Seidel method if the relaxation parameter ω ∈ (0,1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
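For reference, the SOR iteration discussed above updates each unknown with a relaxed Gauss-Seidel sweep; ω in (0,1] is the under-relaxed regime considered in the paper. The 2×2 system below is an arbitrary diagonally dominant example, not taken from the paper.

```python
# Successive over-relaxation (SOR) for Ax = b with relaxation parameter omega.
def sor(A, b, omega, iterations):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            # omega = 1 recovers plain Gauss-Seidel:
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = sor(A, b, omega=0.9, iterations=60)  # converges to (1/11, 7/11)
```

The spectral-radius comparison in the paper is precisely about how fast such sweeps contract the error for different iteration matrices.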

  5. Rapid descriptive sensory methods – Comparison of Free Multiple Sorting, Partial Napping, Napping, Flash Profiling and conventional profiling

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Meinert, Lene

    2012-01-01

    Two new rapid descriptive sensory evaluation methods are introduced to the field of food sensory evaluation. The first method, free multiple sorting, allows subjects to perform ad libitum free sortings, until they feel that no more relevant dissimilarities among products remain. The second method is a modal restriction of Napping to specific sensory modalities, directing sensation and still allowing a holistic approach to products. The new methods are compared to Flash Profiling, Napping and conventional descriptive sensory profiling. Evaluations are performed by several panels of expert assessors... are applied for the graphical validation and comparisons. This allows similar comparisons and is applicable to single-block evaluation designs such as Napping. The partial Napping allows repetitions on multiple sensory modalities, e.g. appearance, taste and mouthfeel, and shows the average...

  6. Search and foraging behaviors from movement data: A comparison of methods.

    Science.gov (United States)

    Bennison, Ashley; Bearhop, Stuart; Bodey, Thomas W; Votier, Stephen C; Grecian, W James; Wakefield, Ewan D; Hamer, Keith C; Jessopp, Mark

    2018-01-01

    Search behavior is often used as a proxy for foraging effort within studies of animal movement, despite it being only one part of the foraging process, which also includes prey capture. While methods for validating prey capture exist, many studies rely solely on behavioral annotation of animal movement data to identify search and infer prey capture attempts. However, the degree to which search correlates with prey capture is largely untested. This study applied seven behavioral annotation methods to identify search behavior from GPS tracks of northern gannets (Morus bassanus), and compared outputs to the occurrence of dives recorded by simultaneously deployed time-depth recorders. We tested how behavioral annotation methods vary in their ability to identify search behavior leading to dive events. There was considerable variation in the number of dives occurring within search areas across methods. Hidden Markov models proved to be the most successful, with 81% of all dives occurring within areas identified as search. k-means clustering and first passage time had the highest rates of dives occurring outside identified search behavior. First passage time and hidden Markov models had the lowest rates of false positives, identifying fewer search areas with no dives. All behavioral annotation methods had advantages and drawbacks in terms of the complexity of analysis and ability to reflect prey capture events while minimizing the number of false positives and false negatives. We used these results, with consideration of analytical difficulty, to provide advice on the most appropriate methods for use where prey capture behavior is not available. This study highlights a need to critically assess and carefully choose a behavioral annotation method suitable for the research question being addressed, or resulting species management frameworks established.
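The hidden Markov model approach that performed best above can be illustrated with a toy two-state model ("transit" vs "search") emitting coarse step-length classes, decoded with the Viterbi algorithm. All probabilities below are invented, not fitted values from the study.

```python
import math

# Viterbi decoding of the most likely hidden state sequence.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prev, score = max(((p, V[-2][p] + math.log(trans_p[p][s]))
                               for p in states), key=lambda t: t[1])
            V[-1][s] = score + math.log(emit_p[s][o])
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

states = ("transit", "search")
start = {"transit": 0.6, "search": 0.4}
trans = {"transit": {"transit": 0.8, "search": 0.2},
         "search":  {"transit": 0.2, "search": 0.8}}
emit = {"transit": {"long": 0.9, "short": 0.1},   # transit: mostly long steps
        "search":  {"long": 0.2, "short": 0.8}}   # search: mostly short steps
decoded = viterbi(["long", "long", "short", "short", "long"],
                  states, start, trans, emit)
# decoded -> ['transit', 'transit', 'search', 'search', 'transit']
```

In the study's setting, dives recorded by the time-depth recorders would then be checked against the segments labelled "search".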

  7. Family-Centered Care in Juvenile Justice Institutions: A Mixed Methods Study Protocol.

    Science.gov (United States)

    Simons, Inge; Mulder, Eva; Rigter, Henk; Breuk, René; van der Vaart, Wander; Vermeiren, Robert

    2016-09-12

    Treatment and rehabilitation interventions in juvenile justice institutions aim to prevent criminal reoffending by adolescents and to enhance their prospects of successful social reintegration. There is evidence that these goals are best achieved when the institution adopts a family-centered approach, involving the parents of the adolescents. The Academic Workplace Forensic Care for Youth has developed two programs for family-centered care for youth detained in groups for short-term and long-term stay, respectively. The overall aim of our study is to evaluate the family-centered care program in the first two years after the first steps of its implementation in short-term stay groups of two juvenile justice institutions in the Netherlands. The current paper discusses our study design. Based on a quantitative pilot study, we opted for a study with an explanatory sequential mixed methods design. This pilot is considered the first stage of our study. The second stage of our study includes concurrent quantitative and qualitative approaches. The quantitative part of our study is a pre-post quasi-experimental comparison of family-centered care with usual care in short-term stay groups. The qualitative part of our study involves in-depth interviews with adolescents, parents, and group workers to elaborate on the preceding quantitative pilot study and to help interpret the outcomes of the quasi-experimental quantitative part of the study. We believe that our study will result in the following findings. In the quantitative comparison of usual care with family-centered care, we assume that in the latter group, parents will be more involved with their child and with the institution, and that parents and adolescents will be more motivated to take part in therapy. In addition, we expect family-centered care to improve family interactions, to decrease parenting stress, and to reduce problem behavior among the adolescents. Finally, we assume that adolescents, parents, and the

  8. Methods of speech rate analysis: a pilot study.

    Science.gov (United States)

    Costa, Luanna Maria Oliveira; Martins-Reis, Vanessa de Oliveira; Celeste, Letícia Côrrea

    2016-01-01

    To describe the performance of fluent adults in different measures of speech rate. The study included 24 fluent adults, of both genders, speakers of Brazilian Portuguese, who were born and still living in the metropolitan region of Belo Horizonte, state of Minas Gerais, aged between 18 and 59 years. Participants were grouped by age: G1 (18-29 years), G2 (30-39 years), G3 (40-49 years), and G4 (50-59 years). The speech samples were obtained following the methodology of the Speech Fluency Assessment Protocol. In addition to the measures of speech rate proposed by the protocol (speech rate in words and syllables per minute), the speech rate in phonemes per second and the articulation rates with and without disfluencies were calculated. We used the nonparametric Friedman test and the Wilcoxon test for multiple comparisons. Groups were compared using the nonparametric Kruskal-Wallis test. The significance level was set at 5%. There were significant differences between measures of speech rate involving syllables. The multiple comparisons showed that all three measures were different. There was no effect of age on the studied measures. These findings corroborate previous studies. The inclusion of temporal acoustic measures such as speech rate in phonemes per second and articulation rates with and without disfluencies can be a complementary approach in the evaluation of speech rate.
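The rate measures named above are simple ratios of counts to durations; the sketch below shows the arithmetic, with invented sample counts rather than the study's data.

```python
# The four speech-rate measures: words/min, syllables/min, phonemes/s,
# and articulation rate with disfluent time removed.
def speech_rates(n_words, n_syllables, n_phonemes, total_s, disfluent_s):
    wpm = n_words / (total_s / 60)                       # speech rate, words/min
    spm = n_syllables / (total_s / 60)                   # speech rate, syllables/min
    pps = n_phonemes / total_s                           # speech rate, phonemes/s
    art = n_syllables / ((total_s - disfluent_s) / 60)   # articulation rate, syll/min
    return wpm, spm, pps, art

wpm, spm, pps, art = speech_rates(n_words=150, n_syllables=300,
                                  n_phonemes=900, total_s=60.0, disfluent_s=6.0)
```

Removing disfluent time always raises the articulation rate above the raw syllable rate, which is why the two are reported separately.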

  9. Comparison of methods for measuring the ion exchange capacity of a soil. Development of a quick method

    International Nuclear Information System (INIS)

    Amavis, R.

    1959-01-01

    In the course of a study on the movement of radioactive ions in soil, we had to measure the cation exchange capacity of various soil samples, as this parameter is one of the most important for assessing the extent to which radioactive ions are fixed in the ground. The object of this report is to describe the various methods used and to compare the results obtained. A colorimetric method, using Co(NH{sub 3}){sub 6}{sup 3+} as the exchangeable ion, was developed. It gives results comparable to those obtained with conventional methods, whilst considerably reducing the time necessary for the operations. (author) [fr
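The arithmetic behind a cobalt-hexammine colorimetric CEC estimate is a mass balance: the drop in solution concentration of the coloured complex, measured colorimetrically, gives the amount of trivalent cation adsorbed by the soil. The values and unit choices below are assumptions for the sketch, not the report's actual procedure.

```python
# Cation exchange capacity from the loss of a trivalent cation from solution.
def cec_cmol_per_kg(c0_mmol_l, c_eq_mmol_l, volume_l, soil_g, cation_charge=3):
    adsorbed_mmol = (c0_mmol_l - c_eq_mmol_l) * volume_l  # cation taken up by soil
    meq = adsorbed_mmol * cation_charge                   # milliequivalents of charge
    return meq / soil_g * 100                             # cmol(+)/kg == meq/100 g

cec = cec_cmol_per_kg(c0_mmol_l=5.0, c_eq_mmol_l=3.0, volume_l=0.05, soil_g=10.0)
```

The speed advantage of the colorimetric route comes from replacing repeated leaching and titration with a single absorbance reading per sample.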

  10. Comparison of the methods for calculating the interfacial heat transfer coefficient in hot stamping

    International Nuclear Information System (INIS)

    Zhao, Kunmin; Wang, Bin; Chang, Ying; Tang, Xinghui; Yan, Jianwen

    2015-01-01

    This paper presents a hot stamping experiment and three methods for calculating the Interfacial Heat Transfer Coefficient (IHTC) of 22MnB5 boron steel. Comparison of the calculation results shows an average error of 7.5% for the heat balance method, 3.7% for Beck's nonlinear inverse estimation method (Beck's method), and 10.3% for the finite-element-analysis-based optimization method (the FEA method). Beck's method is a robust and accurate method for identifying the IHTC in hot stamping applications. The numerical simulation using the IHTC identified by Beck's method can predict the temperature field with high accuracy. - Highlights: • A theoretical formula was derived for direct calculation of the IHTC. • Beck's method is a robust and accurate method for identifying the IHTC. • The finite element method can be used to identify an overall equivalent IHTC
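The heat balance idea referred to above can be sketched by treating the blank as a lumped capacitance cooling against the die, so that one time interval yields the IHTC directly. All material and temperature values below are invented for illustration; the paper's actual formula may differ.

```python
import math

# Lumped-capacitance energy balance over one cooling interval:
#   h = (m*c)/(A*dt) * ln((T1 - T_die)/(T2 - T_die))
def ihtc_heat_balance(m_kg, c_J_per_kgK, area_m2, T_die, T1, T2, dt_s):
    return (m_kg * c_J_per_kgK) / (area_m2 * dt_s) \
        * math.log((T1 - T_die) / (T2 - T_die))

# Hypothetical blank cooling from 800 to 750 deg C against a 200 deg C die:
h = ihtc_heat_balance(m_kg=0.1, c_J_per_kgK=650.0, area_m2=0.01,
                      T_die=200.0, T1=800.0, T2=750.0, dt_s=1.0)  # W/(m^2*K)
```

Beck's method improves on this by iteratively adjusting h so that a full transient simulation reproduces measured interior temperatures, rather than assuming a uniform blank temperature.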

  11. Comparison of Methods for Computing the Exchange Energy of quantum helium and hydrogen

    International Nuclear Information System (INIS)

    Cayao, J. L. C. D.

    2009-01-01

    I investigate approximation methods for finding the exchange energy of quantum helium and hydrogen. I focus on the Heitler-London, Hund-Mulliken, molecular orbital and variational methods. I use Fock-Darwin states centered at the potential minima as the single-electron wavefunctions. Using these, we build Slater determinants as the basis for the two-electron problem. I compare the methods for a two-electron double dot (quantum hydrogen) and a two-electron single dot (quantum helium) in zero and finite magnetic field. I show that the variational, Hund-Mulliken and Heitler-London methods are in agreement with the exact solutions. I also show that the exchange energy calculated by the Heitler-London (HL) method is an excellent approximation for large inter-dot distances, and that for a single dot in a magnetic field the variational method is an excellent approximation. (author)

  12. A comparison between the isotopic and difference methods for determining urea fertilizer use efficiency under trickle fertigation on tomato

    International Nuclear Information System (INIS)

    Mousavi Shalmani, M.A.; Sagheb, N.; Rafii, H.; Hobbi, M.S.; Khorosani, A.

    2002-01-01

    In order to study fertilizer use efficiency by two different methods, an experiment was conducted in a Randomized Complete Block (RCB) design with four replications, with treatments applied in plots of 35 square meters. 1200 tomato seedlings (variety Early Urbana V.F.) were planted on June 3, 1996 at distances of 50 cm. Fertilization and irrigation were performed by two fertilizer pumps (one for P and K and the other for N). The treatments N{sub 0}, N{sub 1}, N{sub 2} and N{sub 3} received 0, 287.5, 575 and 862.5 kg N/ha respectively. The {sup 14}N/{sup 15}N isotopic ratio of nitrogen gas was determined using an emission spectrometer, and the fertilizer use efficiency was determined using nuclear chemical equations. Because of the shortage of micronutrients in the soil and the unbalanced quantities of N, P and K fertilizers, the results showed that a large amount of free ions (NH{sub 4}, NO{sub 3}) remained in the soil. This disturbed the balance between the absorption of nitrogen from its different sources in the soil. Therefore, the share of the fertilizer increased; under these conditions priming effects were seen, and the difference method showed a lower fertilizer use efficiency in comparison with the isotopic method

  13. Methods for land use impact assessment: A review

    International Nuclear Information System (INIS)

    Perminova, Tataina; Sirina, Natalia; Laratte, Bertrand; Baranovskaya, Natalia; Rikhvanov, Leonid

    2016-01-01

    Many types of methods for assessing land use impact have been developed. Nevertheless, a systematic synthesis of all these approaches is necessary to highlight the most commonly used and most effective methods. Given the growing interest in this area of research, a review of the different methods of assessing land use impact (LUI) was performed using bibliometric analysis. One hundred and eighty-seven articles from the agricultural, biological, and environmental sciences were examined. According to our results, the most frequently used land use assessment methods are Life-Cycle Assessment, Material Flow Analysis/Input–Output Analysis, Environmental Impact Assessment and Ecological Footprint. Comparison of the methods allowed their specific features to be identified and led to the conclusion that a combination of several methods is the best basis for a comprehensive analysis of land use impact assessment. - Highlights: • We identified the most frequently used methods in land use impact assessment. • A comparison of the methods based on several criteria was carried out. • Agricultural land use is by far the most common area of study within the methods. • Incentive driven methods, like LCA, arouse the most interest in this field.

  15. Comparison of the effect of three autogenous bone harvesting methods on cell viability in rabbits

    Science.gov (United States)

    Moradi Haghgoo, Janet; Arabi, Seyed Reza; Hosseinipanah, Seyyed Mohammad; Solgi, Ghasem; Rastegarfard, Neda; Farhadian, Maryam

    2017-01-01

    Background. This study was designed to compare the viability of autogenous bone grafts, harvested using different methods, in order to determine the best harvesting technique with respect to more viable cells. Methods. In this animal experimental study, three harvesting methods, including manual instrument (chisel), rotary device and piezosurgery, were used for harvesting bone grafts from the lateral body of the mandible on the left and right sides of 10 rabbits. In each group, 20 bone samples were collected and their viability was assessed using MTS kit. Statistical analyses, including ANOVA and post hoc Tukey tests, were used for evaluating significant differences between the groups. Results. One-way ANOVA showed significant differences between all the groups (P=0.000). Data analysis using post hoc Tukey tests indicated that manual instrument and piezosurgery had no significant differences with regard to cell viability (P=0.749) and the cell viability in both groups was higher than that with the use of a rotary instrument (P=0.000). Conclusion. Autogenous bone grafts harvested with a manual instrument and piezosurgery had more viable cells in comparison to the bone chips harvested with a rotary device. PMID:28748046
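The one-way ANOVA used above boils down to an F ratio of between-group to within-group variance. A from-scratch version with made-up viability readings (not the study's measurements):

```python
# One-way ANOVA F statistic: between-group mean square over within-group mean square.
def one_way_anova_F(groups):
    N = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Hypothetical viability scores for chisel, piezosurgery and rotary groups:
F = one_way_anova_F([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [9.0, 10.0, 11.0]])
```

A large F (here driven by the third group's distant mean) is what triggers the post hoc Tukey comparisons reported in the abstract.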

  16. Differential scanning calorimetry method for purity determination: A case study on polycyclic aromatic hydrocarbons and chloramphenicol

    International Nuclear Information System (INIS)

    Kestens, V.; Zeleny, R.; Auclair, G.; Held, A.; Roebben, G.; Linsinger, T.P.J.

    2011-01-01

    Highlights: → Purity assessment of polycyclic aromatic hydrocarbons and chloramphenicol by DSC. → DSC results compared with traditional purity methods. → Different methods give different results, multiple method approach recommended. → DSC sensitive to impurities that have similar structures as main component. - Abstract: In this study the validity and suitability of differential scanning calorimetry (DSC) to determine the purity of selected polycyclic aromatic hydrocarbons and chloramphenicol has been investigated. The study materials were two candidate certified reference materials (CRMs), 6-methylchrysene and benzo[a]pyrene, and two different batches of commercially available highly pure chloramphenicol. The DSC results were compared with those obtained by other methods, namely gas and liquid chromatography with mass spectrometric detection, liquid chromatography with diode array detection, and quantitative nuclear magnetic resonance. The purity results obtained by these different analytical methods confirm the well-known challenges of comparing results of different method-defined measurands. In comparison with other methods, DSC has a much narrower working range. This limits the applicability of DSC as purity determination method, for instance during the assignment of the purity value of a CRM. Nevertheless, this study showed that DSC can be a powerful technique to detect impurities that are structurally very similar to the main purity component. From this point of view, and because of its good repeatability, DSC can be considered as a valuable technique to investigate the homogeneity and stability of candidate purity CRMs.

  17. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-01-01

    Full Text Available OBJECTIVES: A common method for conducting a quantitative systematic review (QSR) for observational studies related to nutritional epidemiology is the “highest versus lowest intake” method (HLM), in which only the information concerning the effect size (ES) of the highest intake category of a food item is collected, relative to its lowest category. However, in the interval collapsing method (ICM), a method suggested to enable maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. METHODS: A QSR evaluating citrus fruit intake and the risk of pancreatic cancer, which calculated the SES by using the HLM, was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. RESULTS: No significant differences were observed in the directionality of the SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. CONCLUSIONS: The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSR on nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
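The fixed-effect pooling used above is inverse-variance weighting of per-study log effect sizes; narrower confidence intervals under the ICM correspond to larger total weight. Effect sizes and standard errors below are invented, not the review's data.

```python
import math

# Fixed-effect (inverse-variance) meta-analysis of log effect sizes.
def fixed_effect_pool(log_es, se):
    w = [1.0 / s ** 2 for s in se]                        # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, log_es)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

log_rr = [math.log(0.8), math.log(0.9)]   # two hypothetical study effects (log RR)
se = [0.1, 0.2]                           # their standard errors
pooled, ci = fixed_effect_pool(log_rr, se)
```

Collapsing intervals (ICM) effectively feeds more of each study's information into these weights, which is where the narrower intervals come from.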

  18. The Semianalytical Solutions for Stiff Systems of Ordinary Differential Equations by Using Variational Iteration Method and Modified Variational Iteration Method with Comparison to Exact Solutions

    Directory of Open Access Journals (Sweden)

    Mehmet Tarik Atay

    2013-01-01

    Full Text Available The Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are used to find solutions of systems of stiff ordinary differential equations for both linear and nonlinear problems. Some examples are given to illustrate the accuracy and effectiveness of these methods, and we compare our results with exact results. In some studies related to stiff ordinary differential equations, problems were solved by the Adomian Decomposition Method, VIM, and the Homotopy Perturbation Method. Comparisons with exact solutions reveal that the VIM and the MVIM are easier to implement. In fact, these methods are promising for various systems of linear and nonlinear stiff ordinary differential equations. Furthermore, VIM, or in some cases MVIM, gives exact solutions in linear cases and very satisfactory solutions, when compared to exact solutions, for nonlinear cases, depending on the stiffness ratio of the stiff system to be solved.
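One way to see the VIM at work: for the linear test problem u' = -u, u(0) = 1, the correction functional with Lagrange multiplier -1 reads u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n'(s) + u_n(s)) ds, and each iteration adds the next Taylor term of the exact solution e^{-t}. The polynomial sketch below is a toy illustration of this single scalar case, not the paper's stiff systems.

```python
# Polynomials as coefficient lists [c0, c1, ...]; one VIM correction step.
def poly_deriv(c):
    return [i * c[i] for i in range(1, len(c))] or [0.0]

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

def poly_integrate(c):  # antiderivative with zero constant term
    return [0.0] + [c[i] / (i + 1) for i in range(len(c))]

def vim_step(u):
    residual = poly_add(poly_deriv(u), u)      # u' + u
    correction = poly_integrate(residual)      # integral of the residual from 0 to t
    return poly_add(u, [-x for x in correction])

u = [1.0]                                      # initial guess u_0(t) = 1
for _ in range(3):
    u = vim_step(u)
# u is now [1, -1, 1/2, -1/6]: the degree-3 Taylor polynomial of e^{-t}
```

For stiff systems the attraction of VIM/MVIM is exactly this: each correction is an explicit integral, with no implicit solves per step.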

  19. A simple method for the comparison of commercially available ATP hygiene-monitoring systems.

    Science.gov (United States)

    Colquhoun, K O; Timms, S; Fricker, C R

    1998-04-01

    The purpose of this study was to evaluate a methodology which could easily be used in any test laboratory in a uniform and consistent way for determining the sensitivity and reproducibility of results obtained with three ATP hygiene-monitoring systems. The test protocol discussed here allows such a comparison to be made, thereby establishing a method of benchmarking both new systems and developments of existing systems. The sensitivity of the LUMINOMETER K, PocketSwab (Charm Sciences) was found to be between 0.4 and 4.0 nmol of ATP with poor reproducibility at the 40.0 nmol level (CV, 35%). The sensitivities of the IDEXX LIGHTNING system and the Biotrace UNILITE Xcel were both between 0.04 and 0.4 nmol, with coefficients of variation (CVs) of between 9% at 0.04 nmol and 10% at 0.4 nmol for the IDEXX system and 17% at 0.04 nmol and 21% at 0.4 nmol for the Biotrace system. The three systems were tested with a range of dilutions of different food residues: orange juice, raw milk, and ground beef slurry. All three test systems allowed detection of orange juice and raw milk at dilutions of 1:1,000, although the CV of results from the Charm system (54 and 74% respectively) was poor at this dilution for both residues. The sensitivity of the test systems was poorer for ground beef slurry than it was for orange juice and raw milk. Both the Biotrace and IDEXX systems were able to detect a 1:100 dilution of beef slurry (with CVs of 17 and 10% respectively), whilst at this dilution results from the Charm system had a CV of 55%. It was possible by using the method described in this paper to rank in order of sensitivity and reproducibility the three single-shot ATP hygiene-monitoring systems investigated, with the IDEXX LIGHTNING being the best, followed by the Biotrace UNILITE Xcel, and then the Charm LUMINOMETER K, PocketSwab.
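The reproducibility figures quoted above are coefficients of variation: the sample standard deviation expressed as a percentage of the mean across replicate readings. The luminometer readings below are invented for illustration.

```python
import statistics

# Coefficient of variation (%) across replicate readings.
def cv_percent(readings):
    return statistics.stdev(readings) / statistics.mean(readings) * 100

replicate_rlu = [90.0, 100.0, 110.0]   # hypothetical replicate luminometer readings
cv = cv_percent(replicate_rlu)
```

Computing this per ATP level and per residue dilution is all the benchmarking protocol requires, which is why it transfers easily between laboratories.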

  20. Comparison of combinatorial clustering methods on pharmacological data sets represented by machine learning-selected real molecular descriptors.

    Science.gov (United States)

    Rivera-Borroto, Oscar Miguel; Marrero-Ponce, Yovani; García-de la Vega, José Manuel; Grau-Ábalo, Ricardo del Corazón

    2011-12-27

    Cluster algorithms play an important role in diversity-related tasks of modern chemoinformatics, with the widest applications being in pharmaceutical industry drug discovery programs. The performance of these grouping strategies depends on various factors such as molecular representation, mathematical method, algorithmic technique, and the statistical distribution of the data. For this reason, introduction and comparison of new methods are necessary in order to find the model that best fits the problem at hand. Earlier comparative studies report Ward's algorithm, using fingerprints for molecular description, as generally superior in this field. However, problems still remain: other types of numerical descriptions have been little exploited, the current descriptor selection strategy is trial-and-error driven, and no previous comparative studies considering a broader domain of combinatorial methods for grouping chemoinformatic data sets have been conducted. In this work, a comparison between combinatorial methods is performed, with five of them being novel in chemoinformatics. The experiments are carried out using eight data sets that are well established and validated in the medicinal chemistry literature. Each drug data set was represented by real molecular descriptors selected by machine learning techniques, which are consistent with the neighborhood principle. Statistical analysis of the results demonstrates that the pharmacological activities of the eight data sets can be modeled with a few families of 2D and 3D molecular descriptors, avoiding classification problems associated with the presence of nonrelevant features. Three out of five of the proposed cluster algorithms show superior performance over most classical algorithms and are similar (or slightly superior in the most optimistic sense) to Ward's algorithm. The usefulness of these algorithms is also assessed in a comparative experiment against potent QSAR and machine learning classifiers, where they perform
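    Comparisons of cluster algorithms like those described above need a way to quantify how similarly two methods partition the same compounds. One common choice (not necessarily the statistic used in this study) is the Rand index, the fraction of object pairs on which two partitions agree. A self-contained sketch with hypothetical cluster labels:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of object pairs on which two partitions agree
    (both place the pair together, or both place it apart)."""
    agree = 0
    pairs = list(combinations(range(len(labels_a)), 2))
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:
            agree += 1
    return agree / len(pairs)

# Hypothetical cluster assignments for 6 compounds by two algorithms
ward_like = [0, 0, 1, 1, 2, 2]
other_alg = [0, 0, 1, 2, 2, 2]
print(rand_index(ward_like, other_alg))  # → 0.8
```

    A value of 1.0 means the two algorithms produce identical groupings; values near the chance level indicate the methods see the descriptor space very differently.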

  1. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    Science.gov (United States)

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquén Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods, but different geologic models), between results from structural and stratigraphic assessment units in the North Sea, used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered); (2) the population of fields being estimated, that is, the entire parent distribution or the undiscovered resource distribution; (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth; (5) deterministic or probabilistic models; (6) data requirements; and (7) the scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 methods yield comparable results because similar procedures are employed. In mature areas such as the Neuquén Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared with the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquén Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods.

  2. Plethora or paucity: a systematic search and bibliometric study of the application and design of qualitative methods in nursing research 2008-2010.

    Science.gov (United States)

    Ball, Elaine; McLoughlin, Moira; Darvill, Angela

    2011-04-01

    Qualitative methodology has increased in application and acceptability in all research disciplines. In nursing, it is appropriate that a plethora of qualitative methods can be found, as nurses pose real-world questions about clinical, cultural and ethical issues of patient care (Johnson, 2007; Long and Johnson, 2007); yet the methods nurses readily use in pursuit of answers remain under intense scrutiny. One of the problems with qualitative methodology for nursing research is its place in the hierarchy of evidence (HOE); another is its comparison with positivist constructs of what constitutes good research, and the measurement of qualitative research against these. In order to position and strengthen its evidence base, nursing may well seek to distance itself from a qualitative perspective and utilise methods at the top of the HOE; yet given the relation of qualitative methods to nursing, this would constrain rather than broaden the profession in its search for answers and an evidence base. The comparison between qualitative and quantitative can be both mutually exclusive and rhetorical; by shifting the comparison, this study takes a more reflexive position and critically appraises qualitative methods against the standards set by qualitative researchers. By comparing the design and application of qualitative methods in nursing over a two-year period, the study examined how qualitative research stands up to independent rather than comparative scrutiny. The methods followed a four-step mixed methods approach newly constructed by the first author: 1. the scope of the research question was defined and inclusion criteria were developed; 2. synthesis tables were constructed to organise data; 3. bibliometrics configured the data; 4. studies selected for inclusion in the review were critically appraised using a critical interpretive synthesis (Dixon-Woods et al., 2006). The paper outlines the research process as well as the findings. Results showed that of the 240 papers analysed, 27% used ad hoc or no

  3. How to find non-dependent opiate users: a comparison of sampling methods in a field study of opium and heroin users

    NARCIS (Netherlands)

    Korf, D.J.; van Ginkel, P.; Benschop, A.

    2010-01-01

    Background/aim The first aim is to better understand the potentials and limitations of different sampling methods for reaching a specific, rarely studied population of drug users and for persuading them to take part in a multidisciplinary study. The second is to determine the extent to which these

  4. A Phylogeny of the Monocots, as Inferred from rbcL and atpA Sequence Variation, and a Comparison of Methods for Calculating Jackknife and Bootstrap Values

    DEFF Research Database (Denmark)

    Davis, Jerrold I.; Stevenson, Dennis W.; Petersen, Gitte

    2004-01-01

    elements of Xyridaceae. A comparison was conducted of jackknife and bootstrap values, as computed using strict-consensus (SC) and frequency-within-replicates (FWR) approaches. Jackknife values tend to be higher than bootstrap values, and for each of these methods support values obtained with the FWR...

  5. A Comparative Study of Voltage, Peak Current and Dual Current Mode Control Methods for Noninverting Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    M. Č. Bošković

    2016-06-01

    This paper presents a comparison of voltage mode control (VMC) and two current mode control (CMC) methods for the noninverting buck-boost converter. The converter control-to-output transfer function, line-to-output transfer function and output impedance are obtained for all methods by averaging the converter equations over one switching period and applying small-signal linearization. The obtained results are required for the design of a feedback compensator that keeps the system stable and robust. A comparative study of VMC, peak current mode control (PCMC) and dual current mode control (DCMC) is performed, and the performance of the closed-loop system with the obtained compensators is evaluated for each method via numerical simulations.

  6. Comparison of neutron activation analysis with other instrumental methods for elemental analysis of airborne particulate matter

    International Nuclear Information System (INIS)

    Regge, P. de; Lievens, F.; Delespaul, I.; Monsecour, M.

    1976-01-01

    A comparison of instrumental methods, including neutron activation analysis, X-ray fluorescence spectrometry, atomic absorption spectrometry and emission spectrometry, for the analysis of heavy metals in airborne particulate matter is described. The merits and drawbacks of each method for the routine analysis of a large number of samples are discussed. The sample preparation technique, calibration and statistical data relevant to each method are given. Concordant results are obtained by the different methods for Co, Cu, Ni, Pb and Zn. Agreement is poorer for Fe, Mn and V, and the results are not in agreement for the elements Cd and Cr. Using data obtained on the dust sample distributed by Euratom-ISPRA within the framework of an interlaboratory comparison, the accuracy of each method for the various elements is estimated. Neutron activation analysis was found to be the most sensitive and accurate of the non-destructive analysis methods. Only atomic absorption spectrometry has comparable sensitivity, but it requires considerable preparation work. X-ray fluorescence spectrometry is less sensitive and shows biases for Cr and V. Automatic emission spectrometry with simultaneous measurement of the beam intensities by photomultipliers is the fastest and most economical technique, though at the expense of some precision and sensitivity. (author)

  7. Comparison of photoacoustic radiometry to gas chromatography/mass spectrometry methods for monitoring chlorinated hydrocarbons

    International Nuclear Information System (INIS)

    Sollid, J.E.; Trujillo, V.L.; Limback, S.P.; Woloshun, K.A.

    1996-01-01

    A comparison of two gas chromatography/mass spectrometry (GC/MS) methods and a nondispersive infrared technique, photoacoustic radiometry (PAR), is presented in the context of field monitoring of a disposal site. A historical account of the site and early monitoring is given first to provide an overview. The intent and nature of the monitoring program changed when it was proposed to expand the Radiological Waste Site close to the Hazardous Waste Site. Both the sampling methods and the analysis techniques were refined in the course of this exercise

  8. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

    A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  9. A comparison of statistical methods for identifying out-of-date systematic reviews.

    Directory of Open Access Journals (Sweden)

    Porjai Pattanittum

    BACKGROUND: Systematic reviews (SRs) can provide accurate and reliable evidence, typically about the effectiveness of health interventions. Evidence is dynamic, and if SRs are out-of-date this information may not be useful; it may even be harmful. This study aimed to compare five statistical methods to identify out-of-date SRs. METHODS: A retrospective cohort of SRs registered in the Cochrane Pregnancy and Childbirth Group (CPCG), published between 2008 and 2010, was considered for inclusion. For each eligible CPCG review, data were extracted and "3-years previous" meta-analyses were assessed for the need to update, given the data from the most recent 3 years. Each of the five statistical methods was used, with random effects analyses throughout the study. RESULTS: Eighty reviews were included in this study; most were in the area of induction of labour. The numbers of reviews identified as being out-of-date using the Ottawa, recursive cumulative meta-analysis (CMA), and Barrowman methods were 34, 7, and 7 respectively. No reviews were identified as being out-of-date using the simulation-based power method, or the CMA for sufficiency and stability method. The overall agreement among the three discriminating statistical methods was slight (Kappa = 0.14; 95% CI 0.05 to 0.23). The recursive cumulative meta-analysis, Ottawa, and Barrowman methods were practical according to the study criteria. CONCLUSION: Our study shows that three practical statistical methods could be applied to examine the need to update SRs.
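    The "slight agreement" figure (Kappa = 0.14) is a chance-corrected agreement statistic across the three discriminating methods. A pure-Python sketch of Fleiss' kappa on hypothetical out-of-date/up-to-date judgments (not the study's data) illustrates the computation:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for N subjects each rated by n raters into k categories.
    `ratings` is a list of per-subject category counts, e.g. [2, 1] means
    two raters judged a review 'out of date' and one judged it 'current'."""
    N = len(ratings)
    n = sum(ratings[0])          # raters per subject (assumed constant)
    k = len(ratings[0])
    # mean per-subject agreement
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    # chance agreement from category marginals
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical judgments: 3 methods x 8 reviews, [out-of-date, up-to-date]
table = [[3, 0], [2, 1], [0, 3], [1, 2], [0, 3], [3, 0], [1, 2], [0, 3]]
print(round(fleiss_kappa(table), 3))  # → 0.486
```

    Values near 0 indicate agreement no better than chance; the study's 0.14 falls in the conventional "slight" band.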

  10. A comparison of two methods for retrieving ICD-9-CM data: the effect of using an ontology-based method for handling terminology changes.

    Science.gov (United States)

    Yu, Alexander C; Cimino, James J

    2011-04-01

    Most existing controlled terminologies can be characterized as collections of terms, wherein the terms are arranged in a simple list or organized in a hierarchy. These kinds of terminologies are considered useful for standardizing terms and encoding data and are currently used in many existing information systems. However, they suffer from a number of limitations that make data reuse difficult. Relatively recently, it has been proposed that formal ontological methods can be applied to some of the problems of terminological design. Biomedical ontologies organize concepts (embodiments of knowledge about biomedical reality), whereas terminologies organize terms (what is used to code patient data at a certain point in time, based on the particular terminology version). However, the application of these methods to existing terminologies is not straightforward. The use of these terminologies is firmly entrenched in many systems, and what might seem to be the simple option of replacing them is not possible. Moreover, these terminologies evolve over time in order to suit the needs of users. Any methodology must therefore take these constraints into consideration, hence the need for formal methods of managing changes. Along these lines, we have developed a formal representation of the concept-term relation, around which we have also developed a methodology for the management of terminology changes. The objective of this study was to determine whether our methodology would result in improved retrieval of data. Two methods for retrieving data encoded with terms from the International Classification of Diseases (ICD-9-CM) were compared on their recall when retrieving data for ICD-9-CM terms whose codes had changed but which had retained their original meaning (code change); the outcome measures were recall and the interclass correlation coefficient. Statistically significant differences were detected in favour of the ontology-based ICD-9-CM data retrieval method that takes into account the effects of

  11. Comparison of fatigue accumulated during and after prolonged robotic and laparoscopic surgical methods: a cross-sectional study.

    Science.gov (United States)

    González-Sánchez, Manuel; González-Poveda, Ivan; Mera-Velasco, Santiago; Cuesta-Vargas, Antonio I

    2017-03-01

    The aim of the present study was to analyse the fatigue experienced by surgeons during and after performing robotic and laparoscopic surgery and to analyse muscle function, self-perceived fatigue and postural balance. Cross-sectional study considering two surgical protocols (laparoscopic and robotic) with two different roles (chief and assistant surgeon). Fatigue was recorded in two ways: pre- and post-surgery using questionnaires [Profile of Mood States (POMS), Quick Questionnaire Piper Fatigue Scale and Visual Analogue Scale (VAS)-related fatigue] and parametrising functional tests [handgrip and single-leg balance test (SLBT)] and during the intervention by measuring the muscle activation of eight different muscles via surface electromyography and kinematic measurement (using inertial sensors). Each surgery profile intervention (robotic/laparoscopy-chief/assistant surgeon) was measured three times, totalling 12 measured surgery interventions. The minimal duration of surgery was 180 min. Pre- and post-surgery, all questionnaires showed that the magnitude of change was higher for the chief surgeon compared with the assistant surgeon, with differences of between 10 % POMS and 16.25 % VAS (robotic protocol) and between 3.1 % POMS and 12.5 % VAS (laparoscopic protocol). In the inter-profile comparison, the chief surgeon (robotic protocol) showed a lower balance capacity during the SLBT after surgery. During the intervention, the kinematic variables showed significant differences between the chief and assistant surgeon in the robotic protocol, but not in the laparoscopic protocol. Regarding muscle activation, there was not enough muscle activity to generate fatigue. Prolonged surgery increased fatigue in the surgeon; however, the magnitude of fatigue differed between surgical profiles. The surgeon who experienced the greatest fatigue was the chief surgeon in the robotic protocol.

  12. Comparison of two methods for extraction of volatiles from marine PL emulsions

    DEFF Research Database (Denmark)

    Lu, Henna Fung Sieng; Nielsen, Nina Skall; Jacobsen, Charlotte

    2013-01-01

    The dynamic headspace (DHS) thermal desorption principle using a Tenax GR tube, as well as the solid phase micro‐extraction (SPME) tool with a carboxen/polydimethylsiloxane 50/30 µm CAR/PDMS SPME fiber, both coupled to GC/MS, were implemented for the isolation and identification of both lipid...... and Strecker derived volatiles in marine phospholipid (PL) emulsions. The volatile extraction efficiency of the two methods was compared. For marine PL emulsions with a highly complex headspace composition of volatiles, a fiber saturation problem was encountered when using CAR/PDMS‐SPME for volatiles...... analysis. However, the CAR/PDMS‐SPME technique was efficient for lipid oxidation analysis in emulsions with a less complex headspace. The SPME method extracted volatiles of lower molecular weights more efficiently than the DHS method. On the other hand, DHS Tenax GR appeared to be more efficient in extracting...

  13. A comparison of partial order technique with three methods of multi-criteria analysis for ranking of chemical substances.

    Science.gov (United States)

    Lerche, Dorte; Brüggemann, Rainer; Sørensen, Peter; Carlsen, Lars; Nielsen, Ole John

    2002-01-01

    An alternative to the often cumbersome and time-consuming risk assessment of chemical substances could be more reliable and advanced priority-setting methods. An elaboration of simple scoring methods is provided by the Hasse Diagram Technique (HDT) and/or Multi-Criteria Analysis (MCA). The present study provides an in-depth evaluation of HDT relative to three MCA techniques. The new and main methodological step in the comparison is the use of probability concepts based on mathematical tools such as linear extensions of partially ordered sets and Monte Carlo simulations. A data set consisting of 12 High Production Volume Chemicals (HPVCs) is used for illustration. A guiding principle of this investigation is that the need for external input (often subjective weighting of criteria) should be minimized and that transparency should be maximized in any multicriteria prioritisation. The study illustrates that the Hasse Diagram Technique needs the least external input, is the most transparent and is the least subjective. However, HDT has some weaknesses if there are criteria that exclude each other; then weighting is needed. Multi-Criteria Analysis (i.e. the Utility Function approach, PROMETHEE and concordance analysis) can deal with such mutual exclusions because their formalisms for quantifying preferences allow participation, e.g. weighting of criteria. Consequently, the MCA methods include more subjectivity and lose transparency. The recommendation arising from this study is to run HDT as the first step in decision making, and possibly to run one of the MCA algorithms as the second step.
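    The core of the Hasse Diagram Technique is the dominance relation of a partial order: one substance precedes another only if it scores at least as high on every criterion and strictly higher on at least one; otherwise the pair is incomparable, which is precisely where MCA weighting has to step in. A minimal sketch with hypothetical hazard scores (not the study's 12 HPVCs):

```python
def dominates(a, b):
    """True if a >= b on every criterion and a > b on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical hazard criteria (higher = worse):
# (persistence, bioaccumulation, toxicity)
chemicals = {
    "A": (0.9, 0.8, 0.7),
    "B": (0.5, 0.6, 0.4),
    "C": (0.7, 0.2, 0.9),
}

# Print every order relation of the partial order
for x in chemicals:
    for y in chemicals:
        if x != y and dominates(chemicals[x], chemicals[y]):
            print(f"{x} > {y}")  # → only "A > B" is printed
```

    Here A dominates B, but B and C are incomparable (each is worse on a different criterion), so HDT alone cannot rank them without the subjective weighting that MCA introduces.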

  14. Comparison study of time history and response spectrum responses for multiply supported piping systems

    International Nuclear Information System (INIS)

    Wang, Y.K.; Subudhi, M.; Bezler, P.

    1983-01-01

    In the past decade, several investigators have studied the problem of independent support excitation of a multiply supported piping system to identify the real need for such an analysis. This approach offers an increase in accuracy at a small increase in computational cost. To assess the method, studies based on the response spectrum approach using independent support motions for each group of commonly connected supports were performed. The results obtained from this approach were compared with the conventional envelope spectrum and time history solutions. The present study includes a mathematical formulation of the independent support motion analysis method suitable for implementation in an existing all-purpose piping code, PSAFE2, and a comparison of the solutions for some typical piping systems using both Time History and Response Spectrum Methods. The results obtained from the Response Spectrum Methods represent the upper bound solution at most points in the piping system. Similarly, the Seismic Anchor Movement analysis based on the SRP method overpredicts the responses near the support points and underpredicts at points away from the supports

  15. The Value of Mixed Methods Research: A Mixed Methods Study

    Science.gov (United States)

    McKim, Courtney A.

    2017-01-01

    The purpose of this explanatory mixed methods study was to examine the perceived value of mixed methods research for graduate students. The quantitative phase was an experiment examining the effect of a passage's methodology on students' perceived value. Results indicated students scored the mixed methods passage as more valuable than those who…

  16. Kernel method for air quality modelling. II. Comparison with analytic solutions

    Energy Technology Data Exchange (ETDEWEB)

    Lorimer, G S; Ross, D G

    1986-01-01

    The performance of Lorimer's (1986) kernel method for solving the advection-diffusion equation is tested for instantaneous and continuous emissions into a variety of model atmospheres. Analytical solutions are available for comparison in each case. The results indicate that a modest minicomputer is quite adequate for obtaining satisfactory precision even for the most trying test performed here, which involves a diffusivity tensor and wind speed which are nonlinear functions of the height above ground. Simulations of the same cases by the particle-in-cell technique are found to provide substantially lower accuracy even when use is made of greater computer resources.
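    Verification exercises of this kind pit a numerical solution of the advection-diffusion equation against a closed-form result. As a toy version of such a test (a plain random-walk particle model, not Lorimer's kernel method itself), an instantaneous 1-D release with constant diffusivity K should reproduce the analytic Gaussian spread σ = √(2Kt):

```python
import math
import random
import statistics

random.seed(42)

# Instantaneous point release diffusing in 1-D with constant diffusivity K.
# The analytic solution is a Gaussian whose standard deviation is sqrt(2*K*t);
# a random-walk particle simulation should converge to it.
K, t, n_steps, n_particles = 10.0, 100.0, 100, 4000
dt = t / n_steps
step_sigma = math.sqrt(2.0 * K * dt)   # per-step displacement scale

final_x = []
for _ in range(n_particles):
    x = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, step_sigma)
    final_x.append(x)

sigma_analytic = math.sqrt(2.0 * K * t)
sigma_particles = statistics.stdev(final_x)
print(f"analytic sigma = {sigma_analytic:.1f}, particle sigma = {sigma_particles:.1f}")
```

    The abstract's point about particle-in-cell accuracy is exactly this kind of discrepancy: with finite particle counts, the sampled spread carries statistical noise that a kernel (smooth-density) method can avoid.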

  17. Ground Snow Measurements: Comparisons of the Hotplate, Weighing and Manual Methods

    Science.gov (United States)

    Wettlaufer, A.; Snider, J.; Campbell, L. S.; Steenburgh, W. J.; Burkhart, M.

    2015-12-01

    The Yankee Environmental Systems (YES) Hotplate was developed to avoid some of the problems associated with weighing snowfall sensors. This work compares Hotplate, weighing sensor (ETI NOAH-II) and manual measurements of liquid-equivalent depth. The main field site was at low altitude in western New York; Hotplate and ETI comparisons were also made at two forested subalpine sites in southeastern Wyoming. The manual measurement (only conducted at the New York site) was derived by weighing snow cores sampled from a snow board. The two recording gauges (Hotplate and ETI) were located within 5 m of the snow board. Hotplate-derived accumulations were corrected using a wind-speed-dependent catch efficiency, and the ETI orifice was heated and Alter-shielded. Three important findings are evident from the comparisons: 1) The YES-derived accumulations, recorded in a user-accessible file, were compared to accumulations derived using an in-house calibration and fundamental measurements (plate power, long- and shortwave radiances, wind speed, and temperature). These accumulations are highly correlated (N=24; r2=0.99), but the YES-derived values are larger by 20%. 2) The in-house Hotplate accumulations are in good agreement with ETI-based accumulations, but with larger variability (N=24; r2=0.88). 3) The comparison of in-house Hotplate accumulation versus manual accumulation, expressed as mm of liquid, exhibits a fitted linear relationship Y (in-house) versus X (manual) given by Y = -0.2 (±1.4) + 0.9 (±0.1) · X (N=20; r2=0.89). Thus, these two methods agree within statistical uncertainty.
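    Finding 3 is an ordinary least-squares fit of in-house Hotplate accumulation against manual accumulation. A minimal sketch of how such a slope, intercept and r² are obtained, using hypothetical paired accumulations rather than the study's 20 events:

```python
def linear_fit(x, y):
    """Ordinary least squares: returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)
    return slope, intercept, r2

# Hypothetical paired liquid-equivalent accumulations (mm)
manual = [2.0, 5.0, 8.0, 12.0, 20.0]      # X: snow-board cores
hotplate = [1.0, 5.5, 6.0, 11.5, 17.0]    # Y: in-house Hotplate retrieval
slope, intercept, r2 = linear_fit(manual, hotplate)
print(f"Y = {intercept:.2f} + {slope:.2f}*X, r^2 = {r2:.2f}")
```

    A slope near 1 with an intercept whose uncertainty straddles zero is what lets the authors conclude the two methods agree within statistical uncertainty.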

  18. Comparison of radioimmunological and enzyme-immunological methods of determination in thyroid function studies

    International Nuclear Information System (INIS)

    Arnold, L.P.

    1984-01-01

    During the study described here, parallel investigations were carried out using radioimmunoassays (RIA) and enzyme immunoassays (EIA) in order to assess the quality and diagnostic value of both techniques, as well as to weigh up their relative advantages and disadvantages in connection with thyroid function studies. The following comparisons were made: T4 RIA versus T4 EIA, T3 RIA versus T3 EIA, T3 uptake versus thyroxine binding capacity, and the total balance of free hormone bonds. Correlation coefficients of 0.982 for T4 and 0.989 for T3 proved that the test results obtained using either RIA or EIA were in fairly good agreement. Less satisfactory correlation was suggested by a coefficient of 0.807 between the tests for thyroxine binding capacity and T3 uptake, which was even seen to decrease with increasing binding capacity, so that the greatest deviations were observed in hyperthyroidism. The same holds true for the free T4 index and total balance, where the correlation coefficient is altered in a similar way by the inclusion of a thyroxine binding capacity EIA. Better agreement was achieved by optimal adjustment of the defined diagnostic ranges for hypothyroidism, euthyroidism and hyperthyroidism. The EIA results were biased in the presence of hyperbilirubinemia. Intra-assay and inter-assay variations were analysed and found to range from 8.1 to 16.9% and 13.6 to 28.1%, respectively. EIA offers advantages over RIA inasmuch as it is free from radioactivity and does not require any special safety measures, although it was judged considerably less favourable than RIA in terms of sensitivity. (TRV)

  19. A comparison of non-integrating reprogramming methods

    Science.gov (United States)

    Schlaeger, Thorsten M; Daheron, Laurence; Brickler, Thomas R; Entwisle, Samuel; Chan, Karrie; Cianci, Amelia; DeVine, Alexander; Ettenger, Andrew; Fitzgerald, Kelly; Godfrey, Michelle; Gupta, Dipti; McPherson, Jade; Malwadkar, Prerana; Gupta, Manav; Bell, Blair; Doi, Akiko; Jung, Namyoung; Li, Xin; Lynes, Maureen S; Brookes, Emily; Cherry, Anne B C; Demirbas, Didem; Tsankov, Alexander M; Zon, Leonard I; Rubin, Lee L; Feinberg, Andrew P; Meissner, Alexander; Cowan, Chad A; Daley, George Q

    2015-01-01

    Human induced pluripotent stem cells (hiPSCs) are useful in disease modeling and drug discovery, and they promise to provide a new generation of cell-based therapeutics. To date there has been no systematic evaluation of the most widely used techniques for generating integration-free hiPSCs. Here we compare Sendai-viral (SeV), episomal (Epi) and mRNA transfection methods using a number of criteria. All methods generated high-quality hiPSCs, but significant differences existed in aneuploidy rates, reprogramming efficiency, reliability and workload. We discuss the advantages and shortcomings of each approach, and present and review the results of a survey of a large number of human reprogramming laboratories on their independent experiences and preferences. Our analysis provides a valuable resource to inform the use of specific reprogramming methods for different laboratories and different applications, including clinical translation. PMID:25437882

  20. Gastroesophageal reflux diagnosis by scintigraphic method - a comparison among different methods

    International Nuclear Information System (INIS)

    Cruz, Maria das Gracas de Almedia; Penas, Maria Exposito; Maliska, Carmelindo; Fonseca, Lea Miriam B. da; Lemme, Eponina M.; Campos, Deisy Guacyaba S.; Rebelo, Ana Maria de O.; Martinho, Maria Jose R.; Dias, Vera Maria

    1996-01-01

    Gastroesophageal reflux disease (GERD) is characterized by the return of gastric contents to the esophagus, which can give rise to reflux esophagitis. The aim was to evaluate the contribution of dynamic reflux scintigraphy (CTR) to the diagnosis of GERD. A group of patients with typical symptoms was studied, and the results were compared with those of digestive endoscopy (EDA), histopathology and 24-hour pH-metry. Twenty-four healthy individuals and 97 patients were evaluated; the healthy individuals were submitted to CTR. In 20 controls and 34 patients, the reflux index (IR) was determined as the percentage of refluxed material relative to the basal activity. All patients underwent CTR and EDA, which classified them into group A, with esophagitis, 45 patients (46.4%), of whom 14 underwent biopsy of the distal third of the esophagus and 16 underwent prolonged pH-metry; and group B, without esophagitis, 52 patients (53.6%), of whom 26 underwent biopsy and 36 pH-metry. In group A, CTR was positive in 38 (84%), biopsy showed esophagitis in 11 (78.6%) and pH-metry was positive in 14 (87.5%). The positivity of the same examinations in group B was 42 (81%), 11 (42.3%) and 19 (53%), respectively. The mean IR was 59% in controls and 106% in patients (p<0.0001, Mann-Whitney test). The correlation of CTR with the other methods showed a sensitivity of 84.1%, specificity of 95.8%, positive predictive value of 98.3% and negative predictive value of 67.7%. The authors concluded that the scintigraphic method can confirm the diagnosis of GERD in patients with typical symptoms and can be recommended as the initial investigation method under these conditions. (author)
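    The reported sensitivity, specificity and predictive values all derive from a 2×2 table of scintigraphy results against the reference diagnosis. A sketch with hypothetical counts (chosen only to land near the reported percentages, not taken from the paper):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics, returned as percentages."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),  # positives correctly detected
        "specificity": 100.0 * tn / (tn + fp),  # negatives correctly excluded
        "ppv": 100.0 * tp / (tp + fp),          # positive predictive value
        "npv": 100.0 * tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts: scintigraphy (CTR) result vs. reference diagnosis
m = diagnostic_metrics(tp=58, fp=1, fn=11, tn=23)
for name, value in m.items():
    print(f"{name}: {value:.1f}%")
```

    With these illustrative counts the four metrics come out close to the study's 84.1/95.8/98.3/67.7%, showing how a high PPV can coexist with a modest NPV when the test misses a share of true positives.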

  1. A comparison of entropy balance and probability weighting methods to generalize observational cohorts to a population: a simulation and empirical example.

    Science.gov (United States)

    Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C

    2017-04-01

    We compared methods to control bias and confounding in observational studies, including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation using the absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort, and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost, indicating that the IPW and sIPW weighting was ineffective. Entropy balance demonstrated the bias-variance tradeoff, achieving higher estimate accuracy yet lower estimate precision compared with the IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd.
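    Covariate balance in this abstract is judged by the absolute standardized mean difference (ASMD ≤ 0.10). A sketch of a weighted ASMD for a single covariate, using hypothetical ages and weights and a pooled-SD denominator (conventions for the denominator vary across software):

```python
import math

def weighted_mean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_var(x, w):
    m = weighted_mean(x, w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / sum(w)

def asmd(x_cohort, w_cohort, x_target):
    """Absolute standardized mean difference between a weighted cohort
    and an unweighted target, using a pooled-SD convention."""
    ones = [1.0] * len(x_target)
    pooled_sd = math.sqrt(
        (weighted_var(x_cohort, w_cohort) + weighted_var(x_target, ones)) / 2
    )
    diff = weighted_mean(x_cohort, w_cohort) - weighted_mean(x_target, ones)
    return abs(diff) / pooled_sd

# Hypothetical ages: a plan cohort reweighted toward a survey target
cohort_age = [34, 41, 52, 60, 45, 38]
weights = [1.4, 1.1, 0.6, 0.4, 0.9, 1.3]   # e.g. from entropy balancing
target_age = [33, 39, 44, 50, 36, 42]
print(round(asmd(cohort_age, weights, target_age), 3))
```

    A method "achieves balance" on this covariate when the weighted ASMD drops to 0.10 or below; EB enforces such moment constraints directly when solving for the weights, whereas IPW only reaches balance indirectly through the propensity model.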

  2. Radiolysis: an efficient method of studying radicalar antioxidant mechanisms

    International Nuclear Information System (INIS)

    Gardes-Albert, M.; Jore, D.

    1998-01-01

The use of the radiolysis method for studying radicalar antioxidant mechanisms offers the following possibilities: 1- quantitative evaluation of the antioxidant activity of molecules soluble in aqueous or non-aqueous media (oxidation yields, molecular mechanisms, rate constants), 2- evaluation of the yield of prevention of polyunsaturated fatty acid peroxidation, 3- evaluation of antioxidant activity towards biological systems such as liposomes or low-density lipoproteins (LDL), 4- simple comparison, in different model systems, of drug effects versus natural antioxidants. (authors)

  3. Multivariate normative comparison, a novel method for more reliably detecting cognitive impairment in HIV infection

    NARCIS (Netherlands)

    Su, Tanja; Schouten, Judith; Geurtsen, Gert J.; Wit, Ferdinand W.; Stolte, Ineke G.; Prins, Maria; Portegies, Peter; Caan, Matthan W. A.; Reiss, Peter; Majoie, Charles B.; Schmand, Ben A.

    2015-01-01

The objective of this study is to assess whether multivariate normative comparison (MNC) improves detection of HIV-1-associated neurocognitive disorder (HAND) as compared with Frascati and Gisslén criteria. One hundred and three HIV-1-infected men with suppressed viremia on combination

  4. Calibrations of pocket dosemeters using a comparison method

    International Nuclear Information System (INIS)

    Somarriba V, I.

    1996-01-01

This monograph is dedicated mainly to the calibration of pocket dosemeters. Various types of radiation sources used in hospitals and different radiation detectors, with emphasis on ionization chambers, are briefly presented. Calibration methods based on the use of a reference dosemeter were developed to calibrate all pocket dosemeters existing at the Radiation Physics and Metrology Laboratory. Some of these dosemeters were used in personnel dosimetry at hospitals. Moreover, a study was conducted on factors that affect measurements with pocket dosemeters in the long term, such as discharges due to cosmic radiation. A DBASE IV program was developed to store the information included in the hospital's registry

  5. A study of biodiversity using DSS method and seed storage protein comparison of populations in two species of Achillea L. in the west of Iran

    Directory of Open Access Journals (Sweden)

    Hajar Salehi

    2013-11-01

Full Text Available Intraspecific and interspecific variations are the main reserves of biodiversity, and both are important sources of speciation. On this basis, identifying and recognizing intra- and interspecific variations is important for the recognition of biodiversity. This research was carried out to study biodiversity and to compare, by electrophoresis, the seed storage proteins of populations of two species of the genus Achillea in Hamadan and Kurdistan provinces, using the method of determination of special stations (DSS). For this purpose, 12 and 9 special stations were selected for the species A. tenuifolia and A. biebersteinii, respectively, using the data published in the related flora. Seed storage proteins were extracted and then studied using an electrophoresis technique (SDS-PAGE). In the survey of all special stations, 120 plant species were distinguished as associated species. The results of the floristic data for both species determined six distinctive groups, indicating the existence of intraspecific diversity in these species. The results of the analysis of ecological data and seed storage proteins for the two species were in accordance with the floristic data and showed six distinctive groups. The existence of bands no. 4, 5, 8, 12 and 13 in the special stations of A. tenuifolia and of bands no. 14, 15 and 16 in the special stations of A. biebersteinii separated the populations of the two species into two quite different and distinctive groups.

  6. Comparison of meaningful learning characteristics in simulated nursing practice after traditional versus computer-based simulation method: a qualitative videography study.

    Science.gov (United States)

    Poikela, Paula; Ruokamo, Heli; Teräs, Marianne

    2015-02-01

Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods supported students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in simulated nursing practice settings at two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used. The data were collected using video recordings and analyzed by videography. The students who used a computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students are able to receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Comparison between Hilbert-Huang transform and scalogram methods on non-stationary biomedical signals: application to laser Doppler flowmetry recordings

    International Nuclear Information System (INIS)

    Roulier, Remy; Humeau, Anne; Flatley, Thomas P; Abraham, Pierre

    2005-01-01

A significant transient increase in laser Doppler flowmetry (LDF) signals is observed in response to a local and progressive cutaneous pressure application in healthy subjects. This reflex may be impaired in diabetic patients. This work presents a comparison between two signal processing methods that help clarify this phenomenon. Analyses by the scalogram and the Hilbert-Huang transform (HHT) of LDF signals recorded at rest and during a local and progressive cutaneous pressure application are performed on healthy and type 1 diabetic subjects. Three frequency bands, corresponding to myogenic, neurogenic and endothelial-related metabolic activities, are studied at different time intervals in order to take into account the dynamics of the phenomenon. The results show that both the scalogram and the HHT methods lead to the same conclusions concerning the comparisons of the myogenic, neurogenic and endothelial-related metabolic activities, during the progressive pressure and at rest, in healthy and diabetic subjects. However, the HHT shows more details that may be obscured by the scalogram. Indeed, the non-locally adaptive limitations of the scalogram can remove some definition from the data. These results may improve knowledge of the above-mentioned reflex as well as of non-stationary biomedical signal processing methods
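For readers unfamiliar with the scalogram side of this comparison, the sketch below computes a Morlet-wavelet scalogram of a synthetic LDF-like signal by direct convolution and recovers the dominant (myogenic-band) frequency. It is a minimal illustration, not the authors' pipeline; the sampling rate, frequency grid and `morlet_scalogram` helper are assumptions, and dedicated time-frequency packages offer faster, validated implementations.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_scalogram(x, fs, freqs, w0=6.0):
    """Scalogram |CWT|^2 using an L2-normalised complex Morlet wavelet."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                  # scale for centre frequency f
        tau = np.arange(-4 * s, 4 * s, 1.0 / fs)  # wavelet support, +/- 4 scales
        psi = np.exp(1j * w0 * tau / s) * np.exp(-(tau / s) ** 2 / 2) / np.sqrt(s)
        out[i] = np.abs(fftconvolve(x, psi, mode="same")) ** 2
    return out

fs = 20.0                                # Hz, an assumed LDF-like sampling rate
t = np.arange(0, 300, 1 / fs)            # 5-minute synthetic recording
# synthetic signal: a 0.1 Hz (myogenic-band) oscillation plus noise
x = np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.random.default_rng(3).normal(size=t.size)
freqs = np.linspace(0.05, 1.6, 40)
S = morlet_scalogram(x, fs, freqs)
peak_freq = freqs[S.mean(axis=1).argmax()]
print(f"dominant frequency near {peak_freq:.2f} Hz")
```

Averaging the scalogram over separate time intervals, as the study does, would expose how each band's power evolves during the pressure application.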

  8. Comparison of the effectiveness of sterilizing endodontic files by 4 different methods: An in vitro study

    Directory of Open Access Journals (Sweden)

    Venkatasubramanian R

    2010-03-01

Full Text Available Sterilization is the best method to counter the threat of microorganisms. The purpose of sterilization in the field of health care is to prevent the spread of infectious diseases. In dentistry, it primarily relates to processing reusable instruments to prevent cross-infection. The aim of this study was to investigate the efficacy of 4 methods of sterilizing endodontic instruments: autoclaving, carbon dioxide laser sterilization, chemical sterilization (with glutaraldehyde) and glass-bead sterilization. The endodontic files were sterilized by the 4 different methods after contamination with Bacillus stearothermophilus and then checked for sterility by incubation in test tubes containing thioglycollate medium. The study showed that the files sterilized by autoclave and laser were completely sterile. Those sterilized by glass bead were 90% sterile and those with glutaraldehyde were 80% sterile. The study concluded that autoclave or laser could be used as methods of sterilization in clinical practice; in advanced clinics, the laser can also be used as a chair-side method of sterilization.

  9. Study of the orbital correction method

    International Nuclear Information System (INIS)

    Meserve, R.A.

    1976-01-01

    Two approximations of interest in atomic, molecular, and solid state physics are explored. First, a procedure for calculating an approximate Green's function for use in perturbation theory is derived. In lowest order it is shown to be equivalent to treating the contribution of the bound states of the unperturbed Hamiltonian exactly and representing the continuum contribution by plane waves orthogonalized to the bound states (OPW's). If the OPW approximation were inadequate, the procedure allows for systematic improvement of the approximation. For comparison purposes an exact but more limited procedure for performing second-order perturbation theory, one that involves solving an inhomogeneous differential equation, is also derived. Second, the Kohn-Sham many-electron formalism is discussed and formulae are derived and discussed for implementing perturbation theory within the formalism so as to find corrections to the total energy of a system through second order in the perturbation. Both approximations were used in the calculation of the polarizability of helium, neon, and argon. The calculation included direct and exchange effects by the Kohn-Sham method and full self-consistency was demanded. The results using the differential equation method yielded excellent agreement with the coupled Hartree-Fock results of others and with experiment. Moreover, the OPW approximation yielded satisfactory comparison with the results of calculation by the exact differential equation method. Finally, both approximations were used in the calculation of properties of hydrogen fluoride and methane. The appendix formulates a procedure using group theory and the internal coordinates of a molecular system to simplify the calculation of vibrational frequencies

  10. Comparison of methods to determine methane emissions from dairy cows in farm conditions.

    Science.gov (United States)

    Huhtanen, P; Cabezas-Garcia, E H; Utsumi, S; Zimmerman, S

    2015-05-01

Nutritional and animal-selection strategies to mitigate enteric methane (CH4) depend on accurate, cost-effective methods to determine emissions from a large number of animals. The objective of the present study was to compare 2 spot-sampling methods to determine CH4 emissions from dairy cows, using gas quantification equipment installed in concentrate feeders or automatic milking stalls. In the first method (sniffer method), CH4 and carbon dioxide (CO2) concentrations were measured in close proximity to the muzzle of the animal, and the average CH4 concentration or CH4/CO2 ratio was calculated. In the second method (flux method), measurement of CH4 and CO2 concentrations was combined with an active airflow inside the feed troughs for capture of emitted gas and measurement of CH4 and CO2 fluxes. A muzzle sensor was used, allowing data to be filtered when the muzzle was not near the sampling inlet. In a laboratory study, a model cow head was built that emitted CO2 at a constant rate. It was found that CO2 concentrations using the sniffer method decreased by up to 39% when the distance of the muzzle from the sampling inlet increased to 30 cm, but no muzzle-position effects were observed for the flux method. The methods were compared in 2 on-farm studies conducted using 32 (experiment 1) or 59 (experiment 2) cows in a switch-back design of 5 (experiment 1) or 4 (experiment 2) periods for replicated comparisons between methods. Between-cow coefficient of variation (CV) in CH4 was smaller for the flux than the sniffer method (experiment 1, CV=11.0 vs. 17.5%, and experiment 2, 17.6 vs. 28.0%). Repeatability of the measurements from both methods was high (0.72-0.88), but the relationship between the sniffer and flux methods was weak (R(2)=0.09 in both experiments). With the flux method, CH4 was found to be correlated with dry matter intake and body weight, but this was not the case with the sniffer method. The CH4/CO2 ratio was more highly correlated between the flux and sniffer
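The between-cow CV and between-method R(2) statistics reported above can be illustrated on synthetic data: the sketch below simulates one spot measurement per cow for each method, giving the sniffer method extra muzzle-position noise, and reproduces the qualitative pattern (larger between-cow CV and weak agreement). All numbers are hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cows = 32
true_ch4 = rng.normal(400.0, 45.0, n_cows)           # hypothetical per-cow emission level
flux    = true_ch4 + rng.normal(0.0, 20.0, n_cows)   # flux method: airflow capture, low noise
sniffer = true_ch4 + rng.normal(0.0, 120.0, n_cows)  # sniffer: muzzle-position noise dominates

def between_cow_cv(x):
    """Between-cow coefficient of variation, in percent."""
    return 100.0 * x.std(ddof=1) / x.mean()

def r_squared(x, y):
    """Squared Pearson correlation between the two methods' estimates."""
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"CV flux    = {between_cow_cv(flux):.1f}%")
print(f"CV sniffer = {between_cow_cv(sniffer):.1f}%")
print(f"R^2 flux vs sniffer = {r_squared(flux, sniffer):.2f}")
```

Because the sniffer's measurement noise inflates apparent between-cow variation and dilutes the correlation with the flux method, the simulated pattern matches the direction of the study's findings.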

  11. A Comparison of Distillery Stillage Disposal Methods

    OpenAIRE

    V. Sajbrt; M. Rosol; P. Ditl

    2010-01-01

    This paper compares the main stillage disposal methods from the point of view of technology, economics and energetics. Attention is paid to the disposal of both solid and liquid phase. Specifically, the following methods are considered: a) livestock feeding, b) combustion of granulated stillages, c) fertilizer production, d) anaerobic digestion with biogas production and e) chemical pretreatment and subsequent secondary treatment. Other disposal techniques mentioned in the literature (electro...

  12. A comparison of two microscale laboratory reporting methods in a secondary chemistry classroom

    Science.gov (United States)

    Martinez, Lance Michael

    This study attempted to determine if there was a difference between the laboratory achievement of students who used a modified reporting method and those who used traditional laboratory reporting. The study also determined the relationships between laboratory performance scores and the independent variables score on the Group Assessment of Logical Thinking (GALT) test, chronological age in months, gender, and ethnicity for each of the treatment groups. The study was conducted using 113 high school students who were enrolled in first-year general chemistry classes at Pueblo South High School in Colorado. The research design used was the quasi-experimental Nonequivalent Control Group Design. The statistical treatment consisted of the Multiple Regression Analysis and the Analysis of Covariance. Based on the GALT, students in the two groups were generally in the concrete and transitional stages of the Piagetian cognitive levels. The findings of the study revealed that the traditional and the modified methods of laboratory reporting did not have any effect on the laboratory performance outcome of the subjects. However, the students who used the traditional method of reporting showed a higher laboratory performance score when evaluation was conducted using the New Standards rubric recommended by the state. Multiple Regression Analysis revealed that there was a significant relationship between the criterion variable student laboratory performance outcome of individuals who employed traditional laboratory reporting methods and the composite set of predictor variables. On the contrary, there was no significant relationship between the criterion variable student laboratory performance outcome of individuals who employed modified laboratory reporting methods and the composite set of predictor variables.

  13. A quantitative comparison of single-cell whole genome amplification methods.

    Directory of Open Access Journals (Sweden)

    Charles F A de Bourcy

Full Text Available Single-cell sequencing is emerging as an important tool for studies of genomic heterogeneity. Whole genome amplification (WGA) is a key step in single-cell sequencing workflows, and a multitude of methods have been introduced. Here, we compare three state-of-the-art methods on both bulk and single-cell samples of E. coli DNA: Multiple Displacement Amplification (MDA), Multiple Annealing and Looping Based Amplification Cycles (MALBAC), and the PicoPLEX single-cell WGA kit (NEB-WGA). We considered the effects of reaction gain on coverage uniformity, error rates and the level of background contamination. We compared the suitability of the different WGA methods for the detection of copy-number variations, for the detection of single-nucleotide polymorphisms and for de-novo genome assembly. No single method performed best across all criteria and significant differences in characteristics were observed; the choice of which amplifier to use will depend strongly on the details of the type of question being asked in any given experiment.
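Coverage uniformity, one of the criteria compared above, is often summarized by the coefficient of variation of per-bin read depth. The sketch below computes this score for two simulated amplification runs, one even and one with gain-dependent bias; the data and the `coverage_cv` helper are hypothetical, not from the paper.

```python
import numpy as np

def coverage_cv(depths):
    """Coefficient of variation of per-bin read depth:
    a simple uniformity score (lower = more even amplification)."""
    depths = np.asarray(depths, dtype=float)
    return depths.std() / depths.mean()

rng = np.random.default_rng(4)
# Hypothetical per-bin depths for two amplification runs of the same genome:
even_run   = rng.poisson(30, 1000)                    # uniform amplification
biased_run = rng.poisson(rng.gamma(2.0, 15.0, 1000))  # gain-dependent amplification bias
print(f"even CV = {coverage_cv(even_run):.2f}, "
      f"biased CV = {coverage_cv(biased_run):.2f}")
```

Both runs have the same mean depth (about 30x), so the CV isolates unevenness of amplification rather than total sequencing effort.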

  14. [Clinical evaluation of TANDEM PSA in Japanese cases and comparison with other methods].

    Science.gov (United States)

    Kuriyama, M; Yamamoto, N; Shinoda, I; Kawada, Y; Akimoto, S; Shimazaki, J

    1995-01-01

Clinical evaluation of TANDEM PSA, the most frequently used prostate-specific antigen (PSA) assay method in the world, and a comparison with other methods were performed in Japanese cases in a cooperative research study. The minimum detectable level of the method was found to be 0.50 ng of PSA per ml of serum, and 1.9 ng/ml was regarded as the upper normal value in Japanese males. The distribution of serum PSA showed a significant difference between benign prostate hypertrophy (BPH) cases and patients with stage C or D prostate cancer. Sero-diagnosis of prostate cancer at an early stage with TANDEM PSA was difficult. The correlation with other methods of PSA detection was very high. Furthermore, the method was very useful in following up the clinical course of prostate cancer patients. These findings suggest that PSA detection using TANDEM PSA is applicable even in Japanese cases, although the upper cut-off level is lower.

  15. Mixing studies at low grade vacuum pan using the radiotracer method

    International Nuclear Information System (INIS)

    Griffith, J.M

    1999-01-01

In this paper, some preliminary results achieved in the evaluation of the homogenization time at a vacuum pan for massecuite B and seed preparation, using two approaches of the radiotracer method, are presented. Practically no difference was detected between the on-line approach, using a small-size detector, and the sampling method in mixing studies performed on the high-grade massecuite. Results achieved during the trials performed at the vacuum station show that mechanical agitation, in comparison with normal agitation, improves the performance of mixing in high-grade massecuite B and in seed preparation at the vacuum pan

  17. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    Science.gov (United States)

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon

    2017-01-01

In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method, and thus to determine which method is more appropriate for predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.
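The two counting methods differ only in how the per-area labeling indices are combined: the average method takes their mean, while the hot spot method takes the maximum. A minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np

# Ki-67 counts from three representative areas of one tumor (hypothetical numbers):
# positively stained tumor cells and total tumor cells counted per area.
positive = np.array([120, 95, 210])
total    = np.array([1000, 1000, 1000])

per_area_li = 100.0 * positive / total   # Ki-67 LI (%) for each area
average_li  = per_area_li.mean()         # Average Method: mean over all areas
hot_spot_li = per_area_li.max()          # Hot Spot Method: highest-labeling area

delta_ki67 = hot_spot_li - average_li    # difference between the two methods
ha_ratio   = hot_spot_li / average_li    # H/A ratio (always >= 1.0)

print(f"average = {average_li:.1f}%, hot spot = {hot_spot_li:.1f}%, "
      f"delta = {delta_ki67:.1f}%, H/A = {ha_ratio:.2f}")
```

Since the hot spot is a maximum over sampled areas, it is more sensitive to which fields are chosen, which is consistent with the lower reproducibility reported above.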

  19. Standard setting: comparison of two methods.

    Science.gov (United States)

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
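The two standards can be reproduced on toy data: the norm-reference cut is the cohort mean minus one standard deviation of the raw scores, while the modified Angoff cut averages, across raters, the summed per-item probabilities that a borderline candidate answers correctly. The scores and ratings below are hypothetical, not the study's data.

```python
import statistics

scores = [48, 55, 60, 62, 65, 67, 70, 72, 74, 78, 81, 85]  # hypothetical raw MCQ scores (%)

# Norm-reference standard: pass mark = cohort mean minus one standard deviation.
norm_cut = statistics.mean(scores) - statistics.stdev(scores)

# Modified Angoff standard: each rater estimates, per item, the probability that
# a borderline candidate answers correctly; the cut is the mean of rater sums.
angoff_item_estimates = [               # hypothetical ratings, 2 raters x 5 items
    [0.60, 0.50, 0.70, 0.40, 0.55],
    [0.65, 0.45, 0.70, 0.50, 0.60],
]
angoff_cut = statistics.mean(sum(r) for r in angoff_item_estimates) / 5 * 100

passed_norm = sum(s >= norm_cut for s in scores)
passed_angoff = sum(s >= angoff_cut for s in scores)
print(f"norm cut = {norm_cut:.1f}, Angoff cut = {angoff_cut:.1f}")
print(f"pass: norm {passed_norm}/{len(scores)}, Angoff {passed_angoff}/{len(scores)}")
```

The norm-reference cut moves with the cohort's performance, whereas the Angoff cut depends only on rater judgments of the items, which is why the two methods can yield different pass rates on the same paper.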

  20. A comparison of food crispness based on the cloud model.

    Science.gov (United States)

    Wang, Minghui; Sun, Yonghai; Hou, Jumin; Wang, Xia; Bai, Xue; Wu, Chunhui; Yu, Libo; Yang, Jie

    2018-02-01

The cloud model is a typical model that transforms a qualitative concept into a quantitative description. The cloud model has seldom been used in texture studies. The purpose of this study was to apply the cloud model to food crispness comparison. The acoustic signals of carrots, white radishes, potatoes, Fuji apples, and crystal pears were recorded during compression, and three time-domain signal characteristics were extracted: sound intensity, maximum short-time frame energy, and waveform index. The three signal characteristics and the cloud model were used to compare the crispness of the samples mentioned above. The crispness based on the Ex value of the cloud model, in descending order, was carrot > potato > white radish > Fuji apple > crystal pear. To verify the results of the acoustic signals, mechanical measurement and sensory evaluation were conducted. The results of the two verification experiments confirmed the feasibility of the cloud model. The microstructures of the five samples were also analyzed. The microstructure parameters were negatively correlated with crispness (p < 0.05). The cloud model method can be used for crispness comparison of different kinds of foods. The method is more accurate than traditional methods such as mechanical measurement and sensory evaluation. The cloud model method can also be applied extensively to other texture studies. © 2017 Wiley Periodicals, Inc.
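The Ex value used above to rank crispness is one of the three numerical characteristics of a cloud model (Ex, En, He), which a backward cloud generator estimates from samples. A minimal sketch on hypothetical crispness data (a common textbook estimator, not necessarily the authors' exact procedure):

```python
import numpy as np

def backward_cloud(x):
    """One-dimensional backward cloud generator: estimates the cloud model's
    numerical characteristics (Ex, En, He) from sample data."""
    ex = x.mean()                                        # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()    # entropy En
    he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))      # hyper-entropy He
    return ex, en, he

rng = np.random.default_rng(2)
# Hypothetical crispness indicator (e.g. waveform index) for two foods:
carrot = rng.normal(8.0, 0.6, 200)
pear   = rng.normal(3.5, 0.6, 200)

ex_c, en_c, he_c = backward_cloud(carrot)
ex_p, en_p, he_p = backward_cloud(pear)
print(f"carrot Ex={ex_c:.2f}  pear Ex={ex_p:.2f}")       # rank crispness by Ex
```

Ranking samples by Ex gives the descending crispness order described in the abstract, while En and He capture the fuzziness and dispersion of each food's crispness concept.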