WorldWideScience

Sample records for comparison adjustment method

  1. Comparison of different methods for liquid level adjustment in tank prover calibration

    International Nuclear Information System (INIS)

    Garcia, D A; Farias, E C; Gabriel, P C; Aquino, M H; Gomes, R S E; Aibe, V Y

    2015-01-01

    The adjustment of the liquid level during the calibration of fixed-volume tank provers is normally done by overfill, but it can be done in several ways. In this article four level adjustment techniques are compared: plate, pipette, ruler and overfill adjustment. The plate and pipette adjustment methods showed good agreement with the tank's nominal volume and the lowest uncertainty among the tested methods.

  2. A comparison of methods to adjust for continuous covariates in the analysis of randomised trials

    Directory of Open Access Journals (Sweden)

    Brennan C. Kahan

    2016-04-01

    Background: Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power, and potentially incorrect conclusions regarding treatment efficacy. Methods: We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. Results: Methods which kept covariates as continuous typically had higher power than methods which used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. Conclusions: For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
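    As a concrete illustration of options (b)-(d), here is a hedged sketch fitting a simulated two-arm trial with statsmodels; the data, the FP2 powers chosen, and the use of patsy's natural cubic spline cr() as a stand-in for a restricted cubic spline are assumptions for illustration, not the authors' code.

```python
# Sketch: linear vs flexible covariate adjustment in a simulated trial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(20, 80, n)                   # continuous baseline covariate
treat = rng.integers(0, 2, n)                # 1:1 randomisation
y = 0.002 * (x - 50) ** 2 + 0.5 * treat + rng.normal(0, 1, n)  # non-linear association
df = pd.DataFrame({"y": y, "x": x, "treat": treat})

# (b) assume a linear association
m_lin = smf.ols("y ~ treat + x", df).fit()
# (c) an FP2-style model (powers 1 and 2 fixed here for illustration;
#     true FP2 selects the best pair of powers from a candidate set)
m_fp2 = smf.ols("y ~ treat + x + I(x**2)", df).fit()
# (d) natural cubic spline with 3 df, a stand-in for a 3-knot restricted cubic spline
m_rcs = smf.ols("y ~ treat + cr(x, df=3)", df).fit()

for name, m in [("linear", m_lin), ("FP2", m_fp2), ("spline", m_rcs)]:
    ci = m.conf_int().loc["treat"]
    print(f"{name:7s} effect={m.params['treat']:.2f}  CI=({ci[0]:.2f}, {ci[1]:.2f})")
```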

  3. Comparison of Methods for Adjusting Incorrect Assignments of Items to Subtests: Oblique Multiple Group Method Versus Confirmatory Common Factor Method

    NARCIS (Netherlands)

    Stuive, Ilse; Kiers, Henk A.L.; Timmerman, Marieke E.

    2009-01-01

    A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare

  4. Empirical comparison of four baseline covariate adjustment methods in analysis of continuous outcomes in randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhang S

    2014-07-01

    Shiyuan Zhang,1 James Paul,2 Manyat Nantha-Aree,2 Norman Buckley,2 Uswa Shahzad,2 Ji Cheng,2 Justin DeBeer,5 Mitchell Winemaker,5 David Wismer,5 Dinshaw Punthakee,5 Victoria Avram,5 Lehana Thabane1–4 (1Department of Clinical Epidemiology and Biostatistics, 2Department of Anesthesia, McMaster University, Hamilton, ON, Canada; 3Biostatistics Unit/Centre for Evaluation of Medicines, St Joseph's Healthcare - Hamilton, Hamilton, ON, Canada; 4Population Health Research Institute, Hamilton Health Science/McMaster University; 5Department of Surgery, Division of Orthopaedics, McMaster University, Hamilton, ON, Canada). Background: Although seemingly straightforward, the statistical comparison of a continuous variable in a randomized controlled trial that has both a pre- and posttreatment score presents an interesting challenge for trialists. We present here an empirical application of four statistical methods (posttreatment scores with analysis of variance, analysis of covariance, change in scores, and percent change in scores), using data from a randomized controlled trial of postoperative pain in patients following total joint arthroplasty (the Morphine COnsumption in Joint Replacement Patients, With and Without GaBapentin Treatment, a RandomIzed ControlLEd Study [MOBILE] trial). Methods: Analysis of covariance (ANCOVA) was used to adjust for baseline measures and to provide an unbiased estimate of the mean group difference of the 1-year postoperative knee flexion scores in knee arthroplasty patients. Robustness tests were done by comparing ANCOVA with three comparative methods: the posttreatment scores, change in scores, and percentage change from baseline. Results: All four methods showed similar direction of effect; however, ANCOVA (-3.9; 95% confidence interval [CI]: -9.5, 1.6; P=0.15) and the posttreatment score (-4.3; 95% CI: -9.8, 1.2; P=0.12) method provided the highest precision of estimate compared with the change score (-3.0; 95% CI: -9.9, 3.8; P=0
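    A hedged sketch of the four estimators on simulated data; the names and values below are invented, not the MOBILE dataset.

```python
# Sketch: posttreatment-score ANOVA, ANCOVA, change score, and % change.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200
base = rng.normal(100, 15, n)                      # pretreatment score
treat = rng.integers(0, 2, n)
post = 0.6 * base + 2.0 * treat + rng.normal(0, 10, n)
df = pd.DataFrame({"base": base, "post": post, "treat": treat,
                   "change": post - base,
                   "pct_change": 100 * (post - base) / base})

estimators = {
    "post-score (ANOVA)": smf.ols("post ~ treat", df).fit(),
    "ANCOVA":             smf.ols("post ~ treat + base", df).fit(),
    "change score":       smf.ols("change ~ treat", df).fit(),
    "% change":           smf.ols("pct_change ~ treat", df).fit(),
}
for name, m in estimators.items():
    # ANCOVA typically yields the smallest standard error of the four
    print(f"{name:20s} diff={m.params['treat']:6.2f}  SE={m.bse['treat']:.2f}")
```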

  5. A comparison of two sleep spindle detection methods based on all night averages: individually adjusted versus fixed frequencies

    Directory of Open Access Journals (Sweden)

    Péter Przemyslaw Ujma

    2015-02-01

    Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Because of their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, of which a large number are available. We compared subject averages of the spindle parameters computed by a fixed-frequency (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) automatic detection algorithm and the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated in the two algorithms, but there is little overlap in fast spindle density and slow spindle parameters in general. The agreement between fixed and manually determined sleep spindle frequencies is limited, especially in the case of slow spindles. This is the most likely reason for the poor agreement between the two detection methods in the case of slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm primed to individual spindle frequencies is necessary for the detection of slow spindles, as well as of individual variations in the number of spindles in general.

  6. Exploring methods for comparing the real-world effectiveness of treatments for osteoporosis: adjusted direct comparisons versus using patients as their own control.

    Science.gov (United States)

    Karlsson, Linda; Mesterton, Johan; Tepie, Maurille Feudjo; Intorcia, Michele; Overbeek, Jetty; Ström, Oskar

    2017-09-21

    Using Swedish and Dutch registry data for women initiating bisphosphonates, we evaluated two methods of comparing the real-world effectiveness of osteoporosis treatments that attempt to adjust for differences in patient baseline characteristics. Each method has advantages and disadvantages; both are potential complements to clinical trial analyses. We evaluated methods of comparing the real-world effectiveness of osteoporosis treatments that attempt to adjust for both observed and unobserved confounding. Swedish and Dutch registry data for women initiating zoledronate or oral bisphosphonates (OBPs; alendronate/risedronate) were used; the primary outcome was fracture. In adjusted direct comparisons (ADCs), regression and matching techniques were used to account for baseline differences in known risk factors for fracture (e.g., age, previous fracture, comorbidities). In an own-control analysis (OCA), for each treatment, fracture incidence in the first 90 days following treatment initiation (the baseline risk period) was compared with fracture incidence in the 1-year period starting 91 days after treatment initiation (the treatment exposure period). In total, 1196 and 149 women initiating zoledronate and 14,764 and 25,058 initiating OBPs were eligible in the Swedish and Dutch registries, respectively. Owing to the small Dutch zoledronate sample, only the Swedish data were used to compare fracture incidences between treatment groups. ADCs showed a numerically higher fracture incidence in the zoledronate than in the OBPs group (hazard ratio 1.09-1.21; not statistically significant, p > 0.05). For both treatment groups, OCA showed a higher fracture incidence in the baseline risk period than in the treatment exposure period, indicating a treatment effect. OCA showed a similar or greater effect in the zoledronate group compared with the OBPs group. ADC and OCA each possesses advantages and disadvantages. Combining both methods may provide an estimate of real
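    A minimal sketch of the own-control analysis (OCA) logic, comparing a 90-day baseline window with a later one-year exposure window within the same treated cohort; only the 1196-patient cohort size comes from the abstract, the event counts are invented.

```python
# Sketch: own-control comparison of incidence rates within one cohort.
def incidence_rate(events: int, person_years: float) -> float:
    """Events per 1000 person-years."""
    return 1000.0 * events / person_years

n_patients = 1196                                   # Swedish zoledronate cohort size
baseline = incidence_rate(events=40, person_years=n_patients * 90 / 365.25)
exposure = incidence_rate(events=55, person_years=n_patients * 1.0)

print(f"baseline-period rate : {baseline:.1f} /1000 py")
print(f"exposure-period rate : {exposure:.1f} /1000 py")
# a ratio below 1 suggests fewer fractures once treatment has taken effect
print(f"rate ratio (exposure/baseline): {exposure / baseline:.2f}")
```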

  7. Accounting for center in the Early External Cephalic Version trials: an empirical comparison of statistical methods to adjust for center in a multicenter trial with binary outcomes.

    Science.gov (United States)

    Reitsma, Angela; Chu, Rong; Thorpe, Julia; McDonald, Sarah; Thabane, Lehana; Hutton, Eileen

    2014-09-26

    Clustering of outcomes at centers involved in multicenter trials is a type of center effect. The Consolidated Standards of Reporting Trials Statement recommends that multicenter randomized controlled trials (RCTs) should account for center effects in their analysis; however, most do not. The Early External Cephalic Version (EECV) trials published in 2003 and 2011 stratified by center at randomization but did not account for center in the analyses, and due to the nature of the intervention and the number of centers, may have been prone to center effects. Using data from the EECV trials, we undertook an empirical study to compare various statistical approaches to accounting for center effect while estimating the impact of external cephalic version timing (early or delayed) on the outcomes of cesarean section, preterm birth, and non-cephalic presentation at the time of birth. The data from the EECV pilot trial and the EECV2 trial were merged into one dataset. Fisher's exact method was used to test the overall effect of external cephalic version timing unadjusted for center effects. Seven statistical models that accounted for center effects were applied to the data: i) the Mantel-Haenszel test, ii) logistic regression with fixed center effect and fixed treatment effect, iii) center-size-weighted and iv) unweighted logistic regression with fixed center effect and fixed treatment-by-center interaction, v) logistic regression with random center effect and fixed treatment effect, vi) logistic regression with random center effect and random treatment-by-center interaction, and vii) generalized estimating equations. For each of the three outcomes of interest, the approaches taken to account for center effect did not alter the overall findings of the trial. The results were similar for the majority of the methods used to adjust for center, illustrating the robustness of the findings. Despite literature that suggests center effect can change the estimate of effect in
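    A hedged sketch of two of the seven approaches on simulated data (all numbers are invented, not the EECV dataset): logistic regression with fixed centre effects, and GEE clustering on centre.

```python
# Sketch: accounting for centre effect with fixed effects vs GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, k = 600, 12                                     # 600 women across 12 centres
centre = rng.integers(0, k, n)
early = rng.integers(0, 2, n)                      # early vs delayed ECV
lin = -0.3 + 0.4 * early + rng.normal(0, 0.5, k)[centre]   # centre-level shifts
csec = rng.binomial(1, 1 / (1 + np.exp(-lin)))
df = pd.DataFrame({"csec": csec, "early": early, "centre": centre})

# ii) fixed centre effect, fixed treatment effect
fixed = smf.logit("csec ~ early + C(centre)", df).fit(disp=False)
# vii) generalized estimating equations, exchangeable correlation within centres
gee = smf.gee("csec ~ early", groups="centre", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print("fixed-effect OR:", np.exp(fixed.params["early"]).round(2))
print("GEE OR         :", np.exp(gee.params["early"]).round(2))
```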

  8. The Impact of Adjustment for Socioeconomic Status on Comparisons of Cancer Incidence between Two European Countries

    International Nuclear Information System (INIS)

    Donnelly, D. W.; Gavin, A.; Hegarty, A.

    2013-01-01

    Cancer incidence rates vary considerably between countries and by socioeconomic status (SES). We investigate the impact of SES upon the relative cancer risk in two neighbouring countries. Methods. Data on 229,824 cases for 16 cancers diagnosed in 1995-2007 were extracted from the cancer registries in Northern Ireland (NI) and Republic of Ireland (RoI). Cancers in the two countries were compared using incidence rate ratios (IRRs) adjusted for age and age plus area-based SES. Results. Adjusting for SES in addition to age had a considerable impact on NI/RoI comparisons for cancers strongly related to SES. Before SES adjustment, lung cancer incidence rates were 11% higher for males and 7% higher for females in NI, while after adjustment, the IRR was not statistically significant. Cervical cancer rates were lower in NI than in RoI after adjustment for age (IRR: 0.90 (0.84-0.97)), with this difference increasing after adjustment for SES (IRR: 0.85 (0.79-0.92)). For cancers with a weak or nonexistent relationship to SES, adjustment for SES made little difference to the IRR. Conclusion. Socioeconomic factors explain some international variations but also obscure other crucial differences; thus, adjustment for these factors should not become part of international comparisons.

  9. The Impact of Adjustment for Socioeconomic Status on Comparisons of Cancer Incidence between Two European Countries

    Directory of Open Access Journals (Sweden)

    David W. Donnelly

    2013-01-01

    Background. Cancer incidence rates vary considerably between countries and by socioeconomic status (SES). We investigate the impact of SES upon the relative cancer risk in two neighbouring countries. Methods. Data on 229,824 cases for 16 cancers diagnosed in 1995–2007 were extracted from the cancer registries in Northern Ireland (NI) and Republic of Ireland (RoI). Cancers in the two countries were compared using incidence rate ratios (IRRs) adjusted for age and age plus area-based SES. Results. Adjusting for SES in addition to age had a considerable impact on NI/RoI comparisons for cancers strongly related to SES. Before SES adjustment, lung cancer incidence rates were 11% higher for males and 7% higher for females in NI, while after adjustment, the IRR was not statistically significant. Cervical cancer rates were lower in NI than in RoI after adjustment for age (IRR: 0.90 (0.84–0.97)), with this difference increasing after adjustment for SES (IRR: 0.85 (0.79–0.92)). For cancers with a weak or nonexistent relationship to SES, adjustment for SES made little difference to the IRR. Conclusion. Socioeconomic factors explain some international variations but also obscure other crucial differences; thus, adjustment for these factors should not become part of international comparisons.
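    A hedged sketch of estimating an age-adjusted and then age-plus-SES-adjusted incidence rate ratio (IRR) via Poisson regression on aggregated counts; the data below are simulated registry-style counts, not the NI/RoI data.

```python
# Sketch: IRR between two countries, adjusted for age, then age + SES.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for country in (0, 1):                    # 0 = RoI, 1 = NI (labels hypothetical)
    for age in range(5):                  # age bands
        for ses in range(5):              # area-based SES quintiles
            pop = rng.integers(20_000, 60_000)
            rate = 1e-4 * (1 + age) * (1 + 0.15 * ses) * (1.1 if country else 1.0)
            rows.append({"country": country, "age": age, "ses": ses,
                         "pop": pop, "cases": rng.poisson(rate * pop)})
df = pd.DataFrame(rows)

for formula in ("cases ~ country + C(age)",
                "cases ~ country + C(age) + C(ses)"):
    m = smf.glm(formula, df, family=sm.families.Poisson(),
                offset=np.log(df["pop"])).fit()   # log population as offset
    print(formula, "-> IRR =", np.exp(m.params["country"]).round(3))
```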

  10. An adjusted probability method for the identification of sociometric status in classrooms

    NARCIS (Netherlands)

    García Bacete, F.J.; Cillessen, A.H.N.

    2017-01-01

    Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of

  11. Method for adjusting warp measurements to a different board dimension

    Science.gov (United States)

    William T. Simpson; John R. Shelly

    2000-01-01

    Warp in lumber is a common problem that occurs while lumber is being dried. In research or other testing programs, it is sometimes necessary to compare warp of different species or warp caused by different process variables. If lumber dimensions are not the same, then direct comparisons are not possible, and adjusting warp to a common dimension would be desirable so...

  12. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematic treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  13. Direct comparison of risk-adjusted and non-risk-adjusted CUSUM analyses of coronary artery bypass surgery outcomes.

    Science.gov (United States)

    Novick, Richard J; Fox, Stephanie A; Stitt, Larry W; Forbes, Thomas L; Steiner, Stefan

    2006-08-01

    We previously applied non-risk-adjusted cumulative sum methods to analyze coronary bypass outcomes. The objective of this study was to assess the incremental advantage of risk-adjusted cumulative sum methods in this setting. Prospective data were collected in 793 consecutive patients who underwent coronary bypass grafting performed by a single surgeon during a period of 5 years. The composite occurrence of an "adverse outcome" included mortality or any of 10 major complications. An institutional logistic regression model for adverse outcome was developed by using 2608 contemporaneous patients undergoing coronary bypass. The predicted risk of adverse outcome in each of the surgeon's 793 patients was then calculated. A risk-adjusted cumulative sum curve was then generated after specifying control limits and odds ratio. This risk-adjusted curve was compared with the non-risk-adjusted cumulative sum curve, and the clinical significance of this difference was assessed. The surgeon's adverse outcome rate was 96 of 793 (12.1%) versus 270 of 1815 (14.9%) for all the other institution's surgeons combined (P = .06). The non-risk-adjusted curve reached below the lower control limit, signifying excellent outcomes between cases 164 and 313, 323 and 407, and 667 and 793, but transgressed the upper limit between cases 461 and 478. The risk-adjusted cumulative sum curve never transgressed the upper control limit, signifying that cases preceding and including 461 to 478 were at an increased predicted risk. Furthermore, if the risk-adjusted cumulative sum curve was reset to zero whenever a control limit was reached, it still signaled a decrease in adverse outcome at 166, 653, and 782 cases. Risk-adjusted cumulative sum techniques provide incremental advantages over non-risk-adjusted methods by not signaling a decrement in performance when preoperative patient risk is high.
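    A minimal sketch of a Steiner-style risk-adjusted CUSUM for a binary adverse outcome; the predicted risks, outcomes, and control limit h below are invented. The max(0, ·) floor gives the usual one-sided chart; resetting the curve at the control limit, as the abstract describes, would be an additional step.

```python
# Sketch: risk-adjusted CUSUM of log-likelihood-ratio scores.
import numpy as np

def risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0):
    """One-sided CUSUM tuned to detect a shift of `odds_ratio` in the odds
    of an adverse outcome, given each patient's predicted risk."""
    s, path = 0.0, []
    for y, p in zip(outcomes, risks):
        if y:   # adverse outcome observed
            w = np.log(odds_ratio / (1 - p + odds_ratio * p))
        else:   # no adverse outcome
            w = np.log(1 / (1 - p + odds_ratio * p))
        s = max(0.0, s + w)        # floored at zero (one-sided chart)
        path.append(s)
    return np.array(path)

rng = np.random.default_rng(11)
risks = rng.beta(2, 12, 793)                # predicted risks, mean ~0.14
outcomes = rng.binomial(1, risks)           # simulated outcomes at baseline risk
path = risk_adjusted_cusum(outcomes, risks)
print("max CUSUM:", path.max().round(2), "| signal above h=4.5:",
      bool((path > 4.5).any()))
```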

  14. A Fast and Effective Block Adjustment Method with Big Data

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-02-01

    To deal with multi-source, complex and massive data in photogrammetry, and to address the high memory requirement and low computational efficiency of the irregular normal equation caused by randomly aligned, large-scale datasets, we introduce the preconditioned conjugate gradient method combined with an inexact Newton method to solve the normal equation, which lacks strip characteristics because of the randomly aligned images. We also use an effective sparse matrix compression format to compress the big normal matrix, and a brand new bundle adjustment workflow is developed. Our method avoids the direct inversion of the big normal matrix, and the memory requirement of the normal matrix is decreased by the proposed sparse matrix compression format. Combining all these techniques, the proposed method can not only decrease the memory requirement of the normal matrix but also largely improve the efficiency of bundle adjustment while maintaining the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 15 minutes while achieving sub-pixel accuracy.
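    A toy sketch of the core numerical idea, solving sparse normal equations J^T J dx = -J^T r with a Jacobi-preconditioned conjugate gradient instead of inverting the normal matrix; the dimensions, sparsity, and damping term are invented for illustration.

```python
# Sketch: preconditioned conjugate gradient on toy normal equations.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
J = sp.random(500, 120, density=0.02, random_state=0, format="csr")  # sparse Jacobian
r = rng.normal(size=500)                                             # residual vector

N = (J.T @ J).tocsr() + 1e-6 * sp.eye(120)   # normal matrix, mildly damped
b = -(J.T @ r)

# Jacobi (diagonal) preconditioner wrapped as a LinearOperator
d = N.diagonal()
M = LinearOperator(N.shape, matvec=lambda v: v / d)

dx, info = cg(N, b, M=M)                     # never forms N^{-1} explicitly
print("converged" if info == 0 else f"cg info={info}",
      "| residual norm:", np.linalg.norm(N @ dx - b))
```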

  15. Performance comparison of resistance-trained subjects by different methods of adjusting for body mass. DOI: http://dx.doi.org/10.5007/1980-0037.2012v14n3p313

    Directory of Open Access Journals (Sweden)

    Wladymir Külkamp

    2012-05-01

    The aim of this study was to compare the performance (1RM) of resistance-trained subjects, using different methods of adjusting for body mass (BM): ratio standard, theoretical allometric exponent (0.67), and specific allometric exponents. The study included 11 male and 11 female healthy non-athletes (mean age = 22 years) engaged in regular resistance training for at least 6 months. Bench press (BP), 45° leg press (LP) and arm curl (AC) exercises were performed, and the participants were ranked (in descending order) according to each method. The specific allometric exponents for each exercise were: for men – BP (0.73), LP (0.35), and AC (0.71); and for women – BP (1.22), LP (1.02), and AC (0.85). The Kruskal-Wallis test revealed no differences between the rankings. However, visual inspection indicated that the participants were often classified differently in relation to performance by the methods used. Furthermore, no adjusted strength score was equal to the absolute strength values (1RM). The results suggest that there is a range of values in which the differences between exponents do not reflect different rankings (below 0.07 points) and a range in which rankings can be fundamentally different (above 0.14 points). This may be important in the long-term selection of universally accepted allometric exponents, considering the range of values found in different studies. The standardization of exponents may allow the use of allometry as an additional tool in the prescription of resistance training.
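    A minimal sketch of the three body-mass adjustments compared: the ratio standard (b = 1), the theoretical allometric exponent (b = 0.67), and an exercise-specific exponent such as 0.73 for men's bench press; the lifter data are invented.

```python
# Sketch: allometric adjustment of 1RM strength for body mass.
def adjusted_score(one_rm_kg: float, body_mass_kg: float, b: float) -> float:
    """Allometrically adjusted strength: 1RM / BM^b (b = 1 is the ratio standard)."""
    return one_rm_kg / body_mass_kg ** b

lifters = {"A": (110.0, 70.0), "B": (130.0, 95.0)}   # name: (bench 1RM, body mass)
for name, (rm, bm) in lifters.items():
    print(name,
          f"ratio={adjusted_score(rm, bm, 1.0):.2f}",
          f"b=0.67: {adjusted_score(rm, bm, 0.67):.2f}",
          f"b=0.73: {adjusted_score(rm, bm, 0.73):.2f}")
# Rankings can flip between exponents, which is the paper's central point.
```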

  16. Comparison between clinical significance of height-adjusted and weight-adjusted appendicular skeletal muscle mass.

    Science.gov (United States)

    Furushima, Taishi; Miyachi, Motohiko; Iemitsu, Motoyuki; Murakami, Haruka; Kawano, Hiroshi; Gando, Yuko; Kawakami, Ryoko; Sanada, Kiyoshi

    2017-02-13

    This study aimed to compare relationships between height- or weight-adjusted appendicular skeletal muscle mass (ASM/Ht² or ASM/Wt) and risk factors for cardiometabolic diseases or osteoporosis in Japanese men and women. Subjects were healthy Japanese men (n = 583) and women (n = 1218). The study population included a young group (310 men and 357 women; age, 18-40 years) and a middle-aged and elderly group (273 men and 861 women; age, ≥41 years). ASM was measured by dual-energy X-ray absorptiometry. The reference values for class 1 and 2 sarcopenia in each sex were defined as values one and two standard deviations below the sex-specific means of the young group, respectively. The reference values for class 1 and 2 sarcopenia defined by ASM/Ht² were 7.77 and 6.89 kg/m² in men and 6.06 and 5.31 kg/m² in women, respectively. The reference values for ASM/Wt were 35.0 and 32.0% in men and 29.6 and 26.4% in women, respectively. In both men and women, ASM/Wt was negatively correlated with triglycerides (TG) and positively correlated with serum high-density lipoprotein cholesterol (HDL-C), but these associations were not found for height-adjusted ASM. In women, TG, systolic blood pressure, and diastolic blood pressure in sarcopenia defined by ASM/Wt were significantly higher than those in normal subjects, but these associations were not found in sarcopenia defined by ASM/Ht². Whole-body and regional bone mineral density in sarcopenia defined by ASM/Ht² were significantly lower than those in normal subjects, but these associations were not found in sarcopenia defined by ASM/Wt. The weight-adjusted definition was able to identify cardiometabolic risk factors such as TG and HDL-C, while the height-adjusted definition could identify risk factors for osteoporosis.

  17. Adjusting the general growth balance method for migration

    OpenAIRE

    Hill, Kenneth; Queiroz, Bernardo

    2010-01-01

    Death distribution methods, proposed for estimating death registration coverage by comparison with census age distributions, assume no net migration. This assumption makes it problematic to apply these methods to sub-national and national populations affected by substantial net migration. In this paper, we propose and explore a two-step process in which the Growth Balance Equation is first used to estimate net migration rates, using a model of age-specific migration, and then it is used to compare the obs...

  18. Magnetic field adjustment structure and method for a tapered wiggler

    Science.gov (United States)

    Halbach, Klaus

    1988-01-01

    An improved method and structure is disclosed for adjusting the magnetic field generated by a group of electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles which comprises providing more than one winding on at least some of the electromagnet poles; connecting one respective winding on each of several consecutive adjacent electromagnet poles to a first power supply, and the other respective winding on the electromagnet pole to a different power supply in staggered order; and independently adjusting one power supply to independently vary the current in one winding on each electromagnet pole in a group whereby the magnetic field strength of each of a group of electromagnet poles may be changed in smaller increments.

  19. Aggregation Methods in International Comparisons

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2001-01-01

    This paper reviews the progress that has been made over the past decade in understanding the nature of the various multilateral international comparison methods. Fifteen methods are discussed and subjected to a system of ten tests. In addition, attention is paid to recently developed

  20. Adjusting Teacher Salaries for the Cost of Living: The Effect on Salary Comparisons and Policy Conclusions

    Science.gov (United States)

    Stoddard, C.

    2005-01-01

    Teaching salaries are commonly adjusted for the cost of living, but this incorrectly accounts for welfare differences across states. Adjusting for area amenities and opportunities, however, produces more accurate salary comparisons. Amenities and opportunities can be measured by the wage premium other workers in a state face. The two methods…

  21. Remotely adjustable fishing jar and method for using same

    International Nuclear Information System (INIS)

    Wyatt, W.B.

    1992-01-01

    This patent describes a method for providing a jarring force to dislodge objects stuck in well bores. The method comprises: connecting a jarring tool between an operating string and an object in a well bore; selecting a jarring force to be applied to the object; setting the selected reference jarring force into a mechanical memory mechanism by progressively engaging a first latch body and a second latch body; retaining the reference jarring force in the mechanical memory mechanism during diminution of tensional force applied by the operating string; and initiating an upwardly directed impact force within the jarring tool by increasing tensional force on the operating string to a value greater than the tensional force corresponding to the selected jarring force. The patent also describes a remotely adjustable downhole fishing jar apparatus comprising: an operating mandrel; an impact release spring; a mechanical memory mechanism; and releasable latching means.

  22. Evaluation of trauma care using TRISS method: the role of adjusted misclassification rate and adjusted w-statistic

    Directory of Open Access Journals (Sweden)

    Bytyçi Cen I

    2009-01-01

    Background: Major trauma is a leading cause of death worldwide. Evaluation of trauma care using the Trauma Injury and Injury Severity Score (TRISS) method is focused on trauma outcome (deaths and survivors). For testing the TRISS method, the TRISS misclassification rate is used. Calculating the w-statistic, as a difference between observed and TRISS-expected survivors, we compare our trauma care results with the TRISS standard. Aim: The aim of this study is to analyze the interaction between the misclassification rate and the w-statistic and to adjust these parameters to be closer to the truth. Materials and methods: Analysis of the components of the TRISS misclassification rate and w-statistic and of actual trauma outcome. Results: The false negative (FN) component (by the TRISS method, unexpected deaths) has two parts: preventable (Pd) and non-preventable (nonPd) trauma deaths. Pd represents inappropriate trauma care of an institution; non-preventable trauma deaths represent errors in the TRISS method. Removing patients with preventable trauma deaths we get an adjusted misclassification rate: (FP + FN - Pd)/N, or (b + c - Pd)/N. Subtracting nonPd from the FN value in the w-statistic formula we get an adjusted w-statistic: [FP - (FN - nonPd)]/N, respectively (FP - Pd)/N, or (b - Pd)/N. Conclusion: Because the adjusted formulas clean the method from inappropriate trauma care, and clean trauma care from the method's error, the TRISS adjusted misclassification rate and adjusted w-statistic give more realistic results and may be used in research on trauma outcome.

  23. Evaluation of trauma care using TRISS method: the role of adjusted misclassification rate and adjusted w-statistic.

    Science.gov (United States)

    Llullaku, Sadik S; Hyseni, Nexhmi Sh; Bytyçi, Cen I; Rexhepi, Sylejman K

    2009-01-15

    Major trauma is a leading cause of death worldwide. Evaluation of trauma care using the Trauma Injury and Injury Severity Score (TRISS) method is focused on trauma outcome (deaths and survivors). For testing the TRISS method, the TRISS misclassification rate is used. Calculating the w-statistic, as a difference between observed and TRISS-expected survivors, we compare our trauma care results with the TRISS standard. The aim of this study is to analyze the interaction between the misclassification rate and the w-statistic and to adjust these parameters to be closer to the truth. Analysis of the components of the TRISS misclassification rate and w-statistic and of actual trauma outcome. The false negative (FN) component (by the TRISS method, unexpected deaths) has two parts: preventable (Pd) and non-preventable (nonPd) trauma deaths. Pd represents inappropriate trauma care of an institution; non-preventable trauma deaths represent errors in the TRISS method. Removing patients with preventable trauma deaths we get an adjusted misclassification rate: (FP + FN - Pd)/N, or (b + c - Pd)/N. Subtracting nonPd from the FN value in the w-statistic formula we get an adjusted w-statistic: [FP - (FN - nonPd)]/N, respectively (FP - Pd)/N, or (b - Pd)/N. Because the adjusted formulas clean the method from inappropriate trauma care, and clean trauma care from the method's error, the TRISS adjusted misclassification rate and adjusted w-statistic give more realistic results and may be used in research on trauma outcome.
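    A direct transcription of the two adjusted formulas into code, with hypothetical counts (FP, Pd, nonPd, N are invented; FN = Pd + nonPd by the abstract's decomposition).

```python
# Sketch: TRISS adjusted misclassification rate and adjusted w-statistic.
def adjusted_misclassification_rate(fp: int, fn: int, pd_: int, n: int) -> float:
    return (fp + fn - pd_) / n           # (b + c - Pd)/N

def adjusted_w_statistic(fp: int, fn: int, non_pd: int, n: int) -> float:
    return (fp - (fn - non_pd)) / n      # equals (FP - Pd)/N since FN = Pd + nonPd

fp, pd_, non_pd, n = 12, 5, 9, 500       # hypothetical counts
fn = pd_ + non_pd
print("adjusted misclassification rate:",
      adjusted_misclassification_rate(fp, fn, pd_, n))
print("adjusted w-statistic:", adjusted_w_statistic(fp, fn, non_pd, n))
```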

  24. Simple method for generating adjustable trains of picosecond electron bunches

    Directory of Open Access Journals (Sweden)

    P. Muggli

    2010-05-01

    A simple, passive method for producing an adjustable train of picosecond electron bunches is demonstrated. The key component of this method is an electron beam mask consisting of an array of parallel wires that selectively spoils the beam emittance. This mask is positioned in a high-magnetic-dispersion, low-beta-function region of the beam line. The incoming electron beam striking the mask has a time/energy correlation that corresponds to a time/position correlation at the mask location. The mask pattern is transformed into a time pattern, or train of bunches, when the dispersion is brought back to zero downstream of the mask. Results are presented from a proof-of-principle experiment demonstrating this novel technique, performed at the Brookhaven National Laboratory Accelerator Test Facility. This technique allows for easy tailoring of the bunch train for a particular application, including varying the bunch width and spacing, and enabling the generation of a trailing witness bunch.

  25. An Adjusted Probability Method for the Identification of Sociometric Status in Classrooms

    Directory of Open Access Journals (Sweden)

    Francisco J. García Bacete

    2017-10-01

    Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of each sociometric group, the sources of discrepant classifications between methods, the behavioral profiles of discrepant and consistent cases between methods, and age differences. Method: We compared the GB adjusted probability method with the standard score model proposed by Coie and Dodge (CD) and the probability score model proposed by Newcomb and Bukowski (NB). The GB method is an adaptation of the NB method: cutoff scores are derived from the distribution of raw liked-most and liked-least scores in each classroom instead of using fixed and absolute scores as the NB method does. The criteria for neglected status are also modified by the GB method. Participants were 569 children (45% girls) from 23 elementary school classrooms (13 Grades 1-2, 10 Grades 5-6). Results: We found agreement as well as differences between the three methods. The CD method yielded discrepancies in the classifications because of its dependence on z-scores and composite dimensions. The NB method was less optimal in the validation of the behavioral characteristics of the sociometric groups, because of its fixed cutoffs for identifying preferred, rejected, and controversial children, and because it does not differentiate between positive and negative nominations for neglected children. The GB method addressed some of the limitations of the other two methods. It improved the classification of neglected students, as well as of discrepant cases of the preferred, rejected, and controversial groups. Agreement between methods was higher with the oldest children. Conclusion: GB is a valid sociometric method, as evidenced by the behavior profiles of the sociometric status groups identified with this method.

  26. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Methods for geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test-fields with a well-known pattern, which are observed from different directions. The parameters of a distortion model are calculated using bundle-block-adjustment algorithms. This method works well for short focal lengths, but is essentially more problematic to use with large focal lengths, which would require very large test-fields and surrounding space. To overcome this problem, there is another common method for calibration used in remote sensing. It employs measurements using a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of a well-known pattern. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis. The resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the gained distortion parameters as fixed input into the calibration algorithms and only adjusting the exterior orientation. The RMS (root mean square) of the remaining image coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods. Nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.

  27. LSL: a logarithmic least-squares adjustment method

    International Nuclear Information System (INIS)

    Stallmann, F.W.

    1982-01-01

    To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding was constructed some time ago and tentatively named LSL.

  28. Modified adjustable suture hang-back recession: Description of technique and comparison with conventional adjustable hang-back recession

    Directory of Open Access Journals (Sweden)

    Siddharth Agrawal

    2017-01-01

    Purpose: This study aims to describe and compare modified hang-back recession with the conventional hang-back recession in large-angle comitant exotropia (XT). Methods: A prospective, interventional, double-blinded, randomized study on adult patients (>18 years) undergoing single-eye recession-resection for large-angle (>30 prism diopters) constant comitant XT was conducted between January 2011 and December 2015. Patients in Group A underwent modified hang-back lateral rectus recession with an adjustable knot, while those in Group B underwent conventional hang-back recession with an adjustable knot. Outcome parameters studied were readjustment rate, change in deviation at 6 weeks, complications, and need for resurgery at 6 months. Results: The groups were comparable in terms of age and preoperative deviation. The patients with the modified hang-back (Group A) fared significantly better (P < 0.05) than those with conventional hang-back (Group B) in terms of lesser need for adjustment, greater correction in deviation at 6 weeks, and lesser need for resurgery at 6 months. Conclusion: This modification offers several advantages, significantly reduces the resurgery requirement, and has no added complications.

  29. A system and method for adjusting and presenting stereoscopic content

    DEFF Research Database (Denmark)

    2013-01-01

    on the basis of one or more vision-specific parameters (θM, θmax, θmin, Δθ) indicating abnormal vision for the user. In this way, presenting stereoscopic content is enabled that is adjusted specifically to the given person. This may e.g. be used for training purposes or for improved

  30. Case-mix adjustment and the comparison of community health center performance on patient experience measures.

    Science.gov (United States)

    Johnson, M Laura; Rodriguez, Hector P; Solorio, M Rosa

    2010-06-01

    To assess the effect of case-mix adjustment on community health center (CHC) performance on patient experience measures. A Medicaid-managed care plan in Washington State collected patient survey data from 33 CHCs over three fiscal quarters during 2007-2008. The survey included three composite patient experience measures (6-month reports) and two overall ratings of care. The analytic sample includes 2,247 adult patients and 2,859 adults reporting for child patients. We compared the relative importance of patient case-mix adjusters by calculating each adjuster's predictive power and variability across CHCs. We then evaluated the impact of case-mix adjustment on the relative ranking of CHCs. Important case-mix adjusters included adult self-reported health status or parent-reported child health status, adult age, and educational attainment. The effects of case-mix adjustment on patient reports and ratings were different in the adult and child samples. Adjusting for race/ethnicity and language had a greater impact on parent reports than adult reports, but it impacted ratings similarly across the samples. The impact of adjustment on composites and ratings was modest, but it affected the relative ranking of CHCs. To ensure equitable comparison of CHC performance on patient experience measures, reports and ratings should be adjusted for adult self-reported health status or parent-reported child health status, adult age, education, race/ethnicity, and survey language. Because of the differential impact of case-mix adjusters for child and adult surveys, initiatives should consider measuring and reporting adult and child scores separately.

  31. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Directory of Open Access Journals (Sweden)

    Hirdes John P

    2005-01-01

    Background: There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods: A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results: The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions: Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did

  32. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Science.gov (United States)

    Dalby, Dawn M; Hirdes, John P; Fries, Brant E

    2005-01-01

    Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did substantially affect the

  33. Inter-provider comparison of patient-reported outcomes: developing an adjustment to account for differences in patient case mix.

    Science.gov (United States)

    Nuttall, David; Parkin, David; Devlin, Nancy

    2015-01-01

    This paper describes the development of a methodology for the case-mix adjustment of patient-reported outcome measures (PROMs) data permitting the comparison of outcomes between providers on a like-for-like basis. Statistical models that take account of provider-specific effects form the basis of the proposed case-mix adjustment methodology. Indirect standardisation provides a transparent means of case-mix adjusting the PROMs data, which are updated on a monthly basis. Recently published PROMs data for patients undergoing unilateral knee replacement are used to estimate empirical models and to demonstrate the application of the proposed case-mix adjustment methodology in practice. The results are illustrative and are used to highlight a number of theoretical and empirical issues that warrant further exploration. For example, because of differences between PROMs instruments, case-mix adjustment methodologies may require instrument-specific approaches. A number of key assumptions are made in estimating the empirical models, which could be open to challenge. The covariates of post-operative health status could be expanded, and alternative econometric methods could be employed.

  34. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by casemix differences, and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix, which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk-category-specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix-standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
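    A hedged sketch of the direct risk standardisation (DRS) idea as described: estimate each admission's risk from a common casemix model, bin admissions into risk categories, and take a weighted sum of each centre's category-specific event rates; the data, bin edges, and reference weights below are all invented.

```python
# Sketch: direct risk standardisation over risk categories.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 20_000
df = pd.DataFrame({
    "centre": rng.integers(0, 5, n),
    "risk": rng.beta(2, 20, n),            # predicted risk from the casemix model
})
# centre 0 given slightly elevated outcomes for illustration
df["event"] = rng.binomial(1, df["risk"] * (1 + 0.1 * (df["centre"] == 0)))

df["risk_cat"] = pd.cut(df["risk"], bins=[0, 0.05, 0.10, 0.20, 1.0])
weights = df["risk_cat"].value_counts(normalize=True)   # reference mix: whole dataset

for c, grp in df.groupby("centre"):
    cat_rates = grp.groupby("risk_cat", observed=False)["event"].mean()
    drs = float((cat_rates * weights).sum())            # weighted sum of category rates
    print(f"centre {c}: crude={grp['event'].mean():.3f}  DRS={drs:.3f}")
```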

  35. Optimum adjusting method of fuel in a reactor

    International Nuclear Information System (INIS)

    Otsuji, Niro; Shirakawa, Toshihisa; Toyoshi, Isamu; Tatemichi, Shin-ichiro; Mukai, Hideyuki.

    1976-01-01

    Object: To effectively select an intermediate pattern of control rods and thereby shorten the time required to adjust the fuel. Structure: Control rods are divided into several regions, in a concentric and circular fashion or the like, within the core. A control rod position that satisfies a thermal limit value at the maximum power level is preset as a target position by a three-dimensional nuclear-hydrothermal computing code or the like. Next, an intermediate pattern of the control rods in each region is determined on the basis of the target position, and while judging the operational condition, a part of the fuel rods is maintained for a given period of time at a level above the power density of the target position, with the power increased within the range that does not produce interaction between pellet and cladding material (PCI), so that this power density may be learned. Thereafter, the power is rapidly decreased. A similar operation may be applied to the other fuel rods, after which the control rods may be set to the target position to obtain the maximum power level. (Ikeda, J.)

  36. Comparison of dysfunctional attitudes and social adjustment among infertile employed and unemployed women in Iran.

    Science.gov (United States)

    Fatemi, Azadeh S; Younesi, Seyed Jalal; Azkhosh, Manouchehr; Askari, Ali

    2010-04-01

    This study aims to compare dysfunctional attitudes and social adjustment in infertile employed and unemployed females. Due to the stresses of infertility, infertile females are faced with a variety of sexual and psychological problems, as well as dysfunctional attitudes that can lead to depression. Moreover, infertility problems provoke women into maladjustment and inadvertent corruption of relationships. In this regard, our goal is to consider the effects of employment in conjunction with education on dysfunctional attitudes and social adjustment among infertile women in Iran. In this work, we employed the survey method. We recruited 240 infertile women, utilizing the cluster random sampling method. These women filled out the Dysfunctional Attitudes Scale and the social adjustment part of the California Test of Personality. Next, multivariate analysis of variance was performed to test the relationship of employment status and education with dysfunctional attitudes and social adjustment. Our results indicated that dysfunctional attitudes were far more prevalent in infertile unemployed women than in infertile employed women. Also, social adjustment was better in infertile employed women than in infertile unemployed women. It was shown that education level alone does not have a significant effect on dysfunctional attitudes and social adjustment. However, we demonstrated that the employment status of infertile women in conjunction with their education level significantly affects the two dimensions of dysfunctional attitudes (relationships, entitlements) and has an insignificant effect on social adjustment. It was revealed that among employed infertile women in Iran, the higher the education level, the less dysfunctional were attitudes in relationships and entitlements, whereas among unemployed infertile women, those with a college degree had the least and those with master's or higher degrees had the most dysfunctional attitudes in terms of relationships and entitlements.

  37. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: (1) The seasonal and trend items of the data series are forecasted separately. (2) The seasonal item in the data series is verified by Kendall τ correlation testing. (3) Different regression models are applied to the trend item forecasting. (4) We examine the superiority of the combined models by quartile value comparison. (5) A paired-sample T test is utilized to confirm the superiority of the combined models. Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from days that are similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve the forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal item and trend item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to the trend item forecasting. This superior performance of the separate forecasting technique is further confirmed by paired-sample T tests.
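    An illustrative sketch (not the paper's exact SEAM implementation) of the separate-forecasting idea: exponentially smoothed weekday indices remove the seasonal item, a simple regression fits the trend item, and the forecast recombines the two; the series and smoothing constant are invented.

```python
# Sketch: seasonal exponential adjustment + trend regression for daily load.
import numpy as np

rng = np.random.default_rng(4)
days = 8 * 7
t = np.arange(days)
season = np.tile([0.9, 1.0, 1.05, 1.1, 1.15, 0.85, 0.8], days // 7)
load = (500 + 2.0 * t) * season * rng.normal(1, 0.02, days)

# seasonal item: weekday indices, exponentially smoothed week over week
alpha, idx = 0.3, np.ones(7)
weekly_mean = load.reshape(-1, 7).mean(axis=1, keepdims=True)
for week in load.reshape(-1, 7) / weekly_mean:
    idx = alpha * week + (1 - alpha) * idx          # exponential adjustment
deseason = load / np.tile(idx, days // 7)

# trend item: ordinary least-squares line, forecast 7 days ahead
a, b = np.polyfit(t, deseason, 1)
t_next = np.arange(days, days + 7)
forecast = (a * t_next + b) * idx                   # re-apply the seasonal item
print(np.round(forecast, 1))
```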

  38. Estimating the subjective value of future rewards: comparison of adjusting-amount and adjusting-delay procedures.

    Science.gov (United States)

    Holt, Daniel D; Green, Leonard; Myerson, Joel

    2012-07-01

    The present study examined whether equivalent discounting of delayed rewards is observed with different experimental procedures. If the underlying decision-making process is the same, then similar patterns of results should be observed regardless of procedure, and similar estimates of the subjective value of future rewards (i.e., indifference points) should be obtained. Two experiments compared discounting on three types of procedure: adjusting-delay (AD), adjusting-immediate-amount (AIA), and adjusting-delayed-amount (ADA). For the two procedures for which discounting functions can be established (i.e., AD and AIA), a hyperboloid provided good fits to the data at both the group and individual levels, and individuals' discounting on one procedure tended to be correlated with their discounting on the other. Notably, the AIA procedure produced the more consistent estimates of the degree of discounting, and in particular, discounting on the AIA procedure was unaffected by the order in which choices were presented. Regardless of which of the three procedures was used, however, similar patterns of results were obtained: Participants systematically discounted the value of delayed rewards, and robust magnitude effects were observed. Although each procedure may have its own advantages and disadvantages, use of all three types of procedure in the present study provided converging evidence for common decision-making processes underlying the discounting of delayed rewards.
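    A sketch of fitting the hyperboloid discounting function V = A/(1 + kD)^s to indifference points such as those produced by an adjusting-immediate-amount procedure; the data points below are invented.

```python
# Sketch: fitting a hyperboloid discounting function to indifference points.
import numpy as np
from scipy.optimize import curve_fit

def hyperboloid(delay, k, s, amount=100.0):
    """Subjective value of a delayed reward of size `amount`."""
    return amount / (1.0 + k * delay) ** s

delays = np.array([7, 30, 90, 180, 365, 730])       # days until reward
indiff = np.array([88, 75, 60, 48, 35, 24])         # adjusted immediate amounts

(k, s), _ = curve_fit(hyperboloid, delays, indiff, p0=[0.01, 1.0],
                      bounds=([1e-6, 0.1], [1.0, 3.0]))
print(f"k={k:.4f}, s={s:.2f}")
print("predicted:", np.round(hyperboloid(delays, k, s), 1))
```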

  39. How does social comparison within a self-help group influence adjustment to chronic illness? A longitudinal study.

    Science.gov (United States)

    Dibb, Bridget; Yardley, Lucy

    2006-09-01

    Despite the growing popularity of self-help groups for people with chronic illness, there has been surprisingly little research into how these may support adjustment to illness. This study investigated the role that social comparison, occurring within a self-help group, may play in adjustment to chronic illness. A model of adjustment based on control process theory and response shift theory was tested to determine whether social comparisons predicted adjustment after controlling for the catalyst for adjustment (disease severity) and antecedents (demographic and psychological factors). A sample of 301 people with Ménière's disease who were members of the Ménière's Society UK completed questionnaires at baseline and 10-month follow-up assessing adjustment, defined for this study as functional and goal-oriented quality of life. At baseline, they also completed measures of the predictor variables i.e. the antecedents (age, sex, living circumstances, duration of self-help group membership, self-esteem, optimism and perceived control over illness), the catalyst (severity of vertigo, tinnitus, hearing loss and fullness in the ear) and mechanisms of social comparison within the self-help group. The social comparison variables included the extent to which self-help group resources were used, and whether reading about other members' experiences induced positive or negative feelings. Cross-sectional results showed that positive social comparison was indeed associated with better adjustment after controlling for all the other baseline variables, while negative social comparison was associated with worse adjustment. However, greater levels of social comparison at baseline were associated with a deteriorating quality of life over the 10-month follow-up period. Alternative explanations for these findings are discussed.

  20. A Review on Methods of Risk Adjustment and their Use in Integrated Healthcare Systems

    Science.gov (United States)

    Juhnke, Christin; Bethge, Susanne

    2016-01-01

    Introduction: Effective risk adjustment is given increasing weight against the background of competitive health insurance systems and viable healthcare systems. The objective of this review was to obtain an overview of existing models of risk adjustment, as well as of crucial aspects of risk adjustment. Moreover, the predictive performance of selected methods in international healthcare systems was analysed. Theory and methods: A comprehensive, systematic literature review on methods of risk adjustment was conducted in terms of an encompassing, interdisciplinary examination of the related disciplines. Results: In general, several distinctions can be made: in terms of risk horizons, in terms of risk factors, or in terms of the combination of indicators included. Within these, a further differentiation into three levels seems reasonable: methods based on mortality risks, methods based on morbidity risks, and those based on information on (self-reported) health status. Conclusions and discussion: The final examination of the different methods of risk adjustment showed that the methodology used to adjust risks varies. The models differ greatly in terms of their included morbidity indicators. The findings of this review can be used in the evaluation of integrated healthcare delivery systems and can be integrated into quality- and patient-oriented reimbursement of care providers in the design of healthcare contracts. PMID:28316544

  1. Comparison of time adjustment clauses between DZ3910, AS4000 and STCC

    Directory of Open Access Journals (Sweden)

    David Finnie

    2013-03-01

    Full Text Available This article examines time adjustment clauses as they appear in the standard terms of construction contracts. DZ3910, AS4000 and STCC were compared on the basis of how risks are allocated, how this may impact on the contractor's pricing, and the ease of understanding of each clause. STCC was found to be the most easily interpreted contract, followed by AS4000 and then DZ3910. These assessments were based on the following: (a) whether each contract contains words with multiple meanings, (b) the number of words used per sentence, (c) the amount of internal cross-referencing, and (d) the clarity of the contract structure. The allowable pre-conditions for the contractor to claim a time adjustment are similar for all three contracts, and none of them expressly states which party is to bear the risk of buildability, or addresses the risk of a designer's disclaimer clause. All of the contracts adopt the principle of contra proferentem, which means that the employer bears the risk of variance if there are any ambiguities in the design documentation. Due to their similar risk allocation, all of the contracts provide the employer with a similar degree of price surety. AS4000 is the only contract to contain a stringent time-bar clause limiting a contractor's time adjustment claim. STCC requires the contractor to apply 'immediately' and DZ3910 provides a time-bar of 20 working days or as soon as practicable. None of the contracts clarifies whether their timing requirements take precedence over the prevention principle, or over any other ground for claiming a time adjustment. The effect of DZ3910's pre-notification clause 5.19.3 is discussed, and an alternative contents structure is recommended for DZ3910, using a project management method.

  2. HIV quality report cards: impact of case-mix adjustment and statistical methods.

    Science.gov (United States)

    Ohl, Michael E; Richardson, Kelly K; Goto, Michihiko; Vaughan-Sarrazin, Mary; Schweizer, Marin L; Perencevich, Eli N

    2014-10-15

    There will be increasing pressure to publicly report and rank the performance of healthcare systems on human immunodeficiency virus (HIV) quality measures. To inform discussion of public reporting, we evaluated the influence of case-mix adjustment when ranking individual care systems on the viral control quality measure. We used data from the Veterans Health Administration (VHA) HIV Clinical Case Registry and administrative databases to estimate case-mix adjusted viral control for 91 local systems caring for 12 368 patients. We compared results using 2 adjustment methods, the observed-to-expected estimator and the risk-standardized ratio. Overall, 10 913 patients (88.2%) achieved viral control (viral load ≤400 copies/mL). Prior to case-mix adjustment, system-level viral control ranged from 51% to 100%. Seventeen (19%) systems were labeled as low outliers (performance significantly below the overall mean) and 11 (12%) as high outliers. Adjustment for case mix (patient demographics, comorbidity, CD4 nadir, time on therapy, and income from VHA administrative databases) reduced the number of low outliers by approximately one-third, but results differed by method. The adjustment model had moderate discrimination (c statistic = 0.66), suggesting potential for unadjusted risk when using administrative data to measure case mix. Case-mix adjustment affects rankings of care systems on the viral control quality measure. Given the sensitivity of rankings to the selection of case-mix adjustment methods, and the potential for unadjusted risk when using variables limited to current administrative databases, the HIV care community should explore optimal methods for case-mix adjustment before moving forward with public reporting. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
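
    As a rough illustration of the simpler of the two methods compared here, the sketch below computes a per-system observed-to-expected (O/E) ratio from case-mix model predictions. The patient-level expected probabilities are assumed inputs, and the hierarchical shrinkage used by a full risk-standardized ratio is not reproduced.

```python
import numpy as np

def observed_to_expected(achieved, p_expected):
    """O/E estimator: observed count of patients with viral control over
    the count expected from the case-mix model for this care system."""
    return achieved.sum() / p_expected.sum()

# hypothetical system: 40 of 50 patients controlled, model expects ~0.88 each
achieved = np.array([1] * 40 + [0] * 10)
p_expected = np.full(50, 0.88)
oe = observed_to_expected(achieved, p_expected)
adjusted_rate = oe * 0.882   # rescaled by the overall mean (88.2% here)
```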

  3. Comparison of Social Adjustment in Blind and Normal Children in Primary Schools in Mashhad

    Directory of Open Access Journals (Sweden)

    AA ModdaresMoghaddam

    2014-05-01

    Methods: This is a descriptive, analytical study in which 270 blind and sighted students of primary schools in Mashhad participated in the academic years 2012-13. The blind students were chosen by census and the sighted students through stratified random sampling from normal schools. For data collection, a standard social adjustment questionnaire was used. It was developed in America in 1974 by Lambert, Windmiller, Cole & Figueroa, and was translated in 1992 by Dr. Shahny Yeylagh for use with children of 7 to 13 years. It was normed on 1500 boy and girl students of the first to fifth grade in elementary schools of Ahvaz. This test consists of 11 subscales, 38 sub-categories and 260 items. For data analysis, descriptive and inferential statistics such as the t-test were used (P < 0.05). Results: The mean social adjustment scores showed no statistically significant difference between the sighted and the blind students (p=0.8). The mean social adjustment of blind girls and boys was also not significantly different (p=0.1), but incompatibility was found more often in boys than in girls. Conclusion: Regarding the results, social incompatibility was higher in the blind girl students than in the sighted girl students. This incompatibility was also higher in boys than in girls, thereby requiring scientific and coherent planning for them.

  4. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment

    Science.gov (United States)

    O’Brien, Katie M.; Upson, Kristen; Cook, Nancy R.; Weinberg, Clarice R.

    2015-01-01

    Background Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. Objectives We compared adjustment methods, including novel approaches, using simulated case–control data. Methods Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. Results For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. Conclusions To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals. Citation O’Brien KM, Upson K, Cook NR, Weinberg CR. 2016. Environmental chemicals in urine and blood: improving methods for creatinine and lipid adjustment. Environ Health Perspect 124:220–227; http://dx.doi.org/10.1289/ehp.1509693 PMID:26219104
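
    A minimal sketch of the covariate-adjusted standardization idea: regress creatinine on its known predictors, then scale the measured biomarker by the ratio of predicted to observed creatinine. The log-linear model form and variable names below are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def covariate_adjusted_standardization(biomarker, creatinine, covariates):
    """Standardize a urinary biomarker by observed/predicted creatinine,
    where predicted creatinine comes from a regression on covariates
    (e.g., age, sex, BMI). Creatinine should additionally be included
    as a covariate in the final outcome regression, per the method."""
    X = np.column_stack([np.ones(len(covariates)), covariates])
    beta, *_ = np.linalg.lstsq(X, np.log(creatinine), rcond=None)
    predicted = np.exp(X @ beta)
    return biomarker * predicted / creatinine
```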

  5. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer

    Directory of Open Access Journals (Sweden)

    Xiangqing Huang

    2017-10-01

    Full Text Available A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Adjusting and matching the acceleration-to-current transfer functions of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common-mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current can be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function is then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factor ranging from 33% smaller to 100% larger are verified experimentally by adjusting different external coefficients. The static noise of the accelerometer is compared under conditions with and without the injected current, and the experimental results show no change in the current noise level, which further confirms the validity of the presented method.

  6. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer.

    Science.gov (United States)

    Huang, Xiangqing; Deng, Zhongguang; Xie, Yafei; Li, Zhu; Fan, Ji; Tu, Liangcheng

    2017-10-27

    A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Adjusting and matching the acceleration-to-current transfer functions of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common-mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current can be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function is then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factor ranging from 33% smaller to 100% larger are verified experimentally by adjusting different external coefficients. The static noise of the accelerometer is compared under conditions with and without the injected current, and the experimental results show no change in the current noise level, which further confirms the validity of the presented method.
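
    The reported range (33% smaller to 100% larger) is consistent with a simple model in which the torque coil carries (1 + alpha) times the feedback current, so the effective acceleration-to-current scale factor becomes k0/(1 + alpha). This reading is an inference from the abstract, not the authors' stated formula.

```python
def adjusted_scale_factor(k0, alpha):
    """Effective scale factor when an external current alpha * i_fb is
    injected into the torque coil alongside the feedback current i_fb."""
    return k0 / (1.0 + alpha)

# alpha = 0.5 gives a 33% smaller factor; alpha = -0.5 doubles it
print(adjusted_scale_factor(1.0, 0.5), adjusted_scale_factor(1.0, -0.5))
```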

  7. Comparing treatment effects after adjustment with multivariable Cox proportional hazards regression and propensity score methods

    NARCIS (Netherlands)

    Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H

    PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort
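
    A hedged sketch of one common propensity-score approach of the kind compared here: estimate the PS with logistic regression, stratify on its quintiles, and fit a Cox model within strata. The library calls follow the public scikit-learn and lifelines APIs; the data-frame columns are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ps_stratified_cox(df, confounders):
    """Propensity-score quintile stratification followed by a stratified
    Cox model for the treatment effect on time to stroke."""
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[confounders], df["treated"])
    df = df.assign(ps=ps_model.predict_proba(df[confounders])[:, 1])
    df["ps_quintile"] = pd.qcut(df["ps"], 5, labels=False)
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "treated", "ps_quintile"]],
            duration_col="time", event_col="event", strata=["ps_quintile"])
    return cph.hazard_ratios_["treated"]
```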

  8. A method to adjust radiation dose-response relationships for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane Lindegaard; Vogelius, Ivan R

    2012-01-01

    Several clinical risk factors for radiation-induced toxicity have been identified in the literature. Here, we present a method to quantify the effect of clinical risk factors on radiation dose-response curves and apply the method to adjust the dose-response for radiation pneumonitis for patients
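
    One way to picture such an adjustment, sketched under assumed conventions rather than the authors' exact parameterization: in a logistic dose-response model, a clinical risk factor with odds ratio OR shifts the logit by log(OR), which is equivalent to shifting the curve along the dose axis.

```python
import numpy as np

def ntcp(dose, d50=30.0, gamma50=1.0, log_or=0.0):
    """Logistic dose-response for toxicity; gamma50 is the normalized
    slope at D50. A risk factor with odds ratio OR adds log(OR) to the
    logit (parameter values here are illustrative)."""
    logit = -4.0 * gamma50 * (1.0 - dose / d50) + log_or
    return 1.0 / (1.0 + np.exp(-logit))

doses = np.linspace(0.0, 60.0, 7)
baseline = ntcp(doses)
with_risk_factor = ntcp(doses, log_or=np.log(2.0))  # OR = 2 (hypothetical)
```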

  9. Analysis of methods to determine the latency of online movement adjustments

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Brenner, E.; Smeets, J.B.J.

    2014-01-01

    When studying online movement adjustments, one of the interesting parameters is their latency. We set out to compare three different methods of determining the latency: the threshold, confidence interval, and extrapolation methods. We simulated sets of movements with different movement times and
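
    For orientation, a minimal sketch of the extrapolation method as commonly described (the window choice and signals are assumptions): fit a line to the rising phase of the difference in acceleration between perturbed and unperturbed movements and extrapolate back to its zero crossing.

```python
import numpy as np

def latency_by_extrapolation(t, accel_diff, fit_window):
    """Fit a straight line to accel_diff over fit_window (a (start, end)
    time pair on the rising phase) and return its zero crossing, taken
    as the latency of the online movement adjustment."""
    i0, i1 = np.searchsorted(t, fit_window)
    slope, intercept = np.polyfit(t[i0:i1], accel_diff[i0:i1], 1)
    return -intercept / slope
```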

  10. Comparison of Sentinel-2A and Landsat-8 Nadir BRDF Adjusted Reflectance (NBAR) over Southern Africa

    Science.gov (United States)

    Li, J.; Roy, D. P.; Zhang, H.

    2016-12-01

    The Landsat satellites have been providing moderate resolution imagery of the Earth's surface for over 40 years with continuity provided by the Landsat 8 and planned Landsat 9 missions. The European Space Agency Sentinel-2 satellite was successfully launched into a polar sun-synchronous orbit in 2015 and carries the Multi Spectral Instrument (MSI) that has Landsat-like bands and acquisition coverage. These new sensors acquire images at view angles ± 7.5° (Landsat) and ± 10.3° (Sentinel-2) from nadir that result in small directional effects in the surface reflectance. When data from adjoining paths, or from long time series are used, a model of the surface anisotropy is required to adjust observations to a uniform nadir view (primarily for visual consistency, vegetation monitoring, or detection of subtle surface changes). Recently a generalized approach was published that provides consistent Landsat view angle corrections to provide nadir BRDF-adjusted reflectance (NBAR). Because the BRDF shapes of different terrestrial surfaces are sufficiently similar over the narrow 15° Landsat field of view, a fixed global set of MODIS BRDF spectral model parameters was shown to be adequate for Landsat NBAR derivation with little sensitivity to the land cover type, condition, or surface disturbance. This poster demonstrates the application of this methodology to Sentinel-2 data over a west-east transect across southern Africa. The reflectance differences between adjacent overlapping paths in the forward and backward scatter directions are quantified for both before and after BRDF correction. Sentinel-2 and Landsat-8 reflectance and NBAR inter-comparison results considering different stages of cloud and saturation filtering, and filtering to reduce surface state differences caused by acquisition time differences, demonstrate the utility of the approach. The relevance and limitations of the corrections for providing consistent moderate resolution reflectance are discussed.
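
    The c-factor NBAR approach referenced here multiplies each observed reflectance by the ratio of the BRDF modeled at nadir to the BRDF modeled at the actual view geometry. The sketch below uses only the RossThick volumetric kernel and made-up coefficients; the full method also includes the LiSparse geometric kernel and the fixed global MODIS spectral coefficients.

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric kernel (angles in radians: solar zenith,
    view zenith, relative azimuth)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def nbar(reflectance, theta_s, theta_v, phi, f_iso=0.2, f_vol=0.1):
    """Adjust observed reflectance to nadir view with a c-factor;
    f_iso and f_vol are illustrative, not the MODIS global values."""
    c = ((f_iso + f_vol * ross_thick(theta_s, 0.0, 0.0))
         / (f_iso + f_vol * ross_thick(theta_s, theta_v, phi)))
    return c * reflectance
```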

  11. A Plant Control Technology Using Reinforcement Learning Method with Automatic Reward Adjustment

    Science.gov (United States)

    Eguchi, Toru; Sekiai, Takaaki; Yamada, Akihiro; Shimizu, Satoru; Fukai, Masayuki

    A control technology using Reinforcement Learning (RL) and a Radial Basis Function (RBF) Network has been developed to reduce the environmental load substances exhausted from power and industrial plants. This technology consists of a statistical model using an RBF Network, which estimates the characteristics of plants with respect to environmental load substances, and an RL agent, which learns the control logic for the plants using the statistical model. In this technology, an appropriate reward function must be designed and given to the agent immediately according to operating conditions and control goals, so that plants can be controlled flexibly. Therefore, we propose an automatic reward adjusting method of RL for plant control. This method adjusts the reward function automatically using information from the statistical model obtained in its learning process. In the simulations, it is confirmed that the proposed method can adjust the reward function adaptively for several test functions, and achieves robust control of the thermal power plant under changing operating conditions and control goals.

  12. The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping

    Directory of Open Access Journals (Sweden)

    Mhaidat F

    2016-04-01

    Full Text Available Fatin Mhaidat Department of Educational Psychology, Faculty of Educational Sciences, The Hashemite University, Zarqa, Jordan Abstract: This study aimed at identifying the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods that were used to cope with the problems. The sample was composed of 220 Syrian female students (seventh grade to first secondary grade) enrolled at government schools within the Zarqa Directorate who came to Jordan due to the war conditions in their home country. The study used a scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on the behavioral adjustment methods for dealing with the problem of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and that the positive adjustment methods they used outnumbered the negative ones. Keywords: adaptive problems, female teenage refugees, behavioral adjustment

  13. Adjust the method of the FMEA to the requirements of the aviation industry

    Directory of Open Access Journals (Sweden)

    Andrzej FELLNER

    2015-12-01

    Full Text Available The article presents a summary of current methods used in aviation and rail transport. It also contains a proposal to adjust the FMEA method to the latest requirements of the airline industry. The authors suggest tables for the indicators Zn, Pr and Dt necessary to implement the FMEA method of risk analysis, taking into account current achievements in aerospace and rail safety. Acceptable limits for the RPN number are also proposed, which allow threats to be classified.
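
    In FMEA the three indicator tables feed a Risk Priority Number, RPN = Zn x Pr x Dt, which is compared against an acceptability limit. A toy sketch follows; the limit value below is hypothetical, not the authors' proposed figure.

```python
def rpn(zn, pr, dt):
    """Risk Priority Number from severity (Zn), probability of
    occurrence (Pr) and detectability (Dt) indicator values."""
    return zn * pr * dt

def classify(value, limit=120):   # limit is illustrative only
    return "acceptable" if value <= limit else "requires mitigation"

print(classify(rpn(zn=8, pr=3, dt=4)))   # 96 -> acceptable under this limit
```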

  14. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined using some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better in solving APM.
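
    A compact sketch of the conventional permutation method that APM refines (benefit criteria assumed throughout): every ordering of alternatives is scored by net concordance, which is exactly what makes the exact method factorial in cost and motivates the TS/PSO heuristics.

```python
from itertools import permutations
import numpy as np

def permutation_method(scores, weights):
    """scores: m x n decision matrix (rows = alternatives, benefit
    criteria assumed); weights: length-n criterion weights. Returns the
    ordering with maximal net concordance."""
    scores = np.asarray(scores, float)
    weights = np.asarray(weights, float)
    m = scores.shape[0]
    best_perm, best_value = None, -np.inf
    for perm in permutations(range(m)):
        value = 0.0
        for a in range(m):
            for b in range(a + 1, m):
                i, j = perm[a], perm[b]          # i hypothesized above j
                agree = scores[i] >= scores[j]
                value += weights[agree].sum() - weights[~agree].sum()
        if value > best_value:
            best_perm, best_value = perm, value
    return best_perm, best_value
```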

  15. Adjustment method for embedded metrology engine in an EM773 series microcontroller.

    Science.gov (United States)

    Blazinšek, Iztok; Kotnik, Bojan; Chowdhury, Amor; Kačič, Zdravko

    2015-09-01

    This paper presents the problems of implementation and adjustment (calibration) of a metrology engine embedded in NXP's EM773 series microcontroller. The metrology engine is used in a smart metering application to collect data about energy utilization and is controlled with the use of metrology engine adjustment (calibration) parameters. The aim of this research is to develop a method which would enable operators to find and verify the optimum parameters to ensure the best possible accuracy. Properly adjusted (calibrated) metrology engines can then be used as a base for a variety of products used in smart and intelligent environments. This paper focuses on the problems encountered in the development, partial automatisation, implementation and verification of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    Full Text Available The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.
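
    A sketch of the adjusted empirical likelihood machinery for the simplest case, a single mean, following the pseudo-observation device of Chen, Variyath and Abraham (2008). The Sharpe ratio version replaces the single mean constraint with estimating equations in which the ratio is the parameter of interest and the remaining moments are profiled out; that profiling step is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def ael_statistic(x, mu):
    """-2 log adjusted empirical likelihood ratio for H0: mean = mu;
    compare against a chi-square(1) quantile."""
    z = np.asarray(x, float) - mu
    n = len(z)
    z = np.append(z, -max(1.0, np.log(n) / 2.0) * z.mean())  # pseudo-point
    # the Lagrange multiplier must keep 1 + lam * z_i > 0 for every i
    lo = -1.0 / z.max() + 1e-8
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(lambda lam: np.sum(z / (1.0 + lam * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))
```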

  17. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, December 2016

    International Nuclear Information System (INIS)

    Cabellos, Oscar; ); PELLONI, Sandro; Ivanov, Evgeny; Sobes, Vladimir; Fukushima, M.; Yokoyama, Kenji; Palmiotti, Giuseppe; Kodeli, Ivo

    2016-12-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the eighth Subgroup 39 meeting, held at the OECD NEA, Boulogne-Billancourt, France, on 1-2 December 2016. It comprises all the available presentations (slides) given by the participants: A - Presentations: Welcome and actions review (Oscar CABELLOS); B - Methods: - Detailed comparison of Progressive Incremental Adjustment (PIA) sequence results involving adjustments of spectral indices and coolant density effects on the basis of the SG33 benchmark (Sandro PELLONI); - ND assessment alternatives: Validation matrix vs XS adjustment (Evgeny IVANOV); - Implementation of Resonance Parameter Sensitivity Coefficients Calculation in CE TSUNAMI-3D (Vladimir SOBES); C - Experiment analysis, sensitivity calculations and benchmarks: - Benchmark tests of ENDF/B-VIII.0 beta 1 using sodium void reactivity worth of FCA-XXVII-1 assembly (M. FUKUSHIMA, Kenji YOKOYAMA); D - Adjustments: - Cross-section adjustment based on JENDL-4.0 using new experiments on the basis of the SG33 benchmark (Kenji YOKOYAMA); - Comparison of adjustment trends with the Cielo evaluation (Sandro PELLONI); - Expanded adjustment in support of CIELO initiative (Giuseppe PALMIOTTI); - First preliminary results of the adjustment exercise using ASPIS Fe88 and SNEAK-7A/7B k_eff and beta_eff benchmarks (Ivo KODELI); E - Future actions, deliverables: - Discussion on future of SG39 and possible new subgroup (Giuseppe PALMIOTTI); - WPEC sub-group proposal: Investigation of Covariance Data in

  18. Lipophilic versus hydrophilic statin therapy for heart failure: a protocol for an adjusted indirect comparison meta-analysis

    Science.gov (United States)

    2013-01-01

    Background Statins are known to reduce cardiovascular morbidity and mortality in primary and secondary prevention studies. Subsequently, a number of nonrandomised studies have shown statins improve clinical outcomes in patients with heart failure (HF). Small randomised controlled trials (RCT) also show improved cardiac function, reduced inflammation and mortality with statins in HF. However, the findings of two large RCTs do not support the evidence provided by previous studies and suggest statins lack beneficial effects in HF. Two meta-analyses have shown statins do not improve survival, whereas two others showed improved cardiac function and reduced inflammation in HF. It appears lipophilic statins produce better survival and other outcome benefits compared to hydrophilic statins. But the two types have not been compared in direct comparison trials in HF. Methods/design We will conduct a systematic review and meta-analysis of lipophilic and hydrophilic statin therapy in patients with HF. Our objectives are: 1. To determine the effects of lipophilic statins on (1) mortality, (2) hospitalisation for worsening HF, (3) cardiac function and (4) inflammation. 2. To determine the effects of hydrophilic statins on (1) mortality, (2) hospitalisation for worsening HF, (3) cardiac function and (4) inflammation. 3. To compare the efficacy of lipophilic and hydrophilic statins on HF outcomes with an adjusted indirect comparison meta-analysis. We will conduct an electronic search of databases for RCTs that evaluate statins in patients with HF. The reference lists of all identified studies will be reviewed. Two independent reviewers will conduct the search. The inclusion criteria include: 1. RCTs comparing statins with placebo or no statin in patients with symptomatic HF. 2. RCTs that employed the intention-to-treat (ITT) principle in data analysis. 3. Symptomatic HF patients of all aetiologies and on standard treatment. 4. Statin of any dose as intervention. 5. Placebo or no
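
    The planned adjusted indirect comparison presumably follows the standard Bucher approach: with lipophilic-statin-vs-placebo and hydrophilic-statin-vs-placebo estimates on the log scale, the indirect lipophilic-vs-hydrophilic effect is their difference, with variances added. A minimal sketch with hypothetical numbers:

```python
import numpy as np
from scipy.stats import norm

def bucher_indirect(log_hr_a, se_a, log_hr_b, se_b):
    """Adjusted indirect comparison of treatment A vs B through a
    common (placebo) comparator."""
    d = log_hr_a - log_hr_b
    se = np.hypot(se_a, se_b)          # variances add on the log scale
    ci = np.exp([d - 1.96 * se, d + 1.96 * se])
    p = 2.0 * (1.0 - norm.cdf(abs(d / se)))
    return np.exp(d), ci, p

hr, ci, p = bucher_indirect(np.log(0.80), 0.10, np.log(0.95), 0.08)
```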

  19. Method Based on Confidence Radius to Adjust the Location of Mobile Terminals

    DEFF Research Database (Denmark)

    García-Fernández, Juan Antonio; Jurado-Navas, Antonio; Fernández-Navarro, Mariano

    2017-01-01

    The present paper details a technique for smartly adjusting the position estimates of any user equipment given by different geolocation/positioning methods in a wireless radiofrequency communication network based on different strategies (observed time difference of arrival, angle of ar...

  20. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Directory of Open Access Journals (Sweden)

    L.-P. Wang

    2015-09-01

    Full Text Available Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban

  1. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Science.gov (United States)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system
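
    The mean field bias adjustment used here as a benchmark reduces to a single multiplicative factor per event; a minimal sketch follows (the singularity-sensitive Bayesian merging is substantially more involved and is not reproduced).

```python
import numpy as np

def mean_field_bias_adjust(radar_field, gauge_totals, radar_at_gauges):
    """Scale the whole radar field by the ratio of gauge accumulations
    to collocated radar accumulations (one bias factor per event)."""
    bias = gauge_totals.sum() / radar_at_gauges.sum()
    return bias * radar_field
```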

  2. The method and program system CABEI for adjusting consistency between natural element and its isotopes data

    Energy Technology Data Exchange (ETDEWEB)

    Tingjin, Liu; Zhengjun, Sun [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    To meet the requirements of nuclear engineering, especially for nuclear fusion reactors, the data in the major evaluated libraries are now given not only for the natural element but also for its isotopes. Inconsistency between element and isotope data is one of the main problems in present evaluated neutron libraries. The formulas for adjusting the data to satisfy simultaneously the two kinds of consistency relationships were derived by means of the least-squares method, and the program system CABEI was developed. This program was tested by calculating the Fe data in CENDL-2.1. The results show that the adjusted values satisfy the two kinds of consistency relationships.
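
    The core of such an adjustment can be phrased as a weighted least-squares projection onto the constraint that abundance-weighted isotope cross sections reproduce the element value. The sketch below shows that projection for one energy point; the weights and data are hypothetical, and CABEI's actual formulation handles both kinds of consistency relations simultaneously.

```python
import numpy as np

def adjust_for_consistency(sigma, weights, abundance):
    """Minimally adjust [element, isotope_1..isotope_n] cross sections,
    in a weighted least-squares sense, so that
    sigma_element = sum(abundance_i * sigma_isotope_i)."""
    a = np.concatenate(([-1.0], abundance))    # constraint: a @ sigma = 0
    w_inv = 1.0 / np.asarray(weights, float)   # larger weight = stiffer value
    residual = a @ sigma
    correction = w_inv * a * residual / (a @ (w_inv * a))
    return sigma - correction

sigma = np.array([2.50, 2.40, 2.70])           # element + two isotopes (barns)
adjusted = adjust_for_consistency(sigma, weights=[1.0, 4.0, 4.0],
                                  abundance=np.array([0.6, 0.4]))
```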

  3. IC layout adjustment method and tool for improving dielectric reliability at interconnects

    Energy Technology Data Exchange (ETDEWEB)

    Kahng, Andrew B.; Chan, Tuck Boon

    2018-03-20

    In this method for adjusting a layout used in making an integrated circuit, one or more interconnects in the layout that are susceptible to dielectric breakdown are selected. One or more selected interconnects are adjusted to increase via-to-wire spacing with respect to at least one via and one wire of the one or more selected interconnects. Preferably, the selecting analyzes signal patterns of interconnects, and estimates the stress ratio based on the state probability of routed signal nets in the layout. An annotated layout is provided that describes distances by which one or more via or wire segment edges are to be shifted. Adjustments can include thinning and shifting of wire segments, and rotation of vias.

  4. A novel method to adjust efficacy estimates for uptake of other active treatments in long-term clinical trials.

    Directory of Open Access Journals (Sweden)

    John Simes

    2010-01-01

    Full Text Available When rates of uptake of other drugs differ between treatment arms in long-term trials, the true benefit or harm of the treatment may be underestimated. Methods to allow for such contamination have often been limited by failing to preserve the randomization comparisons. In the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) study, patients were randomized to fenofibrate or placebo, but during the trial many started additional drugs, particularly statins, more so in the placebo group. The effects of fenofibrate estimated by intention-to-treat were likely to have been attenuated. We aimed to quantify this effect and to develop a method for use in other long-term trials. We applied efficacies of statins and other cardiovascular drugs from meta-analyses of randomized trials to adjust the effect of fenofibrate in a penalized Cox model. We assumed that future cardiovascular disease events were reduced by an average of 24% by statins, and 20% by a first other major cardiovascular drug. We applied these estimates to each patient who took these drugs for the period they were on them. We also adjusted the analysis by the rate of discontinuing fenofibrate. Among 4,900 placebo patients, average statin use was 16% over five years. Among 4,895 assigned fenofibrate, statin use was 8% and nonuse of fenofibrate was 10%. In placebo patients, use of cardiovascular drugs was 1% to 3% higher. Before adjustment, fenofibrate was associated with an 11% reduction in coronary events (coronary heart disease death or myocardial infarction) (P = 0.16) and an 11% reduction in cardiovascular disease events (P = 0.04). After adjustment, the effects of fenofibrate on coronary events and cardiovascular disease events were 16% (P = 0.06) and 15% (P = 0.008), respectively. This novel application of a penalized Cox model for adjustment of a trial estimate of treatment efficacy incorporates evidence-based estimates for other therapies, preserves comparisons between the

  5. A comparison of interface tracking methods

    International Nuclear Information System (INIS)

    Kothe, D.B.; Rider, W.J.

    1995-01-01

    In this paper we provide a direct comparison of several important algorithms designed to track fluid interfaces. In the process we propose improved criteria by which these methods are to be judged. We compare and contrast the behavior of the following interface tracking methods: high order monotone capturing schemes, level set methods, volume-of-fluid (VOF) methods, and particle-based (particle-in-cell, or PIC) methods. We compare these methods by first applying a set of standard test problems, then by applying a new set of enhanced problems designed to expose the limitations and weaknesses of each method. We find that the properties of these methods are not adequately assessed until they are tested with flows having spatial and temporal vorticity gradients. Our results indicate that the particle-based methods are easily the most accurate of those tested. Their practical use, however, is often hampered by their memory and CPU requirements. Particle-based methods employing particles only along interfaces also have difficulty dealing with gross topology changes. Full PIC methods, on the other hand, do not in general have topology restrictions. Following the particle-based methods are VOF volume tracking methods, which are reasonably accurate, physically based, robust, low in cost, and relatively easy to implement. Recent enhancements to the VOF methods using multidimensional interface reconstruction and improved advection provide excellent results on a wide range of test problems

  6. Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-01-01

    Full Text Available Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high-priced computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have low accuracy. Another method for measuring the volume of objects uses the Monte Carlo method, which performs volume measurements using random points. The Monte Carlo method only requires information regarding whether random points fall inside or outside an object, and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.

  7. Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

    Science.gov (United States)

    Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high-priced computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have low accuracy. Another method for measuring the volume of objects uses the Monte Carlo method, which performs volume measurements using random points. The Monte Carlo method only requires information regarding whether random points fall inside or outside an object, and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
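
    The underlying estimator is classical Monte Carlo integration: sample points uniformly in a box around the object and scale the box volume by the inside fraction. The paper's heuristic adjustment tunes this sampling; the sketch below only gestures at that by letting the caller shrink the box, which is an assumption rather than the authors' heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_volume(inside, lower, upper, n=200_000):
    """Estimate volume as (fraction of uniform samples inside) * box
    volume. `inside` maps an (n, 3) array of points to a boolean mask,
    e.g. a test built from the five binary camera images."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pts = lower + (upper - lower) * rng.random((n, 3))
    return inside(pts).mean() * np.prod(upper - lower)

# sanity check on a unit sphere: expect ~4/3 * pi ~ 4.19
vol = mc_volume(lambda p: (p ** 2).sum(axis=1) <= 1.0,
                lower=[-1, -1, -1], upper=[1, 1, 1])
```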

  8. Public Reporting of Primary Care Clinic Quality: Accounting for Sociodemographic Factors in Risk Adjustment and Performance Comparison.

    Science.gov (United States)

    Wholey, Douglas R; Finch, Michael; Kreiger, Rob; Reeves, David

    2018-01-03

    Performance measurement and public reporting are increasingly being used to compare clinic performance. Intended consequences include quality improvement, value-based payment, and consumer choice. Unintended consequences include reducing access for riskier patients and inappropriately labeling some clinics as poor performers, resulting in tampering with stable care processes. Two analytic steps are used to maximize intended and minimize unintended consequences. First, risk adjustment is used to reduce the impact of factors outside providers' control. Second, performance categorization is used to compare clinic performance using risk-adjusted measures. This paper examines the effects of methodological choices, such as including sociodemographic factors in risk adjustment and accounting for the clustering of patients within clinics in performance categorization, on clinic performance comparison for diabetes care, vascular care, asthma, and colorectal cancer screening. The population includes all patients with commercial and public insurance served by clinics in Minnesota. Although risk adjusting for sociodemographic factors has a significant effect on quality, it does not explain much of the variation in quality. In contrast, taking into account the nesting of patients within clinics in performance categorization has a substantial effect on performance comparison.

  9. Some adjustments to the human capital and the friction cost methods.

    Science.gov (United States)

    Targoutzidis, Antonis

    2018-03-21

    The cost of lost output is a major component of the total cost of illness estimates, especially those for the cost of workplace accidents and diseases. The two main methods for estimating this output, namely the human capital and the friction cost method, lead to very different results, particularly for cases of long-term absence, which makes the choice of method a critical dilemma. Two hidden assumptions, one for each method, are identified in this paper: for human capital method, the assumption that had the accident not happened the individual would remain alive, healthy and employed until retirement, and for friction cost method, the assumption that any created vacancy is covered by an unemployed person. Relevant adjustments to compensate for their impact are proposed: (a) to depreciate the estimates of the human capital method for the risks of premature death, disability or unemployment and (b) to multiply the estimates of the friction cost method with the expected number of job shifts that will be caused by a disability. The impact of these adjustments on the final estimates is very important in terms of magnitude and can lead to better results for each method.
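
    The two proposed adjustments translate directly into arithmetic; a minimal sketch with illustrative figures (the discount and exit rates below are assumptions):

```python
def adjusted_human_capital(annual_output, years_to_retirement,
                           discount=0.03, p_exit=0.02):
    """Present value of lost output, depreciated each year by the
    probability of premature death, disability or unemployment."""
    return sum(annual_output * (1 - p_exit) ** t / (1 + discount) ** t
               for t in range(1, years_to_retirement + 1))

def adjusted_friction_cost(cost_per_friction_period, expected_job_shifts):
    """Friction cost multiplied by the expected number of job shifts
    the disability will cause."""
    return cost_per_friction_period * expected_job_shifts
```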

  10. Environmental impact from different modes of transport. Method of comparison

    International Nuclear Information System (INIS)

    2002-03-01

    A prerequisite of long-term sustainable development is that activities of various kinds are adjusted to what humans and the natural world can tolerate. Transport is an activity that affects humans and the environment to a very great extent, and in this project several actors within the transport sector have together laid the foundation for the development of a comparative method to be able to compare the environmental impact at the different stages along the transport chain. The method analyses the effects of different transport concepts on the climate, noise levels, human health, acidification, land use and ozone depletion. Within the framework of the method, a calculation model has been created in Excel which acts as a basis for the comparisons. The user can choose to download the model from the Swedish EPA's on-line bookstore or order it on a floppy disk. Neither the method nor the model is as yet fully developed, but our hope is that they can still be used in their present form as a basis and inspire further efforts and research in the field. In the report, we describe most of these shortcomings, the problems associated with the work and the existing development potential. This publication should be seen as the first stage in the development of a method of comparison between different modes of transport in non-monetary terms, where there remains a considerable need for further development and amplification

  11. Fitting method of pseudo-polynomial for solving nonlinear parametric adjustment

    Institute of Scientific and Technical Information of China (English)

    陶华学; 宫秀军; 郭金运

    2001-01-01

    The optimal condition and its geometrical characteristics for least-squares adjustment were proposed. The relation between the transformed surface and least squares was then discussed. On this basis, a non-iterative method, called the fitting method of pseudo-polynomial, was derived in detail. The final least-squares solution can be determined with sufficient accuracy in a single step, without iteratively moving the initial point. The accuracy of the solution depends wholly on the order of the Taylor series. An example verifies the correctness and validity of the method.

  12. Comparison of Thermal Properties Measured by Different Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan [Geo Innova AB, Linkoeping (Sweden); Kukkonen, Ilmo [Geological Survey of Finland, Helsinki (Finland); Haelldahl, Lars [Hot Disk AB, Uppsala (Sweden)

    2003-04-01

    one or both methods cannot be excluded. For future investigations a set of thermal conductivity standard materials should be selected for testing using the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained and suitable for making samples of different shapes and volumes adjusted to different measurement techniques. Because of large obtained individual variations in the results, comparisons of different methods should continue and include measurements of temperature dependence of thermal properties, especially the specific heat. This should cover the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of temperature dependence of the present rocks.

  13. Comparison of Thermal Properties Measured by Different Methods

    International Nuclear Information System (INIS)

    Sundberg, Jan; Kukkonen, Ilmo; Haelldahl, Lars

    2003-04-01

    one or both methods cannot be excluded. For future investigations a set of thermal conductivity standard materials should be selected for testing using the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained and suitable for making samples of different shapes and volumes adjusted to different measurement techniques. Because of large obtained individual variations in the results, comparisons of different methods should continue and include measurements of temperature dependence of thermal properties, especially the specific heat. This should cover the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of temperature dependence of the present rocks

  14. Using the Nudge and Shove Methods to Adjust Item Difficulty Values.

    Science.gov (United States)

    Royal, Kenneth D

    2015-01-01

    In any examination, it is important that a sufficient mix of items with varying degrees of difficulty be present to produce desirable psychometric properties and increase instructors' ability to make appropriate and accurate inferences about what a student knows and/or can do. The purpose of this "teaching tip" is to demonstrate how examination items can be affected by the quality of distractors, and to present a simple method for adjusting items to meet difficulty specifications.

  15. CALCULATION METHODS OF OPTIMAL ADJUSTMENT OF CONTROL SYSTEM THROUGH DISTURBANCE CHANNEL

    Directory of Open Access Journals (Sweden)

    I. M. Golinko

    2014-01-01

    Full Text Available In the process of automatic control system debugging the great attention is paid to determining formulas’ parameters of optimal dynamic adjustment of regulators, taking into account the dynamics of Objects control. In most cases the known formulas are oriented on design of automatic control system through channel “input-output definition”. But practically in all continuous processes the main task of all regulators is stabilization of output parameters. The Methods of parameters calculation for dynamic adjustment of regulations were developed. These methods allow to optimize the analog and digital regulators, taking into account minimization of regulated influences. There were suggested to use the fact of detuning and maximum value of regulated influence. As the automatic control system optimization with proportional plus reset controllers on disturbance channel is an unimodal task, the main algorithm of optimization is realized by Hooke – Jeeves method. For controllers optimization through channel external disturbance there were obtained functional dependences of parameters calculations of dynamic proportional plus reset controllers from dynamic characteristics of Object control. The obtained dependences allow to improve the work of controllers (regulators of automatic control on external disturbance channel and so it allows to improve the quality of regulation of transient processes. Calculation formulas provide high accuracy and convenience in usage. In suggested method there are no nomographs and this fact expels subjectivity of investigation in determination of parameters of dynamic adjustment of proportional plus reset controllers. Functional dependences can be used for calculation of adjustment of PR controllers in a great range of change of dynamic characteristics of Objects control.

  16. Set up of a method for the adjustment of resonance parameters on integral experiments

    International Nuclear Information System (INIS)

    Blaise, P.

    1996-01-01

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions. The author sets up a method for the adjustment of resonance parameters taking into account the self-shielding effects and restricting the cross section deconvolution problem to a limited energy region. (N.T.)

  17. A Cross-Section Adjustment Method for Double Heterogeneity Problem in VHTGR Analysis

    International Nuclear Information System (INIS)

    Yun, Sung Hwan; Cho, Nam Zin

    2011-01-01

    Very High Temperature Gas-Cooled Reactors (VHTGRs) draw strong interest as candidates for a Gen-IV reactor concept, in which TRISO (tristructural-isotropic) fuel is employed to enhance fuel performance. However, randomly dispersed TRISO fuel particles in a graphite matrix induce the so-called double heterogeneity problem. For the design and analysis of such reactors with the double heterogeneity problem, the Monte Carlo method is widely used due to its complex-geometry and continuous-energy capabilities. However, its huge computational burden, even with modern computing power, still makes whole-core analysis problematic in the reactor design procedure. To address the double heterogeneity problem using conventional lattice codes, the RPT (Reactivity-equivalent Physical Transformation) method considers a homogenized fuel region that is geometrically transformed to provide an equivalent self-shielding effect. Another method is the coupled Monte Carlo/Collision Probability method, in which the absorption and nu-fission resonance cross-section libraries in the deterministic CPM3 lattice code are modified group-wise by double heterogeneity factors determined from Monte Carlo results. In this paper, a new two-step Monte Carlo homogenization method is described as an alternative to the methods above. In the new method, a single cross-section adjustment factor is introduced to provide a self-shielding effect equivalent to the self-shielding in the heterogeneous geometry for a unit cell of fuel compact. Then, the homogenized fuel compact material with the equivalent cross-section adjustment factor is used in continuous-energy Monte Carlo calculations for various types of fuel blocks (or assemblies). The procedure of cross-section adjustment is implemented in the MCNP5 code

  18. Comparison of infusion pumps calibration methods

    Science.gov (United States)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of these flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered as a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The obtained results were directly related to the used calibration method and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
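
    The gravimetric method reduces to weighing the delivered liquid over a timed interval and converting mass to volume through density; a minimal sketch follows (the density value is for water near 20 °C, and buoyancy and evaporation corrections are omitted).

```python
def gravimetric_flow_ml_per_h(mass_g, duration_s, density_g_per_ml=0.9982):
    """Mean flow rate from the collected mass over a timed interval."""
    return (mass_g / density_g_per_ml) / (duration_s / 3600.0)

def flow_error_percent(measured, set_point):
    return 100.0 * (measured - set_point) / set_point

measured = gravimetric_flow_ml_per_h(mass_g=4.99, duration_s=3600)
print(flow_error_percent(measured, set_point=5.0))   # pump set to 5 mL/h
```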

  19. Cross-Cultural Differences in Adjustment to Aging: A Comparison Between Mexico and Portugal

    Directory of Open Access Journals (Sweden)

    Neyda Ma. Mendoza-Ruvalcaba

    2017-08-01

    Full Text Available ObjectiveTo compare Adjustment to Aging (AtA and Satisfaction with Life in a Mexican and a Portuguese older sample.MethodA total of 723 (n = 340 Mexican and n = 383 Portuguese older adults were included and assessed with the AtA Scale (ATAS and the Satisfaction with Life Scale (SWL. Informed consent was obtained from all participants. Portuguese participants were significantly older than Mexicans (mean age 85.19 and 71.36 years old, respectively and showed higher education level (p < .001. No significant differences on gender and marital status were found.ResultsMexicans considered all aspects of AtA absolutely more important than their Portuguese counterparts (p < .001. For Mexicans, being cherished by their family (82.1%, being healthy, without pain or disease (75.9%, having spiritual religious and existential values (75% and having fun and laughter (75% were the most important for AtA, compared to having curiosity and an interest in learning (22.5%, creating and being creative (20.1% and leaving a mark and seed for the future (18.0% for Portuguese participants. Mexicans also reported a higher SWL than Portuguese participants. Mean scores were 6.10 (SD = 0.76 and 3.66 (SD = 1.47 respectively (p < .001. AtA and SWL were correlated in the Mexican sample (p = .001, but not in the Portuguese (p = .100.DiscussionDifferences on AtA between Mexican and Portuguese older adults should be explained considering their cultural and social context, and their socio-demographic characteristics. The enhancement of AtA, and its relevance to improve well-being and longevity can become a significant resource or health care interventions.

  20. Adjusting the Parameters of Metal Oxide Gapless Surge Arresters’ Equivalent Circuits Using the Harmony Search Method

    Directory of Open Access Journals (Sweden)

    Christos A. Christodoulou

    2017-12-01

    Full Text Available The appropriate circuit modeling of metal oxide gapless surge arresters is critical for insulation coordination studies. Metal oxide arresters present a dynamic behavior for fast front surges; namely, their residual voltage depends on the peak value as well as the duration of the injected impulse current, and they should therefore not be represented by non-linear elements alone. The aim of the current work is to adjust the parameters of the most frequently used surge arrester circuit models by considering the magnitude of the residual voltage, as well as the dissipated energy for given pulses. To this end, the harmony search method is implemented to adjust the parameter values of the arrester equivalent circuit models by minimizing a defined objective function that compares the simulation outcomes with the manufacturer's data and the results obtained from previous methodologies.
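
    The harmony search loop itself is compact; the sketch below shows the generic algorithm (harmony memory, memory consideration, pitch adjustment, random selection), with f standing in for the paper's objective function comparing simulated and manufacturer residual-voltage and energy values. Parameter settings and bounds are illustrative assumptions.

        import random

        def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                           iters=2000):
            hm = [[random.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(hms)]
            cost = [f(x) for x in hm]
            for _ in range(iters):
                new = []
                for d, (lo, hi) in enumerate(bounds):
                    if random.random() < hmcr:          # memory consideration
                        v = random.choice(hm)[d]
                        if random.random() < par:       # pitch adjustment
                            v += random.uniform(-bw, bw) * (hi - lo)
                    else:                               # random selection
                        v = random.uniform(lo, hi)
                    new.append(min(max(v, lo), hi))
                worst = max(range(hms), key=cost.__getitem__)
                c = f(new)
                if c < cost[worst]:                     # replace worst harmony
                    hm[worst], cost[worst] = new, c
            best = min(range(hms), key=cost.__getitem__)
            return hm[best], cost[best]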

  1. The big five and identification-contrast processes in social comparison in adjustment to cancer treatment

    NARCIS (Netherlands)

    van der Zee, KI; Buunk, BP; Sanderman, R; Botke, G; van den Bergh, F

    1999-01-01

    The present study examined the relationship between social comparison processes and the Big Five personality factors. In a sample of 112 patients with various forms of cancer it was found that Neuroticism was associated with a tendency to focus on the negative interpretation of social comparison

  2. Early Parental Positive Behavior Support and Childhood Adjustment: Addressing Enduring Questions with New Methods.

    Science.gov (United States)

    Waller, Rebecca; Gardner, Frances; Dishion, Thomas; Sitnick, Stephanie L; Shaw, Daniel S; Winter, Charlotte E; Wilson, Melvin

    2015-05-01

    A large literature provides strong empirical support for the influence of parenting on child outcomes. The current study addresses enduring research questions testing the importance of early parenting behavior to children's adjustment. Specifically, we developed and tested a novel multi-method observational measure of parental positive behavior support at age 2. Next, we tested whether early parental positive behavior support was related to child adjustment at school age, within a multi-agent and multi-method measurement approach and design. Observational and parent-reported data from mother-child dyads (N = 731; 49 percent female) were collected from a high-risk sample at age 2. Follow-up data were collected via teacher report and child assessment at age 7.5. The results supported combining three different observational methods to assess positive behavior support at age 2 within a latent factor. Further, parents' observed positive behavior support at age 2 predicted multiple types of teacher-reported and child-assessed problem behavior and competencies at 7.5 years old. Results supported the validity and predictive capability of a multi-method observational measure of parenting and the importance of a continued focus on the early years within preventive interventions.

  3. Use of the neutron moderation method for adjusting the concrete mix proportions based on the total moisture of the aggregates

    International Nuclear Information System (INIS)

    Howland, J.; Morejon, D.; Simeon, G.; Gracia, R.; Desdin, L.; O'Reilly, V.

    1997-01-01

    The neutron moderation method was applied to the rapid determination of the moisture content of fine and coarse aggregates. The measured moisture values were used to adjust the concrete mix proportions according to the total moisture of the aggregates. The results obtained indicate that this adjustment method yields higher compressive strength values and also reduces the dispersion in concrete production. This method would permit a considerable saving of cement in comparison with the traditional method. (author) [es
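
    The batch-water adjustment implied by such moisture measurements is standard mix-design arithmetic; the illustration below uses the textbook formula (not the paper's exact procedure): free water carried by each aggregate, beyond what it absorbs, is subtracted from the design mixing water.

        def adjusted_batch_water(design_water_kg, aggregates):
            # aggregates: (dry_mass_kg, total_moisture, absorption) per fraction,
            # moisture and absorption given as dry-basis mass fractions
            water = design_water_kg
            for dry_mass, moisture, absorption in aggregates:
                water -= dry_mass * (moisture - absorption)  # free surface water
            return water

        # e.g. sand 800 kg at 5.0 % moisture / 1.0 % absorption and gravel
        # 1100 kg at 1.5 % moisture / 0.7 % absorption -> 139.2 kg added water
        print(adjusted_batch_water(180.0, [(800, 0.050, 0.010),
                                           (1100, 0.015, 0.007)]))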

  4. Method and apparatus for rapid adjustment of process gas inventory in gaseous diffusion cascades

    International Nuclear Information System (INIS)

    Dyer, R.H.; Fowler, A.H.; Vanstrum, P.R.

    1977-01-01

    The invention relates to an improved method and system for making relatively large and rapid adjustments in the process gas inventory of an electrically powered gaseous diffusion cascade in order to accommodate scheduled changes in the electrical power available for cascade operation. In the preferred form of the invention, the cascade is readied for a decrease in electrical input by simultaneously withdrawing substreams of the cascade B stream into respective process-gas-freezing and storage zones while decreasing the datum-pressure inputs to the positioning systems for the cascade control valves in proportion to the weight of process gas so removed. Consequently, the control valve positions are substantially unchanged by the reduction in inventory, and there is minimal disturbance of the cascade isotopic gradient. The cascade is readied for restoration of the power cut by simultaneously evaporating the solids in the freezing zones to regenerate the process gas substreams and introducing them to the cascade A stream while increasing the aforementioned datum pressure inputs in proportion to the weight of process gas so returned. In the preferred form of the system for accomplishing these operations, heat exchangers are provided for freezing, storing, and evaporating the various substreams. Preferably, the heat exchangers are connected to use existing cascade auxiliary systems as a heat sink. A common control is employed to adjust and coordinate the necessary process gas transfers and datum pressure adjustments

  5. Comparison of percentage excess weight loss after laparoscopic sleeve gastrectomy and laparoscopic adjustable gastric banding

    Science.gov (United States)

    Bobowicz, Maciej; Lech, Paweł; Orłowski, Michał; Siczewski, Wiaczesław; Pawlak, Maciej; Świetlik, Dariusz; Witzling, Mieczysław; Michalik, Maciej

    2014-01-01

    Introduction Laparoscopic sleeve gastrectomy (LSG) and laparoscopic adjustable gastric banding (LAGB) are acceptable options for primary bariatric procedures in patients with body mass index (BMI) 35–55 kg/m2. Aim The aim of this study is to compare the effects of these two bariatric procedures 6, 12 and 24 months after surgery. Material and methods Two hundred and two patients were included: 72 LSG and 130 LAGB patients. The average age was 38.8 ±11.9 and 39.4 ±10.4 years in the LSG and LAGB groups, with initial BMI of 44.1 kg/m2 and 45.2 kg/m2, p = NS. Results The mean percentage of excess weight loss (%EWL) at 6 months for LSG vs. LAGB was 36.3% vs. 30.1% (p = 0.01) and at 12 months was 43.8% vs. 34.6% (p = 0.005). The greatest difference in the mean %EWL at 12 months was observed in patients with initial BMI of 40–49.9 kg/m2, in favor of LSG (47.5% vs. 35.6%; p = 0.01). Two years after surgery there was no advantage of LSG, and in the subgroup of patients with BMI 50–55 kg/m2 there was a trend in favor of LAGB (57.2% vs. 30%; p = 0.07). The multiple regression model of independent variables (age, gender, initial BMI and the presence of comorbidities) proved insignificant in predicting the best outcome in terms of %EWL for either operative modality. None of these factors in the logistic regression model could determine the type of surgery that should be used in particular patients. Conclusions During the first 2 years after surgery, the best results were obtained in women with lower BMI undergoing LSG surgery. The LSG provides greater %EWL after a shorter period of time, though the difference decreases with time. PMID:25337157

  6. A comparison of internal versus external risk-adjustment for monitoring clinical outcomes

    NARCIS (Netherlands)

    Koetsier, Antonie; de Keizer, Nicolette; Peek, Niels

    2011-01-01

    Internal and external prognostic models can be used to calculate severity of illness adjusted mortality risks. However, it is unclear what the consequences are of using an external model instead of an internal model when monitoring an institution's clinical performance. Theoretically, using an

  7. Research on the phase adjustment method for dispersion interferometer on HL-2A tokamak

    Science.gov (United States)

    Tongyu, WU; Wei, ZHANG; Haoxi, WANG; Yan, ZHOU; Zejie, YIN

    2018-06-01

    A synchronous demodulation system is proposed and deployed for the CO2 dispersion interferometer on HL-2A, which aims at high plasma density measurements and real-time feedback control. In order to make sure that the demodulator and the interferometer signal are synchronous in phase, a phase adjustment (PA) method has been developed for the demodulation system. The method takes advantage of the field-programmable gate array's parallel and pipelined processing capabilities to carry out high-performance, low-latency PA. Experimental results show that the PA method is crucial to the synchronous demodulation system and reliably follows the fast change of the electron density. The system can measure the line-integrated density with a high precision of 2.0 × 10^18 m^-2.
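
    The role of the phase adjustment is easiest to see in a software model of synchronous demodulation; the sketch below (NumPy, illustrative only, not the FPGA implementation) demodulates a signal against in-phase and quadrature references and scans the phase offset until the quadrature arm is nulled.

        import numpy as np

        def demodulate(x, f0, fs, phi):
            n = np.arange(x.size)
            i_ref = np.cos(2 * np.pi * f0 * n / fs + phi)
            q_ref = np.sin(2 * np.pi * f0 * n / fs + phi)
            # averaging over an integer number of periods acts as the low-pass
            return 2 * np.mean(x * i_ref), 2 * np.mean(x * q_ref)

        def best_phase(x, f0, fs, steps=360):
            phis = np.linspace(0, 2 * np.pi, steps, endpoint=False)
            return min(phis, key=lambda p: abs(demodulate(x, f0, fs, p)[1]))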

  8. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, May 2014

    International Nuclear Information System (INIS)

    Aliberti, G.; Archier, P.; Dunn, M.; Dupont, E.; Hill, I.; ); Garcia, A.; Hursin, M.; Pelloni, S.; Ivanova, T.; Kodeli, I.; Palmiotti, G.; Salvatores, M.; Touran, N.; Wenming, Wang; Yokoyama, K.

    2014-05-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the second Subgroup meeting, held at the NEA, Issy-les-Moulineaux, France, on 13 May 2014. It comprises a Summary Record of the meeting and all the available presentations (slides) given by the participants: A - Welcome: Review of actions (M. Salvatores); B - Inter-comparison of sensitivity coefficients: 1 - Sensitivity Computation with Monte Carlo Methods (T. Ivanova); 2 - Sensitivity analysis of FLATTOP-Pu (I. Kodeli); 3 - Sensitivity coefficients by means of SERPENT-2 (S. Pelloni); 4 - Demonstration - Database for ICSBEP (DICE) and Database and Analysis Tool for IRPhE (IDAT) (I. Hill); C - Specific new experiments: 1 - PROTEUS FDWR-II (HCLWR) program summary (M. Hursin); 2 - STEK and SEG Experiments (M. Salvatores); 3 - Experiments related to 235U, 238U, 56Fe and 23Na (G. Palmiotti); 4 - Validation of Iron Cross Sections against ASPIS Experiments (JEF/DOC-420) (I. Kodeli); 5 - Benchmark analysis of Iron Cross-sections (EFFDOC-1221) (I. Kodeli); 6 - Integral Beta-effective Measurements (K. Yokoyama on behalf of M. Ishikawa); D - Adjustment results: 1 - Impacts of Covariance Data and Interpretation of Adjustment Trends of ADJ2010 (K. Yokoyama); 2 - Revised Recommendations from ADJ2010 Adjustment (K. Yokoyama); 3 - Comparisons and Discussions on Adjustment trends from JEFF (CEA) (P. Archier); 4 - Feedback on CIELO Isotopes from ENDF/B-VII.0 Adjustment (G. Palmiotti); 5 - Demonstration - Plot comparisons of participants' results (E

  9. Resonant frequency detection and adjustment method for a capacitive transducer with differential transformer bridge

    International Nuclear Information System (INIS)

    Hu, M.; Bai, Y. Z.; Zhou, Z. B.; Li, Z. X.; Luo, J.

    2014-01-01

    The capacitive transducer with differential transformer bridge is widely used in ultra-sensitive space accelerometers due to its simple structure and high resolution. In this paper, the front-end electronics of an inductive-capacitive resonant bridge transducer is analyzed. The analysis shows that the performance of this transducer depends on the AC pumping frequency operating at the resonance point of the inductive-capacitive bridge. The effect of a possible mismatch between the AC pumping frequency and the actual resonant frequency is discussed, and the theoretical analysis indicates that the output voltage noise of the front-end electronics will deteriorate by a factor of about 3 due to either a 5% variation of the AC pumping frequency or a 10% variation of the tuning capacitance. A pre-scanning method to determine the actual resonant frequency is proposed, followed by adjustment of the operating frequency or change of the tuning capacitance in order to maintain the expected high resolution. An experiment to verify the mismatch effect and the adjustment method is provided
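
    A pre-scan of the kind described reduces, in software terms, to sweeping the pumping frequency and locating the amplitude peak; the sketch below assumes a hypothetical measure_response(f) returning the bridge output amplitude at frequency f.

        import numpy as np

        def find_resonance(measure_response, f_lo, f_hi, n=201):
            freqs = np.linspace(f_lo, f_hi, n)
            amps = np.array([measure_response(f) for f in freqs])
            return freqs[np.argmax(amps)]   # coarse resonance estimate

        # The pumping frequency (or tuning capacitance) is then set so the
        # operating point sits at this estimate, avoiding the ~3x noise penalty.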

  10. Resonant frequency detection and adjustment method for a capacitive transducer with differential transformer bridge

    Energy Technology Data Exchange (ETDEWEB)

    Hu, M.; Bai, Y. Z., E-mail: abai@mail.hust.edu.cn; Zhou, Z. B., E-mail: zhouzb@mail.hust.edu.cn; Li, Z. X.; Luo, J. [MOE Key Laboratory of Fundamental Physical Quantities Measurement, School of Physics, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2014-05-15

    The capacitive transducer with differential transformer bridge is widely used in ultra-sensitive space accelerometers due to its simple structure and high resolution. In this paper, the front-end electronics of an inductive-capacitive resonant bridge transducer is analyzed. The analysis shows that the performance of this transducer depends on the AC pumping frequency operating at the resonance point of the inductive-capacitive bridge. The effect of a possible mismatch between the AC pumping frequency and the actual resonant frequency is discussed, and the theoretical analysis indicates that the output voltage noise of the front-end electronics will deteriorate by a factor of about 3 due to either a 5% variation of the AC pumping frequency or a 10% variation of the tuning capacitance. A pre-scanning method to determine the actual resonant frequency is proposed, followed by adjustment of the operating frequency or change of the tuning capacitance in order to maintain the expected high resolution. An experiment to verify the mismatch effect and the adjustment method is provided.

  11. Using Anchoring Vignettes to Adjust Self-Reported Personality: A Comparison Between Countries

    Directory of Open Access Journals (Sweden)

    Selina Weiss

    2018-03-01

    Full Text Available Data from self-report tools cannot be readily compared between cultures due to culturally specific ways of using a response scale. As such, anchoring vignettes have been proposed as a suitable methodology for correcting against this difference. We developed anchoring vignettes for the Big Five Inventory-44 (BFI-44) to supplement its Likert-type response options. Based on two samples (Rwanda: n = 423; Philippines: n = 143), we evaluated the psychometric properties of the measure both before and after applying the anchoring vignette adjustment. Results show that adjusted scores had better measurement properties, including improved reliability and a more orthogonal correlational structure, relative to scores based on the original Likert scale. Correlations of the Big Five personality factors with life satisfaction were essentially unchanged after the vignette adjustment, while correlations with counterproductive behavior were noticeably lower. Overall, these findings suggest that the use of the anchoring vignette methodology improves the cross-cultural comparability of self-reported personality, a finding of potential interest to the field of global workforce research and development as well as educational policymakers.

  12. Using Anchoring Vignettes to Adjust Self-Reported Personality: A Comparison Between Countries

    Science.gov (United States)

    Weiss, Selina; Roberts, Richard D.

    2018-01-01

    Data from self-report tools cannot be readily compared between cultures due to culturally specific ways of using a response scale. As such, anchoring vignettes have been proposed as a suitable methodology for correcting against this difference. We developed anchoring vignettes for the Big Five Inventory-44 (BFI-44) to supplement its Likert-type response options. Based on two samples (Rwanda: n = 423; Philippines: n = 143), we evaluated the psychometric properties of the measure both before and after applying the anchoring vignette adjustment. Results show that adjusted scores had better measurement properties, including improved reliability and a more orthogonal correlational structure, relative to scores based on the original Likert scale. Correlations of the Big Five personality factors with life satisfaction were essentially unchanged after the vignette adjustment, while correlations with counterproductive behavior were noticeably lower. Overall, these findings suggest that the use of the anchoring vignette methodology improves the cross-cultural comparability of self-reported personality, a finding of potential interest to the field of global workforce research and development as well as educational policymakers. PMID:29593621

  13. National Comparison of Hospital Performances in Lung Cancer Surgery: The Role Of Casemix Adjustment.

    Science.gov (United States)

    Beck, Naomi; Hoeijmakers, Fieke; van der Willik, Esmee M; Heineman, David J; Braun, Jerry; Tollenaar, Rob A E M; Schreurs, Wilhelmina H; Wouters, Michel W J M

    2018-04-03

    When comparing hospitals on outcome indicators, proper adjustment for casemix (a combination of patient and disease characteristics) is indispensable. This study examines the need for casemix adjustment in evaluating hospital outcomes for Non-Small Cell Lung Cancer (NSCLC) surgery. Data from the Dutch Lung Cancer Audit for Surgery were used to validate factors associated with postoperative 30-day mortality and complicated course with multivariable logistic regression models. Between-hospital variation in casemix was studied by calculating medians and interquartile ranges for separate factors at hospital level and the 'expected' outcomes per hospital as a composite measure. 8040 patients, distributed over 51 Dutch hospitals, were included for analysis. Mean observed postoperative mortality and complicated course were 2.2% and 13.6%, respectively. Age, ASA classification, ECOG performance score, lung function, extent of resection, tumor stage and postoperative histopathology were individually significant predictors for both postoperative mortality and complicated course. A considerable variation of these casemix factors between hospital populations was observed, with the expected mortality and complicated course per hospital ranging from 1.4 to 3.2% and 11.5 to 17.1%, respectively. The between-hospital variation in casemix of patients undergoing surgery for NSCLC emphasizes the importance of proper adjustment when comparing hospitals on outcome indicators. Copyright © 2018. Published by Elsevier Inc.
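
    In practice such casemix adjustment is usually reported as an observed/expected (O/E) ratio per hospital: a patient-level logistic model on the casemix factors yields each hospital's expected rate. The sketch below illustrates this with statsmodels; the file and column names are invented, not those of the Dutch Lung Cancer Audit.

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("nsclc_surgery.csv")   # hypothetical patient-level data
        model = smf.logit("died30 ~ age + C(asa) + C(ecog) + fev1 "
                          "+ C(resection) + C(stage)", data=df).fit()
        df["expected"] = model.predict(df)       # per-patient expected mortality
        by_hosp = df.groupby("hospital").agg(observed=("died30", "mean"),
                                             expected=("expected", "mean"))
        by_hosp["oe_ratio"] = by_hosp["observed"] / by_hosp["expected"]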

  14. Comparative Efficacy of Daratumumab Monotherapy and Pomalidomide Plus Low-Dose Dexamethasone in the Treatment of Multiple Myeloma: A Matching Adjusted Indirect Comparison.

    Science.gov (United States)

    Van Sanden, Suzy; Ito, Tetsuro; Diels, Joris; Vogel, Martin; Belch, Andrew; Oriol, Albert

    2018-03-01

    Daratumumab (a human CD38-directed monoclonal antibody) and pomalidomide (an immunomodulatory drug) plus dexamethasone are both relatively new treatment options for patients with heavily pretreated multiple myeloma. A matching adjusted indirect comparison (MAIC) was used to compare absolute treatment effects of daratumumab versus pomalidomide + low-dose dexamethasone (LoDex; 40 mg) on overall survival (OS), while adjusting for differences between the trial populations. The MAIC method reduces the risk of bias associated with naïve indirect comparisons. Data from 148 patients receiving daratumumab (16 mg/kg), pooled from the GEN501 and SIRIUS studies, were compared separately with data from patients receiving pomalidomide + LoDex in the MM-003 and STRATUS studies. The MAIC-adjusted hazard ratio (HR) for OS of daratumumab versus pomalidomide + LoDex was 0.56 (95% confidence interval [CI], 0.38-0.83; p = .0041) for MM-003 and 0.51 (95% CI, 0.37-0.69; p < .0001) for STRATUS. The treatment benefit was even more pronounced when the daratumumab population was restricted to pomalidomide-naïve patients (MM-003: HR, 0.33; 95% CI, 0.17-0.66; p = .0017; STRATUS: HR, 0.41; 95% CI, 0.21-0.79; p = .0082). An additional analysis indicated a consistent trend of the OS benefit across subgroups based on M-protein level reduction (≥50%, ≥25%, and <25%). The MAIC results suggest that daratumumab improves OS compared with pomalidomide + LoDex in patients with heavily pretreated multiple myeloma. This matching adjusted indirect comparison of clinical trial data from four studies analyzes the survival outcomes of patients with heavily pretreated, relapsed/refractory multiple myeloma who received either daratumumab monotherapy or pomalidomide plus low-dose dexamethasone. Using this method, daratumumab conferred a significant overall survival benefit compared with pomalidomide plus low-dose dexamethasone. In the absence of head-to-head trials, these
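
    The weighting step at the core of a MAIC can be written down compactly: individual patient data from the index trial are reweighted so that their covariate means match the comparator trial's published aggregates (the method-of-moments estimator of Signorovitch et al.). The sketch below is illustrative; X is the patient-level covariate matrix and target_means holds the aggregate values.

        import numpy as np
        from scipy.optimize import minimize

        def maic_weights(X, target_means):
            Xc = X - target_means                      # center on the target
            # minimizing sum(exp(Xc @ b)) is convex; at the optimum the
            # weighted covariate means equal the target exactly
            b = minimize(lambda b: np.sum(np.exp(Xc @ b)),
                         np.zeros(X.shape[1]), method="BFGS").x
            w = np.exp(Xc @ b)
            return w * len(w) / w.sum()                # rescaled weights

        # Weighted survival analysis on the index arm then yields the
        # adjusted hazard ratio against the comparator trial.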

  15. Method, system and apparatus for monitoring and adjusting the quality of indoor air

    Science.gov (United States)

    Hartenstein, Steven D.; Tremblay, Paul L.; Fryer, Michael O.; Hohorst, Frederick A.

    2004-03-23

    A system, method and apparatus are provided for monitoring and adjusting the quality of indoor air. A sensor array senses an air sample from the indoor air and analyzes the air sample to obtain signatures representative of contaminants in the air sample. When the level or type of contaminant poses a threat or hazard to the occupants, the present invention takes corrective actions, which may include introducing additional fresh air. The corrective actions taken are intended to promote the overall health of personnel, prevent personnel from being overexposed to hazardous contaminants and minimize the cost of operating the HVAC system. The identification of the contaminants is performed by comparing the signatures provided by the sensor array with a database of known signatures. Upon identification, the system takes corrective actions based on the level of contaminant present. The present invention is capable of learning the identity of previously unknown contaminants, which increases its ability to identify contaminants in the future. Indoor air quality is assured by monitoring the contaminants not only in the indoor air, but also in the outdoor air and the air which is to be recirculated. The present invention is easily adaptable to new and existing HVAC systems. In sum, the present invention is able to monitor and adjust the quality of indoor air in real time by sensing the level and type of contaminants present in indoor, outdoor and recirculated air, providing an intelligent decision about the quality of the air, and minimizing the cost of operating an HVAC system.

  16. Comparison of clinical probability-adjusted D-dimer and age-adjusted D-dimer interpretation to exclude venous thromboembolism.

    Science.gov (United States)

    Takach Lapner, Sarah; Julian, Jim A; Linkins, Lori-Ann; Bates, Shannon; Kearon, Clive

    2017-10-05

    Two new strategies for interpreting D-dimer results have been proposed: i) using a progressively higher D-dimer threshold with increasing age (age-adjusted strategy) and ii) using a D-dimer threshold in patients with low clinical probability that is twice the threshold used in patients with moderate clinical probability (clinical probability-adjusted strategy). Our objective was to compare the diagnostic accuracy of age-adjusted and clinical probability-adjusted D-dimer interpretation in patients with a low or moderate clinical probability of venous thromboembolism (VTE). We performed a retrospective analysis of clinical data and blood samples from two prospective studies. We compared the negative predictive value (NPV) for VTE, and the proportion of patients with a negative D-dimer result, using two D-dimer interpretation strategies: the age-adjusted strategy, which uses a progressively higher D-dimer threshold with increasing age over 50 years (age in years × 10 µg/L FEU); and the clinical probability-adjusted strategy, which uses a D-dimer threshold of 1000 µg/L FEU in patients with low clinical probability and 500 µg/L FEU in patients with moderate clinical probability. A total of 1649 outpatients with low or moderate clinical probability for a first suspected deep vein thrombosis or pulmonary embolism were included. The NPVs of both the clinical probability-adjusted strategy (99.7%) and the age-adjusted strategy (99.6%) were similar. However, the proportion of patients with a negative result was greater with the clinical probability-adjusted strategy (56.1% vs. 50.9%; difference 5.2%; 95% CI 3.5% to 6.8%). These findings suggest that clinical probability-adjusted D-dimer interpretation is a better way of interpreting D-dimer results than age-adjusted interpretation.
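
    Both interpretation rules reduce to simple thresholds in µg/L FEU; writing them out makes the comparison concrete (a direct transcription of the strategies as described above).

        def age_adjusted_threshold(age_years):
            # progressively higher cut-off above age 50
            return age_years * 10 if age_years > 50 else 500

        def probability_adjusted_threshold(clinical_probability):
            # twice the usual cut-off when clinical probability is low
            return 1000 if clinical_probability == "low" else 500

        # A result below the applicable threshold is negative, so the higher
        # low-probability cut-off classifies more patients as negative
        # (56.1% vs. 50.9% in this cohort).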

  17. Associations of child adjustment with parent and family functioning: comparison of families of women with and without breast cancer.

    Science.gov (United States)

    Vannatta, Kathryn; Ramsey, Rachelle R; Noll, Robert B; Gerhardt, Cynthia A

    2010-01-01

    To examine the impact of maternal breast cancer on the emotional and behavioral functioning of school-age children; to evaluate whether child adjustment is associated with variations in distress, marital satisfaction, and parenting behavior evidenced by mothers and fathers; and to determine whether these associations differ from families that are not contending with cancer. Participants included 40 children (age 8-16 years) of mothers with breast cancer, along with their parents, as well as 40 families of comparison classmates not affected by parental illness. Questionnaires assessing the domains of interest were administered in families' homes. Mothers with breast cancer and their spouses reported higher levels of distress than comparison parents; child internalizing problems were inversely associated with parental adjustment in both groups. No group differences were found in any indicators of family functioning, including parent-child relationships. Warm and supportive parenting by both mothers and fathers was associated with lower levels of child internalizing behavior, but only in families affected by breast cancer. These results suggest that children of mothers with breast cancer, like most children, may be at risk for internalizing behavior when parents are distressed. These children may particularly benefit from interactions with mothers and fathers who are warm and supportive, and maintenance of positive parenting may partially account for the apparent resilience of these youth.

  18. [Effect of 2 methods of occlusion adjustment on occlusal balance and muscles of mastication in patient with implant restoration].

    Science.gov (United States)

    Wang, Rong; Xu, Xin

    2015-12-01

    To compare the effect of 2 methods of occlusion adjustment on occlusal balance and muscles of mastication in patients with dental implant restoration. Twenty patients, each with a single posterior edentulous space with no distal dentition, were selected and divided into 2 groups. Patients in group A underwent the original occlusion adjustment method and patients in group B underwent the occlusal plane reduction technique. Ankylos implants were implanted in the edentulous space in each patient and restored with a fixed single-unit crown. Occlusion was adjusted in each restoration accordingly. Electromyograms were conducted to determine the effect of the adjustment methods on occlusion and muscles of mastication 3 months and 6 months after initial restoration and adjustment. Data were collected and measurements for balanced occlusal measuring standards were obtained, including central occlusion force (COF) and asymmetry index of molar occlusal force (AMOF). Balanced muscles of mastication measuring standards were also obtained, including electromyogram measurements of the muscles of mastication and the anterior bundle of the temporalis muscle at the mandibular rest position, average electromyogram measurements of the anterior bundle of the temporalis muscle at the intercuspal position (ICP), Astot, masseter muscle asymmetry index, and anterior temporalis asymmetry index (ASTA). Statistical analysis was performed using Student's t test with the SPSS 18.0 software package. Three months after occlusion adjustment, parameters were significantly different between group A and group B in balanced occlusal measuring standards and balanced muscles of mastication measuring standards. Six months after occlusion adjustment, parameters were significantly different between group A and group B in balanced muscles of mastication measuring standards, but there was no significant difference in balanced

  19. Method and apparatus for biological sequence comparison

    Science.gov (United States)

    Marr, T.G.; Chang, W.I.

    1997-12-23

    A method and apparatus are disclosed for comparing biological sequences from a known source of sequences with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM) and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level and are long enough to be statistically significant. The device filters out fragments from the known sequences that are too short, or that have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short-fragment best matches for the block provides an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of the best local alignment with the subject sequence. 5 figs.

  20. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce development time and improve forming results. But to exploit the full potential of the simulations, it has to be ensured that the predictions of material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are, for example, the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant, and also most difficult to measure, are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical system, an eddy current system and a computer-assisted tomography system, with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use but are limited to the surface plies. With an eddy current system lower plies can also be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  1. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    Science.gov (United States)

    Saxena, N. K.

    1974-01-01

    For a simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique is modified and used successfully. In this semi-iterative technique, known as the conjugate gradient (CG) method, the original observation equations are used, and thus the explicit formation of normal equations is avoided, 'huge' computer storage space being saved in the case of triangulation systems. This method is suitable even for very poorly conditioned systems, where a solution is obtained only after more iterations. A detailed study of the CG method for its application to large geodetic triangulation systems was carried out, also considering constraint equations together with observation equations. It was programmed and tested on systems as small as two unknowns and three equations up to those as large as 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
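
    The storage saving comes from never forming the normal-equations matrix: conjugate gradients can be applied to the least-squares problem using only products with A and its transpose. A minimal CGLS-style sketch follows (illustrative, not the 1974 program).

        import numpy as np

        def cgls(A, b, iters=1000, tol=1e-12):
            x = np.zeros(A.shape[1])
            r = b - A @ x                # residual of the observation equations
            s = A.T @ r                  # gradient (normal-equations residual)
            p, gamma = s.copy(), s @ s
            for _ in range(iters):
                q = A @ p
                alpha = gamma / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                gamma_new = s @ s
                if gamma_new < tol:
                    break
                p = s + (gamma_new / gamma) * p
                gamma = gamma_new
            return x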

  2. Constructing Quality Adjusted Price Indexes: a Comparison of Hedonic and Discrete Choice Models

    OpenAIRE

    N. Jonker

    2001-01-01

    The Boskin report (1996) concluded that the US consumer price index (CPI) overestimated inflation by 1.1 percentage points. This was due to several measurement errors in the CPI. One of them is called quality change bias. In this paper two methods are compared which can be used to eliminate quality change bias, namely the hedonic method and a method based on the use of discrete choice models. The underlying microeconomic foundations of the two methods are compared as well as their empiric...

  3. Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model

    International Nuclear Information System (INIS)

    Konya, Andras

    2006-01-01

    The purpose of the study was to compare two similar foreign body retrieval devices, the Texan TM (TX) and the Texan LONGhorn TM (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG) with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall. The better torque control and maneuverability of the TX-LG resulted in better performance in large anatomic spaces

  4. Comparison of three facebow/semi-adjustable articulator systems for planning orthognathic surgery.

    Science.gov (United States)

    O'Malley, A M; Milosevic, A

    2000-06-01

    Our aim was to measure the steepness of the occlusal plane produced by three different semi-adjustable articulators: the Dentatus Type ARL, Denar MkII, and the Whipmix Quickmount 8800, and to assess the influence of possible systematic errors in positioning of study casts on articulators that are used to plan orthognathic surgery. Twenty patients (10 skeletal class II, and 10 skeletal class III) who were having pre-surgical orthodontics at Liverpool University Dental Hospital were studied. The measurement of the steepness of the occlusal plane was taken as the angle between the facebow bite-fork and the horizontal arm of the articulator. This was compared with the angle of the maxillary occlusal plane to the Frankfort plane as measured on lateral cephalometry (the gold standard). The Whipmix was closest to the gold standard as it flattened the occlusal plane by only 2 degrees (P<0.05). The results of the Denar and Dentatus differed significantly from those of the cephalogram as they flattened the occlusal plane by 5 degrees and 6.5 degrees (P<0.01), respectively. Clinicians are encouraged to verify the steepness of the occlusal plane on mounted study casts before the technician makes the model. Copyright 2000 The British Association of Oral and Maxillofacial Surgeons.

  5. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This largely traded financial product allows us to well identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA to avoid the double counting present in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
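
    The efficiency claim is easiest to see in a stripped-down example: instead of nested simulation, the conditional swap value at each date is estimated by regressing realized discounted future cashflows on the current state. The sketch below uses a Vasicek short rate and a toy swap as stand-ins; all dynamics and parameters are illustrative assumptions, not the paper's calibration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_paths, n_steps, dt = 10000, 40, 0.25
        kappa, theta, sigma, r0, K = 0.1, 0.03, 0.01, 0.02, 0.03

        r = np.full((n_paths, n_steps + 1), r0)
        for t in range(n_steps):             # Vasicek short-rate paths
            dw = rng.normal(0.0, np.sqrt(dt), n_paths)
            r[:, t + 1] = r[:, t] + kappa * (theta - r[:, t]) * dt + sigma * dw

        cash = (r[:, :-1] - K) * dt           # toy payer-swap cashflow per step
        V = np.zeros((n_paths, n_steps + 1))
        for t in range(n_steps - 1, -1, -1):  # pathwise discounted future value
            V[:, t] = np.exp(-r[:, t] * dt) * (cash[:, t] + V[:, t + 1])

        epe = []                              # expected positive exposure profile
        for t in range(1, n_steps):
            coef = np.polyfit(r[:, t], V[:, t], 3)   # LSMC regression on state
            epe.append(np.maximum(np.polyval(coef, r[:, t]), 0).mean())
        # unilateral CVA ~ (1 - recovery) * sum(epe * marginal default prob)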

  6. Calculations for Adjusting Endogenous Biomarker Levels During Analytical Recovery Assessments for Ligand-Binding Assay Bioanalytical Method Validation.

    Science.gov (United States)

    Marcelletti, John F; Evans, Cindy L; Saxena, Manju; Lopez, Adriana E

    2015-07-01

    It is often necessary to adjust for detectable endogenous biomarker levels in spiked validation samples (VS) and in selectivity determinations during bioanalytical method validation for ligand-binding assays (LBA) with a matrix like normal human serum (NHS). Described herein are case studies of biomarker analyses using multiplex LBA, which highlight the challenges associated with such adjustments when calculating percent analytical recovery (%AR). The LBA test methods were the Meso Scale Discovery V-PLEX® proinflammatory and cytokine panels with NHS as test matrix. The NHS matrix blank exhibited varied endogenous content of the 20 individual cytokines before spiking, ranging from undetectable to readily quantifiable. Addition and subtraction methods for adjusting endogenous cytokine levels in %AR calculations are both used in the bioanalytical field. The two methods were compared in %AR calculations following spiking and analysis of VS for cytokines having detectable endogenous levels in NHS. Calculations for %AR obtained by subtracting quantifiable endogenous biomarker concentrations from the respective total analytical VS values yielded reproducible and credible conclusions. The addition method, in contrast, yielded %AR conclusions that were frequently unreliable and discordant with values obtained with the subtraction adjustment method. It is shown that subtraction of assay signal attributable to matrix is a feasible alternative when endogenous biomarker levels are below the limit of quantitation, but above the limit of detection. These analyses confirm that the subtraction method is preferable over the addition method when adjusting for detectable endogenous biomarker levels in calculating %AR for biomarker LBA.
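
    Written out, the two conventions differ only in where the endogenous level enters; the formulas below are the usual transcriptions (the abstract does not state them explicitly, so treat this as an assumption). Units are arbitrary but must be consistent.

        def ar_subtraction(measured_total, endogenous, nominal_spike):
            # subtract the measured endogenous level from the total, then
            # compare against the nominal spiked concentration
            return 100.0 * (measured_total - endogenous) / nominal_spike

        def ar_addition(measured_total, endogenous, nominal_spike):
            # add the endogenous level to the nominal expectation instead
            return 100.0 * measured_total / (nominal_spike + endogenous)

        # e.g. endogenous 80 pg/mL, spike 100 pg/mL, measured 170 pg/mL:
        # subtraction -> 90 %AR, addition -> 94.4 %AR; the two diverge as the
        # endogenous level approaches the spike level.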

  7. Comparison of gas dehydration methods based on energy ...

    African Journals Online (AJOL)

    This study compares three conventional methods of natural gas (Associated Natural Gas) dehydration to carry out ...

  8. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries, ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times, ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted on analytical functions whose minima are known and which are classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain (the threshold method, a genetic algorithm and the tabu search method). The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
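
    For reference, the core of a simulated annealing run for continuous variables fits in a few lines; the sketch below shows the generic Metropolis-acceptance loop with geometric cooling (illustrative settings, not the adapted algorithm of the thesis).

        import math, random

        def anneal(f, x0, bounds, t0=1.0, alpha=0.95, sweeps=200, moves=50):
            x, fx = list(x0), f(x0)
            best, fbest, t = list(x0), fx, t0
            for _ in range(sweeps):
                for _ in range(moves):
                    i = random.randrange(len(x))
                    lo, hi = bounds[i]
                    y = list(x)
                    y[i] = min(max(y[i] + random.gauss(0, 0.1 * (hi - lo)),
                                   lo), hi)
                    fy = f(y)
                    # Metropolis rule: accept improvements always, and
                    # degradations with probability exp(-delta/T)
                    if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                        x, fx = y, fy
                        if fx < fbest:
                            best, fbest = list(x), fx
                t *= alpha                  # geometric cooling schedule
            return best, fbest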

  9. Estimating Total Claim Size in the Auto Insurance Industry: a Comparison between Tweedie and Zero-Adjusted Inverse Gaussian Distribution

    Directory of Open Access Journals (Sweden)

    Adriana Bruscato Bortoluzzo

    2011-01-01

    Full Text Available The objective of this article is to estimate insurance claims from an auto dataset using the Tweedie and zero-adjusted inverse Gaussian (ZAIG) methods. We identify factors that influence claim size and probability, and compare the results of these methods, which both forecast outcomes accurately. Vehicle characteristics like territory, age, origin and type distinctly influence claim size and probability. This distinct impact is not always present in the estimated Tweedie model. Auto insurers should consider estimating total claim size using both the Tweedie and ZAIG methods. This allows for estimation of confidence intervals based on empirical quantiles using bootstrap simulation. Furthermore, the fitted models may be useful in developing a strategy to obtain premium pricing.
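
    A Tweedie fit of the kind described can be reproduced with standard GLM tooling; the sketch below uses statsmodels with invented file and column names (the article's dataset is not reproduced here). The ZAIG counterpart instead models the probability of a zero claim separately from a strictly positive inverse-Gaussian severity.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("auto_claims.csv")   # hypothetical policy-level data
        tweedie = smf.glm("claim_size ~ territory + vehicle_age + origin "
                          "+ vehicle_type", data=df,
                          family=sm.families.Tweedie(var_power=1.5)).fit()
        print(tweedie.summary())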

  10. Normalized impact factor (NIF): an adjusted method for calculating the citation rate of biomedical journals.

    Science.gov (United States)

    Owlia, P; Vasei, M; Goliaei, B; Nassiri, I

    2011-04-01

    Interest in the journal impact factor (JIF) within scientific communities has grown over the last decades. JIFs are used to evaluate the quality of journals and of the papers published therein. The JIF is a discipline-specific measure, so comparison between JIFs dedicated to different disciplines is inadequate unless a normalization process is performed. In this study, the normalized impact factor (NIF) was introduced as a relatively simple method enabling JIFs to be used when evaluating the quality of journals and research works in different disciplines. The NIF index was established based on the multiplication of the JIF by a constant factor. The constants were calculated for all 54 disciplines of the biomedical field for the years 2005, 2006, 2007, 2008 and 2009. Also, rankings of 393 journals in different biomedical disciplines according to the NIF and the JIF were compared to illustrate how the NIF index can be used in the evaluation of publications in different disciplines. The findings show that the use of the NIF enhances equality in assessing the quality of research works produced by researchers working in different disciplines. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. Standard setting: Comparison of two methods

    Directory of Open Access Journals (Sweden)

    Oyebode Femi

    2006-09-01

    Full Text Available Abstract Background: The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. Methods: The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. Results: The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff method and the norm-reference method was 78% (95% CI 69%-87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. Conclusion: There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.

  12. Standard setting: comparison of two methods.

    Science.gov (United States)

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.

  13. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces by using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. From the aspect of form features, the simulations of the midpoint displacement method showed a relatively flat surface which appears to have peaks of different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface which appears to have dense and highly similar peaks as the fractal dimension increases. From the aspect of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6 and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative, with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic, with prominent randomness, which is suitable for simulating an aperiodic surface, while the simulation of the Weierstrass-Mandelbrot fractal function method has
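
    Of the two simulation methods compared, midpoint displacement is the simpler to sketch: each pass sets midpoints to the average of their neighbours plus a Gaussian offset whose scale shrinks by 2^-H per level (parameters below are illustrative; for a 1-D profile, D = 2 - H).

        import numpy as np

        def midpoint_displacement(levels, H=0.5, sigma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            y = np.zeros(2 ** levels + 1)
            step, scale = len(y) - 1, sigma
            while step > 1:
                half = step // 2
                for i in range(half, len(y) - 1, step):
                    # midpoint = neighbour average + scaled Gaussian offset
                    y[i] = 0.5 * (y[i - half] + y[i + half]) + rng.normal(0, scale)
                scale *= 0.5 ** H   # offset scale shrinks by 2^-H per level
                step = half
            return y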

  14. A particle method with adjustable transport properties - the generalized consistent Boltzmann algorithm

    International Nuclear Information System (INIS)

    Garcia, A.L.; Alexander, F.J.; Alder, B.J.

    1997-01-01

    The consistent Boltzmann algorithm (CBA) for dense, hard-sphere gases is generalized to obtain the van der Waals equation of state and the corresponding exact viscosity at all densities except at the highest temperatures. A general scheme for adjusting any transport coefficients to higher values is presented

  15. A novel suture method to place and adjust peripheral nerve catheters

    DEFF Research Database (Denmark)

    Rothe, C.; Steen-Hansen, C.; Madsen, M. H.

    2015-01-01

    We have developed a peripheral nerve catheter, attached to a needle, which works like an adjustable suture. We used in-plane ultrasound guidance to place 45 catheters close to the femoral, saphenous, sciatic and distal tibial nerves in cadaver legs. We displaced catheters after their initial...

  16. Comparison of Methods for Oscillation Detection

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Trangbæk, Klaus

    2006-01-01

    This paper compares a selection of methods for detecting oscillations in control loops. The methods are tested on measurement data from a coal-fired power plant, where some oscillations are occurring. Emphasis is put on being able to detect oscillations without having a system model and without using process knowledge. The tested methods show potential for detecting the oscillations; however, transient components in the signals cause false detections as well, motivating the use of models in order to remove the expected signal behavior.

  17. Comparison of dietary fiber methods for foods.

    Science.gov (United States)

    Heckman, M M; Lane, S A

    1981-11-01

    In order to evaluate several proposed dietary fiber methods, 12 food samples, representing different food classes, were analyzed by (1) neutral and acid detergent fiber methods (NDF, ADF); (2) NDF with enzyme modification (ENDF); (3) a 2-fraction enzyme method for soluble, insoluble, and total dietary fiber, proposed by Furda (SDF, IDF, TDF); (4) a 1-fraction enzyme method for total dietary fiber proposed by Hellendoorn (TDF). Foods included cereals, fruits, vegetables, pectin, locust bean gum, and soybean polysaccharides. Results show that TDF by the Furda and Hellendoorn methods agrees reasonably well with literature values by the Southgate method, but ENDF is consistently lower; that ENDF and IDF (Furda method) agree reasonably well; and that except for corn bran fiber (insoluble) and pectin and locust bean fiber (soluble), all materials have significant fractions of both soluble and insoluble fiber. The Furda method was used on numerous food and ingredient samples and was found to be practical and informative and to have acceptable precision (RSD values of 2.65-7.05%). It is suggested that the Furda (or similar) method be given consideration for the analysis of foods for dietary fiber.

  18. Comparison and qualification of FFTF calorimetric methods

    International Nuclear Information System (INIS)

    McCall, T.B.; Nutt, W.T.; Zimmerman, B.D.

    1981-01-01

    The Fast Flux Test Facility achieved full power operation on December 21, 1981. During the power ascent, the reactor thermal power (calorimetric) was computed by three methods as follows: the plant main data handling computer, Plant Data System (PDS); a second computer, the Experimenter's Data System (EDS); and a manual method requiring human operator activities and a programmable calculator. It is the purpose of this paper to explain the rationale for employing the three methods, describe how each works in a schematic fashion, compare the results, and demonstrate that all methods have met stringent goals based on a statistical analysis of the results

  19. Comparison of sampling methods for animal manure

    NARCIS (Netherlands)

    Derikx, P.J.L.; Ogink, N.W.M.; Hoeksma, P.

    1997-01-01

    Currently available and recently developed sampling methods for slurry and solid manure were tested for bias and reproducibility in the determination of total phosphorus and nitrogen content of samples. Sampling methods were based on techniques in which samples were taken either during loading from

  20. Comparison of two Galerkin quadrature methods

    International Nuclear Information System (INIS)

    Morel, J. E.; Warsa, J. S.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    We compare two methods for generating Galerkin quadrature for problems with highly forward-peaked scattering. In Method 1, the standard SN method is used to generate the moment-to-discrete matrix, and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. In Method 2, which we introduce here, the standard SN method is used to generate the discrete-to-moment matrix, and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. Method 1 has the advantage that it preserves both N eigenvalues and N eigenvectors (in a pointwise sense) of the scattering operator with an N-point quadrature. Method 2 has the advantage that it generates consistent angular moment equations from the corresponding SN equations while preserving N eigenvalues of the scattering operator with an N-point quadrature. Our computational results indicate that these two methods are quite comparable for the test problem considered. (authors)
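
    The two constructions differ only in which matrix is taken from the standard definition and which is obtained by inversion; a 1-D Legendre sketch follows (illustrative only; for a Gauss quadrature carrying N moments the two inverses coincide, and the differences the paper studies arise for more general quadrature sets).

        import numpy as np
        from numpy.polynomial.legendre import leggauss, Legendre

        N = 8
        mu, w = leggauss(N)                  # quadrature points and weights

        # standard moment-to-discrete and discrete-to-moment matrices
        M = np.stack([(2 * l + 1) / 2 * Legendre.basis(l)(mu)
                      for l in range(N)], axis=1)
        D = np.stack([w * Legendre.basis(l)(mu) for l in range(N)], axis=0)

        D_method1 = np.linalg.inv(M)   # Method 1: keep M, derive D = M^-1
        M_method2 = np.linalg.inv(D)   # Method 2: keep D, derive M = D^-1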

  1. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, November 2014

    International Nuclear Information System (INIS)

    Aufiero, Manuele; Ivanov, Evgeny; Hoefer, Axel; Yokoyama, Kenji; Da Cruz, Dirceu Ferreira; Kodeli, Ivan-Alexander; Hursin, Mathieu; Pelloni, Sandro; Palmiotti, Giuseppe; Salvatores, Massimo; Barnes, Andrew; Cabellos De Francisco, Oscar; Ivanova, Tatiana

    2014-11-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the third formal Subgroup meeting held at the NEA, Issy-les-Moulineaux, France, on 27-28 November 2014. It comprises a Summary Record of the meeting and all the available presentations (slides) given by the participants: A - Sensitivity methods: 1 - Perturbation/sensitivity calculations with Serpent (M. Aufiero); 2 - Comparison of deterministic and Monte Carlo sensitivity analysis of SNEAK-7A and FLATTOP-Pu Benchmarks (I. Kodeli); B - Integral experiments: 1 - PROTEUS experiments: selected experiments, sensitivity profiles and availability (M. Hursin, M. Salvatores - PROTEUS Experiments, HCLWR configurations); 2 - SINBAD Benchmark Database and FNS/JAEA Liquid Oxygen TOF Experiment Analysis (I. Kodeli); 3 - STEK experiment: opportunity for validation of fission products nuclear data (D. Da Cruz); 4 - SEG (tailored adjoint flux shapes) (M. Salvatores - comments); 5 - IPPE transmission experiments (Fe, 238U) (T. Ivanova); 6 - RPI semi-integral (Fe, 238U) (G. Palmiotti - comments); 7 - New experiments, e.g. in connection with the new NSC Expert Group on 'Improvement of Integral Experiments Data for Minor Actinide Management' (G. Palmiotti - Some comments from the Expert Group); 8 - Additional PSI adjustment studies accounting for nonlinearity (S. Pelloni); 9 - Adjustment methodology issues (G. Palmiotti); C - Am-241 and fission product issues: 1 - Am-241 validation for criticality-safety calculations (A. Barnes - Visio

  2. Comparison of the direct enzyme assay method with the membrane ...

    African Journals Online (AJOL)

    Comparison of the direct enzyme assay method with the membrane filtration technique in the quantification and monitoring of microbial indicator organisms – seasonal variations in the activities of coliforms and E. coli, temperature and pH.

  3. Comparison of two methods for customer differentiation

    NARCIS (Netherlands)

    A.F. Gabor (Adriana); Y. Guang (Yang); S. Axsäter (Sven)

    2014-01-01

    In response to customer specific time guarantee requirements, service providers can offer differentiated services. However, conventional customer differentiation methods often lead to high holding costs and may have some practical drawbacks. We compare two customer differentiation

  4. A Comparison of Distillery Stillage Disposal Methods

    OpenAIRE

    V. Sajbrt; M. Rosol; P. Ditl

    2010-01-01

    This paper compares the main stillage disposal methods from the point of view of technology, economics and energetics. Attention is paid to the disposal of both solid and liquid phase. Specifically, the following methods are considered: a) livestock feeding, b) combustion of granulated stillages, c) fertilizer production, d) anaerobic digestion with biogas production and e) chemical pretreatment and subsequent secondary treatment. Other disposal techniques mentioned in the literature (electro...

  5. Comparison of green method for chitin deacetylation

    Science.gov (United States)

    Anwar, Muslih; Anggraeni, Ayu Septi; Amin, M. Harisuddin Al

    2017-03-01

    Developing highly environmentally friendly and cost-effective approaches for chitosan production is of paramount importance for future technology. The deacetylation process is one of the most important steps determining the quality of chitosan. This research aimed to identify the best method for deacetylation of chitin, considering several factors such as base concentration, temperature, time and reaction method. From the green chemistry point of view, the conventional refluxing method wastes energy relative to alternatives such as maceration, grinding and sonication. The degree of deacetylation (DD) of chitosan obtained by sonication increased slightly, from 73.14 to 73.28%, as the treatment time increased from 0.5 h to 1 h. Deacetylation of chitin with sodium hydroxide concentrations of 60, 70 and 80% gave 73.14, 76.36 and 77.88% DD, respectively. Varying the temperature at 40, 60, and 80 °C had only a slight effect, giving DDs of 67.53, 72.84 and 73.14%, respectively. With the maceration method, the DD of chitosan increased significantly, from 60.19 to 74.27 and 81.20%, corresponding to NaOH concentrations of 60, 70 and 80%. The solid-phase grinding method applied for half an hour resulted in a DD of 79.49%. The application of the ultrasound grinding method not only enhanced deacetylation but also favoured the depolymerization process. Moreover, maceration for 7 days with 80% NaOH can serve as an alternative method.

  6. Comparison of Standard and Fast Charging Methods for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Petr Chlebis

    2014-01-01

    This paper describes a comparison of standard and fast charging methods used in the field of electric vehicles, as well as a comparison of their efficiency in terms of electrical energy consumption. The comparison was performed on a three-phase buck converter designed for an EV fast-charging station. The results were obtained by both mathematical and simulation methods. The laboratory model of the entire physical application, which will be further used to verify the simulation results, is currently under construction.

  7. A Comparison of Distillery Stillage Disposal Methods

    Directory of Open Access Journals (Sweden)

    V. Sajbrt

    2010-01-01

    This paper compares the main stillage disposal methods from the point of view of technology, economics and energetics. Attention is paid to the disposal of both the solid and the liquid phase. Specifically, the following methods are considered: a) livestock feeding, b) combustion of granulated stillages, c) fertilizer production, d) anaerobic digestion with biogas production and e) chemical pretreatment and subsequent secondary treatment. Other disposal techniques mentioned in the literature (electro-Fenton reaction, electrocoagulation and reverse osmosis) have not been considered, due to their high costs and technological requirements. Energy and economic calculations were carried out for a planned production of 120 m3 of stillage per day in a given distillery. Only specific treatment operating costs (per 1 m3 of stillage) were compared, including operational costs for energy, transport and chemicals. These values were determined for January 31st, 2009. The resulting sequence of cost effectiveness was: 1. chemical pretreatment, 2. combustion of granulated stillage, 3. transportation of stillage to a biogas station, 4. fertilizer production, 5. livestock feeding. This study found that chemical pretreatment of stillage with secondary treatment (a method developed at the Department of Process Engineering, CTU) was more suitable than the other methods. Also, there are some important technical advantages. Using this method, the total operating costs are approximately 1150 EUR/day, i.e. about 9.5 EUR/m3 of stillage. The price of chemicals is the most important item in these costs, representing about 85% of the total operating costs.

  8. Comparison of gestational dating methods and implications ...

    Science.gov (United States)

    OBJECTIVES: Estimating gestational age is usually based on date of last menstrual period (LMP) or clinical estimation (CE); both approaches introduce potential bias. Differences in methods of estimation may lead to misclassification and inconsistencies in risk estimates, particularly if exposure assignment is also gestation-dependent. This paper examines a 'what-if' scenario in which alternative methods are used and attempts to elucidate how method choice affects observed results. METHODS: We constructed two 20-week gestational age cohorts of pregnancies between 2000 and 2005 (New Jersey, Pennsylvania, Ohio, USA) using live birth certificates: one defined preterm birth (PTB) status using CE and one using LMP. Within these, we estimated risk for 4 categories of preterm birth (PTBs per 10^6 pregnancies) and risk differences (RD (95% CIs)) associated with exposure to particulate matter (PM2.5). RESULTS: More births were classified preterm using LMP (16%) compared with CE (8%). RD divergences increased between cohorts as the exposure period approached delivery. Among births between 28 and 31 weeks, week 7 PM2.5 exposure conveyed RDs of 44 (21 to 67) for CE and 50 (18 to 82) for LMP populations, while week 24 exposure conveyed RDs of 33 (11 to 56) and -20 (-50 to 10), respectively. CONCLUSIONS: Different results from analyses restricted to births with both CE and LMP are most likely due to differences in dating methods rather than selection issues. Results are sensitive t

  9. Comparison of some nonlinear smoothing methods

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1977-01-01

    Due to the poor quality of many nuclear medicine images, computer-driven smoothing procedures are frequently employed to enhance the diagnostic utility of these images. While linear methods were first tried, it was discovered that nonlinear techniques produced superior smoothing with little detail suppression. We have compared four methods: Gaussian smoothing (linear), two-dimensional least-squares smoothing (linear), two-dimensional least-squares bounding (nonlinear), and two-dimensional median smoothing (nonlinear). The two-dimensional least-squares procedures have yielded the most satisfactorily enhanced images, with the median smoothers providing quite good images, even in the presence of widely aberrant points.
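
    The linear-versus-nonlinear contrast described above is easy to reproduce in a small sketch (not the authors' code; the toy image and filter settings are assumptions): a Gaussian filter smears an aberrant pixel into its neighbourhood, while a median filter removes it with little detail suppression.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 100.0                  # toy "organ" region
    img = img + rng.poisson(5.0, img.shape)    # counting-statistics noise
    img[10, 10] = 500.0                        # a widely aberrant point

    smooth_linear = gaussian_filter(img, sigma=1.5)  # linear: spreads the outlier
    smooth_median = median_filter(img, size=3)       # nonlinear: rejects it
    ```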

  10. Comparison of power curve monitoring methods

    Directory of Open Access Journals (Sweden)

    Cambron Philippe

    2017-01-01

    Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In past years, important work has been conducted on PCM. Various methodologies have been proposed, each one with interesting results. However, it is difficult to compare these methods because they have been developed using their respective data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, have also been covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than other methodologies and that the effectiveness of the control chart depends on the type of shift observed.
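
    A minimal sketch of model-based PCM in the spirit described above (entirely hypothetical numbers; the paper's models and control charts are not reproduced): fit a binned power curve on a reference period, then run a one-sided CUSUM chart on monitoring residuals and record how long a 5% performance shift takes to detect.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    curve = lambda v: np.clip(1500.0 / (1.0 + np.exp(-(v - 8.0))), 0.0, 1500.0)

    # Reference period: binned (method-of-bins) power curve model
    v_ref = rng.uniform(3.0, 15.0, 2000)
    p_ref = curve(v_ref) + rng.normal(0.0, 30.0, v_ref.size)
    edges = np.arange(3.0, 15.5, 1.0)
    k_ref = np.digitize(v_ref, edges) - 1
    model = np.array([p_ref[k_ref == k].mean() for k in range(edges.size - 1)])

    # Monitoring period with a 5% underperformance shift
    v_new = rng.uniform(3.0, 15.0, 500)
    p_new = 0.95 * curve(v_new) + rng.normal(0.0, 30.0, v_new.size)
    k_new = np.clip(np.digitize(v_new, edges) - 1, 0, model.size - 1)
    resid = p_new - model[k_new]

    s, slack, threshold = 0.0, 10.0, 300.0     # one-sided CUSUM for a decrease
    for t, r in enumerate(resid):
        s = max(0.0, s - r - slack)            # accumulates production shortfall
        if s > threshold:
            print(f"performance shift detected after {t + 1} samples")
            break
    ```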

  11. Asymmetric adjustment

    NARCIS (Netherlands)

    2010-01-01

    A method of adjusting a signal processing parameter for a first hearing aid and a second hearing aid forming parts of a binaural hearing aid system to be worn by a user is provided. The binaural hearing aid system comprises a user specific model representing a desired asymmetry between a first ear

  12. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than...... Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...... standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use...

  13. A comparison of methods for cascade prediction

    OpenAIRE

    Guo, Ruocheng; Shakarian, Paulo

    2016-01-01

    Information cascades exist in a wide variety of platforms on the Internet. A very important real-world problem is to identify which information cascades can go viral. A system addressing this problem can be used in a variety of applications including public health, marketing and counter-terrorism. A cascade can be considered as a compound of the social network and the time series. However, in the related literature where methods for solving the cascade prediction problem were proposed, the experimen...

  14. Comparison of testing methods for particulate filters

    International Nuclear Information System (INIS)

    Ullmann, W.; Przyborowski, S.

    1983-01-01

    Four testing methods for particulate filters were compared by using the test rigs of the National Board of Nuclear Safety and Radiation Protection: 1) Measurement of filter penetration P as a function of particle size d by using a polydisperse NaCl test aerosol and a scintillation particle counter; 2) Modified sodium flame test for measurement of total filter penetration P for various polydisperse NaCl test aerosols; 3) Measurement of total filter penetration P for a polydisperse NaCl test aerosol labelled with short-lived radon daughter products; 4) Measurement of total filter penetration P for a special paraffin oil test aerosol (oil fog test used in the FRG according to DIN 24 184, test aerosol A). The investigations were carried out on sheets of glass fibre paper (five grades of paper). Detailed information about the four testing methods and the particle size distributions used is given. The differing results of the various methods form the basis for the discussion of the most important parameters which influence the filter penetration P. The course of the function P=f(d) shows the great influence of particle size. As expected, a strong dependence was also found on the test aerosol as well as on the principle and the measuring range of the aerosol-measuring device. The differences between the results of the various test methods are greater the lower the penetration. The use of NaCl test aerosols with various particle size distributions gives large differences in the respective penetration values. On the basis of these results and the values given by Dorman, conclusions are drawn about the investigation of particulate filters, both for the determination of filter penetration P and for the leak test of installed filters.

  15. Comparison of methods for estimating premorbid intelligence

    OpenAIRE

    Bright, Peter; van der Linde, Ian

    2018-01-01

    To evaluate the impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or 'premorbid') estimate of a patient's general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded 'hold/no hold' tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...

  16. COMPARISON OF DIGITAL IMAGE STEGANOGRAPHY METHODS

    Directory of Open Access Journals (Sweden)

    S. A. Seyyedi

    2013-01-01

    Steganography is a method of hiding information in other information of a different format (the container). There are many steganography techniques with various types of container. On the Internet, digital images are the most popular and frequently used containers. We consider the main image steganography techniques and their advantages and disadvantages. We also identify the requirements of a good steganography algorithm and compare various such algorithms.

  17. Analysis and comparison of biometric methods

    OpenAIRE

    Zatloukal, Filip

    2011-01-01

    The thesis deals with biometrics and biometric systems and the possibility of using these systems in the enterprise. The aim of this study is an analysis and description of selected types of biometric identification methods and their advantages and shortcomings. The work is divided into two parts. The first part is theoretical; it describes the basic concepts of biometrics, biometric identification criteria, currently used identification systems, the ways biometric systems are used, performance measurem...

  18. A Water Hammer Protection Method for Mine Drainage System Based on Velocity Adjustment of Hydraulic Control Valve

    Directory of Open Access Journals (Sweden)

    Yanfei Kou

    2016-01-01

    Water hammer analysis is a fundamental part of the pipeline system design process for water distribution networks. The main constraints for a mine drainage system are the limited space and the high cost of changing equipment and pipelines. In order to solve the protection problem of valve-closing water hammer for mine drainage systems, a water hammer protection method based on velocity adjustment of an HCV (Hydraulic Control Valve) is proposed in this paper. The mathematical model of water hammer fluctuations is established based on the characteristic line method (method of characteristics). Then, boundary conditions of water hammer control for the mine drainage system are determined and a simplified model is established. The optimal adjustment strategy is solved from the mathematical model of multistage valve closing. Taking a mine drainage system as an example, comparisons between simulation and experimental results show that the proposed method and the optimized valve-closing strategy are effective.
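
    The abstract's mathematical model rests on the method of characteristics; the fragment below is a generic interior-node MOC update for a single pipe (textbook form, not the paper's model; all parameter values are hypothetical and boundary conditions, such as the closing HCV, are omitted).

    ```python
    import numpy as np

    a, g = 1200.0, 9.81            # wave speed [m/s], gravitational acceleration
    L, N = 1000.0, 50              # pipe length [m], number of reaches
    D, f = 0.3, 0.02               # diameter [m], Darcy friction factor
    dx = L / N
    dt = dx / a                    # Courant condition for the characteristics
    A = np.pi * D ** 2 / 4.0
    B = a / (g * A)                # characteristic impedance term
    R = f * dx / (2.0 * g * D * A ** 2)   # friction term

    H = np.full(N + 1, 100.0)      # piezometric head [m] along the pipe
    Q = np.full(N + 1, 0.10)       # discharge [m^3/s] along the pipe

    def moc_interior_step(H, Q):
        """Advance interior nodes one dt along the C+ and C- characteristics."""
        Hn, Qn = H.copy(), Q.copy()
        for i in range(1, N):
            Cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
            Cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
            Hn[i] = 0.5 * (Cp + Cm)
            Qn[i] = (Cp - Cm) / (2.0 * B)
        return Hn, Qn
    ```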

  19. A Comparison of the Crustal Deformation Predicted by Glacial Isostatic Adjustment to Seismicity in the Baffin Region of Northern Canada

    Science.gov (United States)

    James, T. S.; Schamehorn, T.; Bent, A. L.; Allen, T. I.; Mulder, T.; Simon, K.

    2016-12-01

    The horizontal crustal strain-rates induced by glacial isostatic adjustment (GIA) in the northern Canada and western Greenland region are compared to the spatial pattern of seismicity. For the comparison, an updated seismicity catalogue was created from the 2010 version of the NRCan Seismic Hazard Earthquake Epicentre File (SHEEF2010) catalogue and the Greenland Ice Sheet Monitoring Network (GLISN) catalogue of the Geological Survey of Denmark and Greenland (GEUS). Crustal motion rates were computed with the Innu/Laur16 ice-sheet history and the VM5a viscosity profile (Simon et al., 2015; 2016). This GIA model optimizes the fit to relative sea-level and vertical crustal motion measurements around Hudson Bay and in the Canadian Arctic Archipelago (CAA). A region in Baffin Bay with historically high seismicity, including the 1933 M 7.4 and the 1934 and 1945 M 6.5 earthquakes, features high predicted GIA strain-rates. Elsewhere, agreement is not strong, with zones of seismicity occurring where predicted horizontal crustal strain-rates are small and large crustal strain-rates predicted where earthquake occurrence is muted. For example, large compressional crustal strain-rates are predicted beneath seismically quiescent portions of the Greenland ice sheet. Similarly, large predicted extensional strain-rates occur around southern Hudson Bay and the Foxe Basin, which are also regions of relative seismic quiescence. Additional factors to be considered include the orientation of the background stress field relative to the predicted stress changes, and potential pre-existing zones of lithospheric weakness.

  20. Comparison of methods for calculating decay lifetimes

    International Nuclear Information System (INIS)

    Tobocman, W.

    1978-01-01

    A simple scattering model is used to test alternative methods for calculating decay lifetimes, or equivalently, resonance widths. We consider the scattering of s-wave particles by a square well with a square barrier. Exact values for resonance energies and resonance widths are compared with values calculated from Wigner-Weisskopf perturbation theory and from the Garside-MacDonald projection operator formalism. The Garside-MacDonald formalism gives essentially exact results while the predictions of the Wigner-Weisskopf formalism are fairly poor

  1. Comparison of accounting methods for business combinations

    Directory of Open Access Journals (Sweden)

    Jaroslav Sedláček

    2012-01-01

    The revised accounting rules applicable to business combinations, in force since July 1st, 2009, are the result of several years of effort at convergence by the U.S. and international financial accounting standards committees. Following this harmonization of global accounting procedures, Czech accounting regulations have also been revised and implemented. In our research we wanted to see how the changes can affect the strategy and timing of business combinations. The comparative analysis is mainly focused on the differences between U.S. and international accounting policies and Czech accounting regulations. Key areas of the analysis and synthesis are the identification of a business combination, accounting methods for business combinations, and goodwill recognition. The result is an assessment of the impact of the identified differences on the reported financial position and profit or loss of a company.

  2. Comparison of microstickies measurement methods. Part I, sample preparation and measurement methods

    Science.gov (United States)

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R.A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    Recently, we completed a project on the comparison of macrostickies measurement methods. Based on the success of the project, we decided to embark on this new project on comparison of microstickies measurement methods. When we started this project, there were some concerns and doubts principally due to the lack of an accepted definition of microstickies. However, we...

  3. Comparison of sine dwell and broadband methods for modal testing

    Science.gov (United States)

    Chen, Jay-Chung

    1989-01-01

    The objectives of modal tests for large complex spacecraft structural systems are outlined. The comparison criteria for the modal test methods, namely, the broadband excitation and the sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantage or disadvantage of each method is examined. The usefulness or shortcomings of the methods are given from a practical engineering viewpoint.

  4. Comparison of genetic algorithms with conjugate gradient methods

    Science.gov (United States)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
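
    A toy contrast of the two families is easy to set up (a minimal sketch, not the 1972 implementation; the Rastrigin-style test function and all hyperparameters are assumptions): a simple genetic loop keeps making progress on a noisy multimodal surface where a local gradient step would tend to stall in the nearest basin.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def f(x):                                    # multimodal test function
        return float(np.sum(x**2 - 10.0*np.cos(2.0*np.pi*x)) + 20.0
                     + rng.normal(0.0, 0.5))     # additive observation noise

    pop = rng.uniform(-5.0, 5.0, (50, 2))        # random initial population
    for _ in range(200):
        fitness = np.array([f(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[:10]]            # selection
        parents = elite[rng.integers(0, 10, (50, 2))]    # pick parent pairs
        pop = parents.mean(axis=1)                       # blend crossover
        pop += rng.normal(0.0, 0.1, pop.shape)           # mutation
    best = pop[np.argmin([f(ind) for ind in pop])]
    ```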

  5. Comparison of nuclear analytical methods with competitive methods

    International Nuclear Information System (INIS)

    1987-10-01

    The use of nuclear analytical techniques, especially neutron activation analysis, already has a 50-year history. Today several sensitive and accurate non-nuclear trace element analytical techniques are available, and new methods are continuously being developed. The IAEA is supporting the development of nuclear analytical laboratories in its Member States. In order to be able to advise developing countries which methods to use in different applications, it is important to know the present status and development trends of nuclear analytical methods, and what their benefits, drawbacks and recommended fields of application are compared with other, non-nuclear techniques. In order to get an answer to these questions the IAEA convened this Advisory Group Meeting. This volume is the outcome of the presentations and discussions of the meeting. A separate abstract was prepared for each of the 21 papers. Refs, figs, tabs

  6. A simple statistical method for catch comparison studies

    DEFF Research Database (Denmark)

    Holst, René; Revill, Andrew

    2009-01-01

    For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve...... with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns.
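
    A fixed-effects sketch of the core of this approach (the paper's GLMM additionally includes a random haul effect, which is omitted here; all counts below are simulated): model the proportion of fish of each length caught in the test codend with a polynomial logistic curve.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    length = np.arange(20.0, 41.0)                   # length classes [cm]
    p_true = 1.0 / (1.0 + np.exp(-(length - 30.0) / 3.0))
    n_total = rng.integers(20, 80, length.size)      # fish per length class
    n_test = rng.binomial(n_total, p_true)           # caught in the test codend

    # Second-order polynomial on the logit scale
    X = sm.add_constant(np.column_stack([length, length ** 2]))
    y = np.column_stack([n_test, n_total - n_test])
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    prop_curve = fit.predict(X)                      # fitted proportion at length
    ```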

  7. Comparison of methods for intestinal histamine application

    DEFF Research Database (Denmark)

    Vind, S; Søndergaard, I; Poulsen, L K

    1991-01-01

    The study was conducted to investigate whether introduction of histamine in enterosoluble capsules produced the same amount of urinary histamine metabolites as that found after application of histamine through a duodeno-jejunal tube, and secondly, to examine whether a histamine-restrictive or a fasting diet is necessary. ...... All other intervals did not differ significantly between the two challenge regimens. Fasting (water only) and a histamine-restrictive diet versus a non-restrictive diet did not affect the urinary MIAA. MIAA was significantly higher overall during the first 24 h after challenge than in any other fraction. We conclude that oral administration of enterosoluble capsules is an easy and appropriate method for intestinal histamine challenge. Fasting and histamine-restrictive diets are not necessary, but subjects should record unexpected responses in a food and symptom diary.

  8. Comparison of different methods for thoron progeny measurement

    International Nuclear Information System (INIS)

    Bi Lei; Zhu Li; Shang Bing; Cui Hongxing; Zhang Qingzhao

    2009-01-01

    Four popular methods for thoron progeny measurement are discussed, covering the detector, principle, preconditions, calculation, and advantages and disadvantages of each. Comparison experiments were made in a mine and in houses with high background in Yunnan Province. Since indoor thoron progeny concentrations change with time markedly and without a regular pattern, the α-track method is recommended in the area of radiation protection for environmental detection and assessment. (authors)

  9. Comparison of Nested-PCR technique and culture method in ...

    African Journals Online (AJOL)

    2010-04-05

    The aim of the present study was to evaluate the diagnostic value of nested PCR in genitourinary ... method. Based on the obtained results, the positivity rate of urine samples in this study was 5.0% by using the culture and PCR methods and 2.5% for acid-fast staining.

  10. Comparison of estimation methods for fitting weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The results revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
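
    For illustration of what such a comparison involves (a sketch, not the study's data; the simulated diameters and the zero location are assumptions), a maximum likelihood fit of a two-parameter Weibull in SciPy looks like this:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # Simulated stand diameters [cm] drawn from a known Weibull for checking
    dbh = stats.weibull_min.rvs(c=2.2, scale=25.0, size=300, random_state=rng)

    # Maximum likelihood estimates of shape and scale (location fixed at 0)
    shape_mle, _, scale_mle = stats.weibull_min.fit(dbh, floc=0)
    print(shape_mle, scale_mle)   # should land near the true (2.2, 25.0)
    ```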

  11. Method for optimum determination of adjustable parameters in the boiling water reactor core simulator using operating data on flux distribution

    International Nuclear Information System (INIS)

    Kiguchi, T.; Kawai, T.

    1975-01-01

    A method has been developed to optimally and automatically determine the adjustable parameters of the boiling water reactor three-dimensional core simulator FLARE. The steepest gradient method is adopted for the optimization. The parameters are adjusted to best fit the operating data on power distribution measured by traversing in-core probes (TIP). The average error in the calculated TIP readings, normalized by the core average, is 0.053 at rated power. A k-infinity correction term has also been derived theoretically to reduce the relatively large error in the calculated TIP readings near the tips of control rods, which is induced by the coarseness of the mesh points. By introducing this correction, the average error decreases to 0.047. The void-quality relation is recognized to be a function of coolant flow rate. The relation is estimated to fit the measured distributions of TIP readings at partial power states.
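
    The parameter search described above can be sketched generically (a toy linear stand-in for FLARE, not the actual simulator; the response matrix and step size are assumptions): steepest descent on the RMS error between calculated and measured TIP readings, with numerical gradients.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.uniform(0.5, 1.5, (40, 3))      # toy response: 40 TIP readings,
    true_params = np.array([1.0, 0.8, 1.2]) # 3 adjustable parameters
    tip_measured = A @ true_params

    def rms_error(p):
        r = (A @ p - tip_measured) / tip_measured.mean()   # core-average norm
        return np.sqrt(np.mean(r ** 2))

    def gradient(p, h=1e-6):                # forward-difference gradient
        return np.array([(rms_error(p + h * np.eye(3)[k]) - rms_error(p)) / h
                         for k in range(3)])

    p = np.array([0.9, 0.9, 0.9])           # initial guess
    for _ in range(500):                    # steepest-gradient iterations
        p = p - 0.5 * gradient(p)
    ```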

  12. System and method of adjusting the equilibrium temperature of an inductively-heated susceptor

    Science.gov (United States)

    Matsen, Marc R; Negley, Mark A; Geren, William Preston

    2015-02-24

    A system for inductively heating a workpiece may include an induction coil, at least one susceptor face sheet, and a current controller. The induction coil may be configured to conduct an alternating current and generate a magnetic field in response to the alternating current. The susceptor face sheet may be configured to have a workpiece positioned therewith. The susceptor face sheet may be formed of a ferromagnetic alloy having a Curie temperature and being inductively heatable to an equilibrium temperature approaching the Curie temperature in response to the magnetic field. The current controller may be coupled to the induction coil and may be configured to adjust the alternating current in a manner causing a change in at least one heating parameter of the susceptor face sheet.

  13. Adjusting for treatment switching in randomised controlled trials - A simulation study and a simplified two-stage method.

    Science.gov (United States)

    Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J

    2017-04-01

    Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs), whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
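
    For orientation, the counterfactual construction underlying the RPSFTM mentioned above can be written in a few lines (a schematic sketch, not the authors' simulation code; in practice the acceleration parameter psi is found by g-estimation, e.g. a grid search until a rank test across arms is balanced).

    ```python
    import math

    def counterfactual_survival(t_total, t_switch, psi, switched):
        """RPSFTM-style untreated survival time U = T_off + exp(psi) * T_on.

        t_total:  observed survival time
        t_switch: time at which a control patient switched onto treatment
        psi:      log acceleration factor (psi < 0 if treatment extends life)
        """
        if not switched:
            return t_total
        return t_switch + math.exp(psi) * (t_total - t_switch)

    # e.g. a control patient who switched at 12 months and died at 30 months,
    # evaluated at psi = -0.3, has a shorter counterfactual untreated time:
    u = counterfactual_survival(30.0, 12.0, -0.3, switched=True)
    ```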

  14. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Science.gov (United States)

    Rueda-Ayala, Victor; Weis, Martin; Keller, Martina; Andújar, Dionisio; Gerhards, Roland

    2013-01-01

    Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities, in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached in the front and at the rear or sides of the harrow. PMID:23669712

  15. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Directory of Open Access Journals (Sweden)

    Roland Gerhards

    2013-05-01

    Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities, in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached in the front and at the rear or sides of the harrow.

  16. Comparison between Evapotranspiration Fluxes Assessment Methods

    Science.gov (United States)

    Casola, A.; Longobardi, A.; Villani, P.

    2009-11-01

    Knowledge of the hydrological processes acting in the water balance is essential for a rational water resources management plan. Among these, water losses as vapour, in the form of evapotranspiration, play an important role in the water balance and in the heat transfers between the land surface and the atmosphere. Mass and energy interactions between soil, atmosphere and vegetation, in fact, influence all hydrological processes, modifying rainfall interception, infiltration, evapotranspiration, surface runoff and groundwater recharge. A number of methods have been developed in the scientific literature for modelling evapotranspiration. They can be divided into three main groups: i) traditional meteorological models, ii) energy flux balance models, considering the interaction between vegetation and the atmosphere, and iii) remote sensing based models. The present analysis first performs a study of flux directions and an evaluation of energy balance closure in a typical Mediterranean short vegetation area, using data series recorded from an eddy covariance station located in the Campania region, Southern Italy. The analysis was performed on different seasons of the year with the aim of assessing the impact of climatic forcing features on the flux balance, evaluating the smaller imbalance, and highlighting influencing factors and sampling errors in balance closure. The present study also concerns evapotranspiration flux assessment at the point scale. Evapotranspiration is evaluated both from empirical relationships (Penman-Monteith, Penman FAO, Priestley-Taylor) calibrated with measured energy fluxes at the mentioned experimental site, and from measured latent heat data scaled by the latent heat of vaporization. These results are compared with traditional and reliable well-known models at the plot scale (Coutagne, Turc, Thornthwaite).
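
    One of the named point-scale relationships is compact enough to show directly; a sketch of the Priestley-Taylor estimate of latent heat flux (standard textbook form with the usual alpha = 1.26; the constants are generic approximations, not values calibrated at the experimental site):

    ```python
    import math

    def priestley_taylor(rn, g, t_air, alpha=1.26):
        """Priestley-Taylor latent heat flux [W/m2].

        rn, g: net radiation and soil heat flux [W/m2]; t_air: air temp [degC].
        """
        es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))  # sat. vp [kPa]
        delta = 4098.0 * es / (t_air + 237.3) ** 2  # slope of sat. vp curve
        gamma = 0.066                               # psychrometric const [kPa/degC]
        return alpha * delta / (delta + gamma) * (rn - g)

    # e.g. midday Mediterranean conditions: Rn = 500, G = 50 W/m2, 25 degC
    le = priestley_taylor(500.0, 50.0, 25.0)   # roughly 420 W/m2
    ```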

  17. Comparison of steam generator methods in PISC

    International Nuclear Information System (INIS)

    Lahdenperae, K.; Kankare, M.

    1996-01-01

    The main objective of the study (PISC III, Action 5) was the experimental evaluation of the performance of methods used in the in-service inspection of steam generator tubes in nuclear power plants. The study was organized by the Joint Research Centre of the European Community (JRC). The round robin test with blind boxes started in 1991. During the study, training boxes and blind boxes were circulated among 29 laboratories in Europe, Japan and the USA. The boxes contained steam generator tubes with artificial and natural (chemically induced) flaws. The tube material was Inconel. The blind boxes contained 66 tubes and 95 flaws. All flaws were introduced at different discontinuities: under support plates, above the tube sheet and in U-bends. The flaws included volumetric flaws (wastage, pitting, wear), axial and circumferential notches, and chemically induced SCC cracks and IGA. After the round robin test the reference laboratory performed the destructive examination of the reported flaws. The flaw detection probability (FDP) for all flaws, for teams inspecting all tubes, was 60-85%. The detection of flaws deeper than 40% of the wall thickness was good. Flaws with a depth of less than 20% were not detected. When all flaws were considered, depth sizing was found to have a wide dispersion. Similarly, measured lengths did not as a rule correlate with true lengths. The classification of flaws into cracks and volumetric flaws was not very successful, the correct classification probability being only about 70%. The evaluation of the flaws showed some shortcomings. The correct rejection probability was at best 83% for teams inspecting all boxes. (3 refs.)

  18. Drive Beam Quadrupoles for the CLIC Project: a Novel Method of Fiducialisation and a New Micrometric Adjustment System

    CERN Document Server

    Duquenne, Mathieu; Sandomierski, Jacek; Sosin, Mateusz; Rude, Vivien

    2014-01-01

    This paper presents a new method of fiducialisation applied to determine the magnetic axis of the Drive Beam quadrupole of the CLIC project with respect to external alignment fiducials, within micrometric accuracy and precision. It also introduces a new micrometric adjustment system along 5 degrees of freedom, developed for the same Drive Beam quadrupole. The combination of both developments opens very interesting perspectives for a simpler and more accurate alignment of the quadrupoles.

  19. The Comparison and Relationship between Religious Orientation and Practical Commitment to Religious Beliefs with Marital Adjustment in Seminary Scholars and University Students

    Directory of Open Access Journals (Sweden)

    رویا رسولی

    2015-04-01

    Spirituality and faith are powerful aspects of human experience, so it is important to consider the relation between faith, beliefs, and marriage. The purpose of this study was to compare the relationship between religious orientation and practical commitment to religious beliefs with marital adjustment among seminary scholars and Yazd University students. The research sample consisted of 200 subjects, comprising 50 student couples and 50 seminary scholar couples, collected via convenience sampling from Yazd University and the seminary. Research instruments included: (1) the Religious Orientation Scale, (2) a Test of Practical Commitment to Religious Beliefs, and (3) the Dyadic Adjustment Scale. Correlation analyses showed a relationship between religious orientation and marital adjustment: marital adjustment correlated positively with religiosity and negatively with unconstructed religiosity. There was also a relationship between practical commitment to religious beliefs and marital adjustment in the groups, and this relationship was stronger than that between religious orientation and marital adjustment. The results of independent t-tests showed significant differences between university students and seminary scholars in religious orientation, practical commitment to religious beliefs, and marital adjustment; practical commitment to religious beliefs, marital adjustment and religious orientation were all higher in seminary scholars than in students. Marital adjustment in seminary scholars was higher than in students owing to marital satisfaction, as religious persons hold faith beliefs. We conclude that faith beliefs affect marital satisfaction, marital adjustment, conflict solving, and forgiveness. Negative beliefs about divorce and the belief that God supports marriage may explain the relationship between commitment to religious beliefs and

  20. Introducing conjoint analysis method into delayed lotteries studies: its validity and time stability are higher than in adjusting.

    Science.gov (United States)

    Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław

    2015-01-01

    Delayed lotteries are much more common in everyday life than are pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which hypothetically is more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2), and they are more stable over time (Study 2) compared to adjusting. We discuss these findings, despite the exploratory character of the reported studies, by suggesting that future research on delayed lotteries should be cross-validated using both methods.
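
    For contrast with conjoint analysis, the adjusting procedure discussed above is easy to sketch (a generic titration routine, not the authors' exact protocol; amounts and step counts are arbitrary): the immediate amount is bisected up or down after each choice until it approximates the indifference point of the delayed lottery.

    ```python
    def adjusting_indifference(prefers_delayed, delayed_amount=100.0, steps=6):
        """Titrate an immediate amount toward indifference with a delayed lottery.

        prefers_delayed(immediate) -> True if the participant still picks the
        delayed/risky option over receiving `immediate` for sure, now.
        """
        immediate = delayed_amount / 2.0
        step = delayed_amount / 4.0
        for _ in range(steps):
            if prefers_delayed(immediate):
                immediate += step        # sweeten the immediate option
            else:
                immediate -= step
            step /= 2.0
        return immediate                 # estimated indifference point

    # e.g. a simulated participant whose subjective value of the lottery is 37:
    print(adjusting_indifference(lambda x: x < 37.0))
    ```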

  1. Introducing conjoint analysis method into delayed lotteries studies: Its validity and time stability are higher than in adjusting

    Directory of Open Access Journals (Sweden)

    Michal Bialek

    2015-01-01

    Delayed lotteries are much more common in everyday life than are pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which hypothetically is more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2), and they are more stable over time (Study 2) compared to adjusting. We discuss these findings, despite the exploratory character of the reported studies, by suggesting that future research on delayed lotteries should be cross-validated using both methods.

  2. Risk adjustment models for interhospital comparison of CS rates using Robson's ten group classification system and other socio-demographic and clinical variables.

    Science.gov (United States)

    Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A

    2012-06-21

    Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and for clinical and socio-demographic variables of the mother and the fetus, is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models: the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour) and III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour) and to a minor extent in groups II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS classification is useful for inter-hospital comparison of CS rates, but
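
    A sketch of the M1-style adjustment (simulated delivery-level data, not the Emilia-Romagna records; modified Poisson regression with robust standard errors is one standard way to obtain adjusted relative risks for a binary outcome):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 5000
    df = pd.DataFrame({
        "hospital": rng.integers(0, 4, n),       # 0 is the reference hospital
        "robson": rng.integers(1, 11, n),        # Ten Group Classification
    })
    risk = np.clip(0.10 + 0.02 * df["robson"] + 0.05 * (df["hospital"] == 2),
                   0.0, 1.0)
    df["cs"] = rng.binomial(1, risk)             # caesarean section indicator

    m1 = smf.glm("cs ~ C(hospital) + C(robson)", data=df,
                 family=sm.families.Poisson()).fit(cov_type="HC1")
    adjusted_rr = np.exp(m1.params.filter(like="hospital"))
    ```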

  3. Sibling comparison of differential parental treatment in adolescence: gender, self-esteem, and emotionality as mediators of the parenting-adjustment association.

    Science.gov (United States)

    Feinberg, M E; Neiderhiser, J M; Simmens, S; Reiss, D; Hetherington, E M

    2000-01-01

    This study employs findings from social comparison research to investigate adolescents' comparisons with siblings with regard to parental treatment. The sibling comparison hypothesis was tested on a sample of 516 two-child families by examining whether gender, self-esteem, and emotionality, which have been found in previous research to moderate social comparison, also moderate sibling comparison as reflected by siblings' own evaluations of differential parental treatment. Results supported a moderating effect for self-esteem and emotionality but not gender. The sibling comparison process was further examined by using a structural equation model in which parenting toward each child was associated with the adjustment of that child and of the child's sibling. Evidence of the "sibling barricade" effect, that is, parenting toward one child being linked with opposite results for the child's sibling as for the target child, was found in a limited number of cases and interpreted as reflecting a sibling comparison process. For older siblings, emotionality and self-esteem moderated the sibling barricade effect, but in the opposite direction to that predicted. Results are discussed in terms of older siblings' increased sensitivity to parenting, as well as the report of differential parenting reflecting the child's level of comfort and benign understanding of differential parenting, which buffers the child against environmental vicissitudes evoking sibling comparison processes.

  4. Proximal Alternating Direction Method with Relaxed Proximal Parameters for the Least Squares Covariance Adjustment Problem

    Directory of Open Access Journals (Sweden)

    Minghua Xu

    2014-01-01

    We consider the problem of seeking a symmetric positive semidefinite matrix in a closed convex set to approximate a given matrix. This problem may arise in several areas of numerical linear algebra or come from the finance industry or statistics, and thus has many applications. For solving this class of matrix optimization problems, many methods have been proposed in the literature. The proximal alternating direction method is one of those methods, and it can be easily applied to solve these matrix optimization problems. Generally, the proximal parameters of the proximal alternating direction method are required to be greater than zero. In this paper, we conclude that this restriction on the proximal parameters can be relaxed for solving this kind of matrix optimization problem. Numerical experiments also show that the proximal alternating direction method with the relaxed proximal parameters is convergent and generally has a better performance than the classical proximal alternating direction method.
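
    To make the setting concrete, here is a plain alternating direction sketch for one instance of the problem class, least squares covariance adjustment with unit diagonal (classical fixed penalty parameter; the paper's relaxed-parameter variant is not reproduced):

    ```python
    import numpy as np

    def nearest_psd_unit_diag(C, rho=1.0, iters=300):
        """ADMM for: minimize 0.5*||X - C||_F^2  s.t.  X is PSD, diag(X) = 1."""
        X, Z, U = C.copy(), C.copy(), np.zeros_like(C)
        for _ in range(iters):
            # X-update: elementwise prox of the quadratic, diagonal pinned to 1
            X = (C + rho * (Z - U)) / (1.0 + rho)
            np.fill_diagonal(X, 1.0)
            # Z-update: Euclidean projection of X + U onto the PSD cone
            w, V = np.linalg.eigh(X + U)
            Z = (V * np.clip(w, 0.0, None)) @ V.T
            # dual update
            U += X - Z
        return Z

    # e.g. adjust an indefinite pseudo-correlation matrix
    C = np.array([[1.0, 0.9, 0.9], [0.9, 1.0, -0.9], [0.9, -0.9, 1.0]])
    X_star = nearest_psd_unit_diag(C)
    ```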

  5. The Adjusted Net Asset Valuation Method – Connecting the dots between Theory and Practice

    Directory of Open Access Journals (Sweden)

    Silvia Ghiță-Mitrescu

    2016-01-01

    The goal of this paper is to present the theoretical background of this method as well as its practical application. We will first analyze the main theoretical issues regarding the corrections that need to be performed in order to transform the book value of assets and liabilities into their market value, afterwards proceeding to an example of how this method is applied to the balance sheet of a company. Finally, we will conclude on the importance of the method for a company's evaluation process.

  6. Meta-analysis: adjusted indirect comparison of drug-eluting bead transarterial chemoembolization versus 90Y-radioembolization for hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Ludwig, Johannes M.; Xing, Minzhi; Zhang, Di; Kim, Hyun S.

    2017-01-01

    To investigate comparative effectiveness of drug-eluting bead transarterial chemoembolization (DEB-TACE) versus Yttrium-90 (90Y)-radioembolization for hepatocellular carcinoma (HCC). Studies comparing conventional (c)TACE versus 90Y-radioembolization or DEB-TACE for HCC treatment were identified using PubMed/Medline, Embase, and Cochrane databases. The adjusted indirect meta-analytic method for effectiveness comparison of DEB-TACE versus 90Y-radioembolization was used. Wilcoxon rank-sum test was used to compare baseline characteristics. A priori defined sensitivity analysis of stratified study subgroups was performed for primary outcome analyses. Publication bias was tested by Egger's and Begg's tests. Fourteen studies comparing DEB-TACE or 90Y-radioembolization with cTACE were included. Analysis revealed a 1-year overall survival benefit for DEB-TACE over 90Y-radioembolization (79 % vs. 54.8 %; OR: 0.57; 95 %CI: 0.355-0.915; p = 0.02; I-squared: 0 %; p > 0.5), but not for the 2-year (61 % vs. 34 %; OR: 0.65; 95%CI: 0.294-1.437; p = 0.29) and 3-year survival (56.4 % vs. 20.9 %; OR: 0.713; 95 % CI: 0.21-2.548; p = 0.62). There was significant heterogeneity in the 2- and 3-year survival analyses. The pooled median overall survival was longer for DEB-TACE (22.6 vs. 14.7 months). There was no significant difference in tumour response rate. DEB-TACE and 90Y-radioembolization are efficacious treatments for patients suffering from HCC; DEB-TACE demonstrated survival benefit at 1-year compared to 90Y-radioembolization but direct comparison is warranted for further evaluation. (orig.)
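
    The adjusted indirect comparison at the heart of such an analysis follows the familiar Bucher construction (shown here as a generic sketch with a 1.96 normal quantile and purely illustrative inputs; the authors' exact meta-analytic weighting is not reproduced): effects of A vs. C and B vs. C are contrasted on the log odds ratio scale, and their variances add.

    ```python
    import math

    def bucher_indirect(or_ac, ci_ac, or_bc, ci_bc, z=1.96):
        """Indirect OR of A vs. B from A-vs-C and B-vs-C trial summaries.

        ci_* are (lower, upper) 95% confidence limits of each odds ratio.
        """
        se_ac = (math.log(ci_ac[1]) - math.log(ci_ac[0])) / (2.0 * z)
        se_bc = (math.log(ci_bc[1]) - math.log(ci_bc[0])) / (2.0 * z)
        d = math.log(or_ac) - math.log(or_bc)
        se = math.sqrt(se_ac ** 2 + se_bc ** 2)
        return math.exp(d), (math.exp(d - z * se), math.exp(d + z * se))

    # Illustrative (invented) inputs: A vs. C OR 0.75 (0.60-0.94) and
    # B vs. C OR 1.30 (0.90-1.90) give an indirect A vs. B OR near 0.58.
    or_ab, ci_ab = bucher_indirect(0.75, (0.60, 0.94), 1.30, (0.90, 1.90))
    ```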

  7. Meta-analysis: adjusted indirect comparison of drug-eluting bead transarterial chemoembolization versus 90Y-radioembolization for hepatocellular carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Ludwig, Johannes M.; Xing, Minzhi [Yale School of Medicine, Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Zhang, Di [University of Pittsburgh Graduate School of Public Health, Department of Biostatistics, Pittsburgh, PA (United States); Kim, Hyun S. [Yale School of Medicine, Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Yale School of Medicine, Yale Cancer Center, New Haven, CT (United States)

    2017-05-15

    To investigate comparative effectiveness of drug-eluting bead transarterial chemoembolization (DEB-TACE) versus Yttrium-90 (90Y)-radioembolization for hepatocellular carcinoma (HCC). Studies comparing conventional (c)TACE versus 90Y-radioembolization or DEB-TACE for HCC treatment were identified using PubMed/Medline, Embase, and Cochrane databases. The adjusted indirect meta-analytic method for effectiveness comparison of DEB-TACE versus 90Y-radioembolization was used. Wilcoxon rank-sum test was used to compare baseline characteristics. A priori defined sensitivity analysis of stratified study subgroups was performed for primary outcome analyses. Publication bias was tested by Egger's and Begg's tests. Fourteen studies comparing DEB-TACE or 90Y-radioembolization with cTACE were included. Analysis revealed a 1-year overall survival benefit for DEB-TACE over 90Y-radioembolization (79 % vs. 54.8 %; OR: 0.57; 95 %CI: 0.355-0.915; p = 0.02; I-squared: 0 %; p > 0.5), but not for the 2-year (61 % vs. 34 %; OR: 0.65; 95%CI: 0.294-1.437; p = 0.29) and 3-year survival (56.4 % vs. 20.9 %; OR: 0.713; 95 % CI: 0.21-2.548; p = 0.62). There was significant heterogeneity in the 2- and 3-year survival analyses. The pooled median overall survival was longer for DEB-TACE (22.6 vs. 14.7 months). There was no significant difference in tumour response rate. DEB-TACE and 90Y-radioembolization are efficacious treatments for patients suffering from HCC; DEB-TACE demonstrated survival benefit at 1-year compared to 90Y-radioembolization but direct comparison is warranted for further evaluation. (orig.)

  8. Comparison of hurricane exposure methods and associations with county fetal death rates, adjusting for environmental quality

    Science.gov (United States)

    Adverse effects of hurricanes are increasing as coastal populations grow and events become more severe. Hurricane exposure during pregnancy can influence fetal death rates through mechanisms related to healthcare, infrastructure disruption, nutrition, and injury. Estimation of hu...

  9. Rationale of a quick adjustment method for crystal orientation in oscillation photography

    International Nuclear Information System (INIS)

    Suh, I.H.; Suh, J.M.; Ko, T.S.

    1988-01-01

    The rationale for a convenient crystal orientation method for oscillation photography is presented. The method involves the measurement of the deviations of reflection spots from the equator. These deviations are added or subtracted to give the horizontal and vertical arc corrections. (orig.)

  10. INTER LABORATORY COMBAT HELMET BLUNT IMPACT TEST METHOD COMPARISON

    Science.gov (United States)

    2018-03-26

    Impact-test data were collected per SAE standard J211-1, Instrumentation for Impact Test [4]. Although the entire curve is collected, the interest of this project team solely... Final report by Tony J. Kayhart, Charles A. Hewitt and Jonathan Cyganik, March 2018.

  11. Shaft adjuster

    Science.gov (United States)

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

  12. A prospective crossover comparison of neurally adjusted ventilatory assist and pressure-support ventilation in a pediatric and neonatal intensive care unit population.

    LENUS (Irish Health Repository)

    Breatnach, Cormac

    2012-02-01

    OBJECTIVE: To compare neurally adjusted ventilatory assist ventilation with pressure-support ventilation. DESIGN: Prospective, crossover comparison study. SETTING: Tertiary care pediatric and neonatal intensive care unit. PATIENTS: Sixteen ventilated infants and children: mean age = 9.7 months (range = 2 days-4 yrs) and mean weight = 6.2 kg (range = 2.4-13.7 kg). INTERVENTIONS: A modified nasogastric tube was inserted and correct positioning was confirmed. Patients were ventilated in pressure-support mode with a pneumatic trigger for a 30-min period and then in neurally adjusted ventilatory assist mode for up to 4 hrs. MEASUREMENTS AND MAIN RESULTS: Data collected for comparison included activating trigger (neural vs. pneumatic), peak and mean airway pressures, expired minute and tidal volumes, heart rate, respiratory rate, pulse oximetry, end-tidal CO2 and arterial blood gases. Synchrony was improved in neurally adjusted ventilatory assist mode with 65% (±21%) of breaths triggered neurally vs. 35% pneumatically (p < .001) and 85% (±8%) of breaths cycled-off neurally vs. 15% pneumatically (p = .0001). The peak airway pressure in neurally adjusted ventilatory assist mode was significantly lower than in pressure-support mode with a 28% decrease in pressure after 30 mins (p = .003) and 32% decrease after 3 hrs (p < .001). Mean airway pressure was reduced by 11% at 30 mins (p = .13) and 9% at 3 hrs (p = .31) in neurally adjusted ventilatory assist mode although this did not reach statistical significance. Patient hemodynamics and gas exchange remained stable for the study period. No adverse patient events or device effects were noted. CONCLUSIONS: In a neonatal and pediatric intensive care unit population, ventilation in neurally adjusted ventilatory assist mode was associated with improved patient-ventilator synchrony and lower peak airway pressure when compared with pressure-support ventilation with a pneumatic trigger. Ventilating patients in this new mode

  13. An Algebraic Method of Synchronous Pulsewidth Modulation for Converters for Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Oleschuk, Valentin; Blaabjerg, Frede

    2002-01-01

    This paper describes the basic peculiarities of a new method of feedforward synchronous pulsewidth modulation (PWM) of the output voltage of converters, based on a one-stage closed-form strategy of PWM with pure algebraic control dependencies. It is applied to voltage source inverters with a continuous scheme of conventional voltage space vector modulation and with two basic variants of symmetrical discontinuous PWM. Simulations give the behaviour of the proposed method and show the advantage of algebraic synchronous PWM compared with the typical asynchronous PWM, for low indices of the frequency...

  14. Adjustment of a rapid method for quantification of Fusarium spp. spore suspensions in plant pathology.

    Science.gov (United States)

    Caligiore-Gei, Pablo F; Valdez, Jorge G

    2015-01-01

    The use of a Neubauer chamber is a broadly employed method when cell suspensions need to be quantified. However, this technique may take a long time and needs trained personnel. Spectrophotometry has proved to be a rapid, simple and accurate method to estimate the concentration of spore suspensions of isolates of the genus Fusarium. In this work we present a linear formula relating absorbance measurements at 530 nm to the number of microconidia/ml in a suspension. Copyright © 2014 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.
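
    As a rough illustration of the kind of calibration the abstract describes, the sketch below fits a linear relation between absorbance at 530 nm and spore concentration; the data points and fitted coefficients are hypothetical, not the published formula.

      import numpy as np

      # paired calibration data: absorbance vs. haemocytometer counts (made up)
      absorbance = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
      microconidia_per_ml = np.array([1.2e5, 2.3e5, 4.8e5, 9.9e5, 2.0e6])

      slope, intercept = np.polyfit(absorbance, microconidia_per_ml, 1)

      def concentration(a530):
          """Estimate microconidia/ml from an absorbance reading at 530 nm."""
          return slope * a530 + intercept

      print(f"{concentration(0.35):.3g} microconidia/ml")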

  15. A comparison of non-invasive versus invasive methods of ...

    African Journals Online (AJOL)

    Puneet Khanna

    for Hb estimation from the laboratory [total haemoglobin mass (tHb)] and arterial blood gas (ABG) machine (aHb), using ... making decisions for blood transfusions based on these results.

  16. Removal of phenol from water : a comparison of energization methods

    NARCIS (Netherlands)

    Grabowski, L.R.; Veldhuizen, van E.M.; Rutgers, W.R.

    2005-01-01

    Direct electrical energization methods for removal of persistent substances from water are under investigation in the framework of the ytriD-project. The emphasis of the first stage of the project is the energy efficiency. A comparison is made between a batch reactor with a thin layer of water and

  17. Comparison of radioimmunology and serology methods for LH determination

    International Nuclear Information System (INIS)

    Szymanski, W.; Jakowicki, J.

    1976-01-01

    A comparison is presented of LH determinations by immunoassay and radioimmunoassay using the ¹²⁵I-labelled HCG double antibody system. The results obtained by both methods are well comparable and in normal and elevated LH levels the serological determinations may be used for estimating concentration levels found by RIA. (L.O.)

  18. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

    Full Text Available IGARSS 2015, Milan, Italy, 26-31 July 2015. B.P. Salmon, W. Kleynhans, C.P. Schwegmann and J.C. Olivier; School of Engineering and ICT, University of Tasmania, Australia; ...

  19. A comparison of three time-domain anomaly detection methods

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the {chi}{sup 2} method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance - the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).

  20. A comparison of three time-domain anomaly detection methods

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance - the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)
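
    Of the three methods, the sequential probability ratio test is the easiest to state compactly. The sketch below is a minimal Python illustration (not the paper's implementation) of Wald's SPRT applied to zero-mean residuals, testing a normal standard deviation sigma0 against an anomalous sigma1; all parameter values are assumptions.

      import math, random

      def sprt(residuals, sigma0=1.0, sigma1=1.5, alpha=1e-3, beta=0.1):
          upper = math.log((1 - beta) / alpha)    # cross: decide anomaly (H1)
          lower = math.log(beta / (1 - alpha))    # cross: decide normal (H0)
          llr = 0.0
          for i, x in enumerate(residuals):
              llr += math.log(sigma0 / sigma1) \
                     + 0.5 * x * x * (1 / sigma0**2 - 1 / sigma1**2)
              if llr >= upper:
                  return "anomaly", i
              if llr <= lower:
                  llr = 0.0                       # accept H0, keep monitoring
          return "no decision", len(residuals)

      random.seed(1)
      data = [random.gauss(0, 1.0) for _ in range(200)] \
             + [random.gauss(0, 1.6) for _ in range(200)]
      print(sprt(data))                           # flags the variance increase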

  1. MANGO – Modal Analysis for Grid Operation: A Method for Damping Improvement through Operating Point Adjustment

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.

    2010-10-18

    Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
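
    As a small illustration of the quantity MANGO is designed to improve, the sketch below computes the damping ratio of an identified oscillation mode from its eigenvalue sigma ± j*omega; the mode values are illustrative, and the mode-identification step itself (ModeMeter) is not shown.

      import math

      def damping_ratio(sigma, omega):
          """Damping ratio of a mode with eigenvalue sigma +/- j*omega."""
          return -sigma / math.sqrt(sigma**2 + omega**2)

      # e.g. a 0.25 Hz inter-area mode with decay rate sigma = -0.05 1/s
      zeta = damping_ratio(-0.05, 2 * math.pi * 0.25)
      print(f"damping ratio: {100 * zeta:.1f} %")   # ~3.2 %: low damping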

  2. FEM-based Printhead Intelligent Adjusting Method for Printing Conduct Material

    Directory of Open Access Journals (Sweden)

    Liang Xiaodan

    2017-01-01

    Full Text Available Ink-jet printing of circuit boards has advantages such as non-contact manufacture, high manufacturing accuracy and low pollution. In order to improve printing precision, finite element technology is adopted to model the piezoelectric print heads, and a new bacteria foraging algorithm with a lifecycle strategy is proposed to optimize the parameters of the driving waveforms so as to obtain the desired droplet characteristics. Numerical simulation results show that the algorithm performs well. Additionally, the droplet jetting simulation results and measured results confirm that the method precisely achieves the desired droplet characteristics.

  3. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    Science.gov (United States)

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
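
    A minimal sketch of the SCMM idea described above, assuming long-format data with columns id, time, exposure x, outcome y and one time-varying confounder (all names hypothetical): regress the current outcome on the current exposure while conditioning on prior exposure, prior outcome and covariates, fitting with GEE.

      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.genmod.cov_struct import Independence

      df = pd.read_csv("panel.csv")                 # hypothetical data file
      df = df.sort_values(["id", "time"])
      df["x_lag"] = df.groupby("id")["x"].shift(1).fillna(0)
      df["y_lag"] = df.groupby("id")["y"].shift(1).fillna(0)

      # sequential conditional mean model fitted via GEE
      model = sm.GEE.from_formula(
          "y ~ x + x_lag + y_lag + confounder",
          groups="id", data=df, cov_struct=Independence())
      print(model.fit().summary())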

  4. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data from the Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg), with three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction error produced, using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method produces better forecasts for the Exchange Rates, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
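
    The comparison described can be reproduced in outline with standard tooling. The sketch below (a toy series, not the study's data) fits an ARIMA model and simple exponential smoothing to the same series and scores hold-out forecasts with MSE, MAPE and MAD.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.tsa.holtwinters import SimpleExpSmoothing

      y = np.cumsum(np.random.default_rng(0).normal(0.2, 1.0, 120))
      train, test = y[:100], y[100:]

      f_arima = ARIMA(train, order=(1, 1, 1)).fit().forecast(len(test))
      f_ses = SimpleExpSmoothing(train).fit().forecast(len(test))

      def scores(actual, forecast):
          err = actual - forecast
          return {"MSE": np.mean(err**2),
                  "MAPE": 100 * np.mean(np.abs(err / actual)),
                  "MAD": np.mean(np.abs(err))}

      print("ARIMA:", scores(test, f_arima))
      print("SES:  ", scores(test, f_ses))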

  5. ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON

    Directory of Open Access Journals (Sweden)

    Liliana liliana

    2007-01-01

    Full Text Available In computer graphics applications, ray tracing is a method often used to produce realistic images. Ray tracing models not only local illumination but also global illumination. Local illumination accounts only for ambient, diffuse and specular effects - the effects of the lamp(s) - while global illumination also accounts for mirroring and transparency, i.e. effects from other objects. The objects usually modeled are primitive objects and mesh objects. The advantage of mesh modeling is its varied, interesting and realistic shapes. A mesh contains many primitive objects such as triangles or (rarely) squares. A problem in mesh object modeling is long rendering time, because every ray must be checked against many triangles of the mesh; counting rays spawned by other objects as well, the number of rays to trace increases, which lengthens rendering time. To solve this problem, this research develops new methods to speed up the rendering of mesh objects: angle comparison and distance comparison. These methods reduce the number of ray checks: rays predicted not to intersect the mesh are not tested for intersection with it. With angle comparison, using a small comparison angle makes rendering fast; this method has the disadvantage that if the triangles are large, some triangles will be corrupted, while a larger comparison angle avoids mesh corruption but makes rendering take longer than without comparison. With distance comparison, rendering time is less than without comparison, and no triangle is corrupted.
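
    A minimal sketch of the angle-comparison idea (a plausible reading of the abstract, not the paper's exact algorithm): a ray is tested against the mesh's triangles only if the angle between the ray and the direction to the mesh's bounding sphere is within the sphere's angular radius plus a tolerance.

      import numpy as np

      def may_hit(origin, direction, center, radius, tol_deg=0.0):
          """Cheap angle test against a bounding sphere before triangle tests."""
          to_center = center - origin
          dist = np.linalg.norm(to_center)
          if dist <= radius:
              return True                           # ray starts inside sphere
          cos_a = np.dot(direction, to_center) / (np.linalg.norm(direction) * dist)
          angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
          angular_radius = np.degrees(np.arcsin(radius / dist))
          return angle <= angular_radius + tol_deg

      # only rays passing this test are intersected with the mesh triangles
      print(may_hit(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                    np.array([0.2, 0.0, 5.0]), 0.5))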

  6. Study of spectral response of a neutron filter. Design of a method to adjust spectra

    International Nuclear Information System (INIS)

    Colomb-Dolci, F.

    1999-02-01

    The first part of this thesis describes an experimental method intended to determine a neutron spectrum in the epithermal range [1 eV - 10 keV]. Based on measurements of reaction rates provided by activation foils, it gives the flux level in each energy range corresponding to each probe. This method can be used in any reactor location or in a neutron beam. It can determine spectra over eight energy groups, five of which are in the epithermal range. The second part of this thesis presents the design study of an epithermal neutron beam, in the frame of Neutron Capture Therapy. A beam tube was specially built to test filters made up of different materials. Its geometry was designed to favour epithermal neutron crossing and to cut thermal and fast neutrons. A code scheme was validated to simulate the device response with a Monte Carlo code. Measurements were made at the ISIS reactor and experimental spectra were compared to calculated ones. This validated code scheme was used to simulate different materials usable as shields in the tube. A study of these shields is presented at the end of this thesis. (author)

  7. A GRAMMATICAL ADJUSTMENT ANALYSIS OF STATISTICAL MACHINE TRANSLATION METHOD USED BY GOOGLE TRANSLATE COMPARED TO HUMAN TRANSLATION IN TRANSLATING ENGLISH TEXT TO INDONESIAN

    Directory of Open Access Journals (Sweden)

    Eko Pujianto

    2017-04-01

    Full Text Available Google Translate is a program which provides a fast, free and effortless translating service. This service uses a unique method to translate, called "Statistical Machine Translation" (SMT), the newest method in automatic translation. Machine translation (MT) is an area drawing on many different subjects of study and techniques from linguistics, computer science, artificial intelligence (AI), translation theory, and statistics. SMT works by using statistical methods and mathematics to process the training data. The training data is corpus-based: a compilation of sentences and words of the languages (SL and TL) from translations done by humans. By using this method, Google lets its machine discover the rules for itself, by analyzing millions of documents that have already been translated by human translators and then generating results based on the corpus/training data. However, questions arise when the results of the automatic translation prove to be unreliable to some extent. This paper questions the dependability of Google Translate in comparison with the grammatical adjustment that naturally characterizes human translators' specific advantage. The attempt is manifested through analysis of the TL of some texts translated by the SMT. It is expected that by using samples of TL produced by SMT we can learn the potential flaws of the translation. If such flaws exist, the partial or more substantial undependability of SMT may open more windows to the debate over whether this service suffices the users' needs.

  8. Performance evaluation of inpatient service in Beijing: a horizontal comparison with risk adjustment based on Diagnosis Related Groups.

    Science.gov (United States)

    Jian, Weiyan; Huang, Yinmin; Hu, Mu; Zhang, Xiumei

    2009-04-30

    The medical performance evaluation, which provides a basis for rational decision-making, is an important part of medical service research. Current progress with health services reform in China is far from satisfactory, without sufficient regulation. To achieve better progress, an effective tool for evaluating medical performance needs to be established. In view of this, this study attempted to develop such a tool appropriate for the Chinese context. Data were collected from the front pages of medical records (FPMR) of all large general public hospitals (21 hospitals) in the third and fourth quarters of 2007. Locally developed Diagnosis Related Groups (DRGs) were introduced as a tool for risk adjustment, and performance evaluation indicators were established: Charge Efficiency Index (CEI), Time Efficiency Index (TEI) and inpatient mortality of low-risk group cases (IMLRG), to reflect work efficiency and medical service quality respectively. Using these indicators, the performance of inpatient services was horizontally compared among hospitals. The Case-mix Index (CMI) was used to adjust the efficiency indices and thus produce adjusted CEI (aCEI) and adjusted TEI (aTEI). Poisson distribution analysis was used to test the statistical significance of the IMLRG differences between different hospitals. Using the aCEI, aTEI and IMLRG scores for the 21 hospitals, Hospitals A and C had relatively good overall performance because their medical charges were lower, LOS shorter and IMLRG smaller. The performance of Hospitals P and Q was the worst due to their relatively high charge level, long LOS and high IMLRG. Various performance problems also existed in the other hospitals. It is possible to develop an accurate and easy to run performance evaluation system using Case-Mix as the tool for risk adjustment, choosing indicators close to consumers and managers, and utilizing routine report forms as the basic information source. To keep such a system running effectively, it is necessary to

  9. Improved methods for the mathematically controlled comparison of biochemical systems

    Directory of Open Access Journals (Sweden)

    Schwacke John H

    2004-06-01

    Full Text Available The method of mathematically controlled comparison provides a structured approach for the comparison of alternative biochemical pathways with respect to selected functional effectiveness measures. Under this approach, alternative implementations of a biochemical pathway are modeled mathematically, forced to be equivalent through the application of selected constraints, and compared with respect to selected functional effectiveness measures. While the method has been applied successfully in a variety of studies, we offer recommendations for improvements to the method that (1) relax requirements for definition of constraints sufficient to remove all degrees of freedom in forming the equivalent alternative, (2) facilitate generalization of the results thus avoiding the need to condition those findings on the selected constraints, and (3) provide additional insights into the effect of selected constraints on the functional effectiveness measures. We present improvements to the method and related statistical models, apply the method to a previously conducted comparison of network regulation in the immune system, and compare our results to those previously reported.

  10. Method and apparatus for rapid adjustment of process gas inventory in gaseous diffusion cascades

    International Nuclear Information System (INIS)

    1980-01-01

    A method is specified for the operation of a gaseous diffusion cascade wherein electrically driven compressors circulate a process gas through a plurality of serially connected gaseous diffusion stages to establish first and second countercurrently flowing cascade streams of process gas, one of the streams being at a relatively low pressure and enriched in a component of the process gas and the other being at a higher pressure and depleted in the same, and wherein automatic control systems maintain the stage process gas pressures by positioning process gas flow control valve openings at values which are functions of the difference between reference-signal inputs to the systems, and signal inputs proportional to the process gas pressures in the gaseous diffusion stages associated with the systems, the cascade process gas inventory being altered, while the cascade is operating, by simultaneously directing into separate process-gas freezing zones a plurality of substreams derived from one of the first and second streams at different points along the lengths thereof to solidify approximately equal weights of process gas in the zone while reducing the reference-signal inputs to maintain the positions of the control valves substantially unchanged despite the removal of process gas inventory via the substreams. (author)

  11. Adjusting survival time estimates to account for treatment switching in randomized controlled trials--an economic evaluation context: methods, limitations, and recommendations.

    Science.gov (United States)

    Latimer, Nicholas R; Abrams, Keith R; Lambert, Paul C; Crowther, Michael J; Wailoo, Allan J; Morden, James P; Akehurst, Ron L; Campbell, Michael J

    2014-04-01

    Treatment switching commonly occurs in clinical trials of novel interventions in the advanced or metastatic cancer setting. However, methods to adjust for switching have been used inconsistently and potentially inappropriately in health technology assessments (HTAs). We present recommendations on the use of methods to adjust survival estimates in the presence of treatment switching in the context of economic evaluations. We provide background on the treatment switching issue and summarize methods used to adjust for it in HTAs. We discuss the assumptions and limitations associated with adjustment methods and draw on results of a simulation study to make recommendations on their use. We demonstrate that methods used to adjust for treatment switching have important limitations and often produce bias in realistic scenarios. We present an analysis framework that aims to increase the probability that suitable adjustment methods can be identified on a case-by-case basis. We recommend that the characteristics of clinical trials, and the treatment switching mechanism observed within them, should be considered alongside the key assumptions of the adjustment methods. Key assumptions include the "no unmeasured confounders" assumption associated with the inverse probability of censoring weights (IPCW) method and the "common treatment effect" assumption associated with the rank preserving structural failure time model (RPSFTM). The limitations associated with switching adjustment methods such as the RPSFTM and IPCW mean that they are appropriate in different scenarios. In some scenarios, both methods may be prone to bias; "2-stage" methods should be considered, and intention-to-treat analyses may sometimes produce the least bias. The data requirements of adjustment methods also have important implications for clinical trialists.
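
    As one concrete illustration of the assumptions involved, the sketch below outlines the IPCW step on hypothetical person-period data (all column names assumed): patients are censored at switch, and the remainder are re-weighted by the cumulative inverse probability of remaining unswitched, estimated from measured covariates - the "no unmeasured confounders" assumption in code form.

      import pandas as pd
      import statsmodels.api as sm

      df = pd.read_csv("trial_long.csv")     # hypothetical person-period data
      # probability of NOT switching in each interval, given covariates
      X = sm.add_constant(df[["progression", "performance_status"]])
      fit = sm.Logit(1 - df["switched"], X).fit(disp=0)
      df["p_stay"] = fit.predict(X)

      # unstabilized weights: cumulative product of 1/p_stay within patient
      df["w"] = (1.0 / df["p_stay"]).groupby(df["id"]).cumprod()

      # censor at switch; fit a weighted survival model using df["w"]
      uncensored = df[df["switched"] == 0]
      print(uncensored[["id", "w"]].head())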

  12. Soil biological quality of grassland fertilized with adjusted cattle manure slurries in comparison with organic and inorganic fertilizers

    NARCIS (Netherlands)

    Eekeren, van N.J.M.; Boer, de Herman; Bloem, J.; Schouten, T.; Rutgers, M.; Goede, de R.G.M.; Brussaard, L.

    2009-01-01

    We studied the effect of five fertilizers (including two adjusted manure slurries) and an untreated control on soil biota and explored the effect on the ecosystem services they provided. Our results suggest that the available N (NO₃⁻ and NH₄⁺) in the soil plays a central role in the effect of

  13. Comparison of vibrational conductivity and radiative energy transfer methods

    Science.gov (United States)

    Le Bot, A.

    2005-05-01

    This paper is concerned with the comparison of two methods well suited for the prediction of the wideband response of built-up structures subjected to high-frequency vibrational excitation. The first method is sometimes called the vibrational conductivity method and the second one is rather known as the radiosity method in the field of acoustics, or the radiative energy transfer method. Both are based on quite similar physical assumptions i.e. uncorrelated sources, mean response and high-frequency excitation. Both are based on analogies with some equations encountered in the field of heat transfer. However these models do not lead to similar results. This paper compares the two methods. Some numerical simulations on a pair of plates joined along one edge are provided to illustrate the discussion.

  14. Qualitative Analysis of Chang'e-1 γ-ray Spectrometer Spectra Using Noise Adjusted Singular Value Decomposition Method

    International Nuclear Information System (INIS)

    Yang Jia; Ge Liangquan; Xiong Shengqing

    2010-01-01

    Given the spectral shape features of Chang'e-1 γ-ray spectrometer (CE1-GRS) data, it is difficult to determine elemental compositions on the lunar surface. Aiming at this problem, this paper proposes using the noise adjusted singular value decomposition (NASVD) method to extract orthogonal spectral components from CE1-GRS data. The peak signals in the spectra of the lower-order layers corresponding to the observed spectrum of each lunar region are then analyzed. Elemental compositions of each lunar region can be determined based upon whether the energy corresponding to each peak signal equals the energy of the characteristic gamma-ray line emissions of specific elements. The result shows that a number of elements such as U, Th, K, Fe, Ti, Si, O, Al, Mg, Ca and Na are qualitatively determined by this method. (authors)
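
    A hedged numpy sketch of the NASVD idea as commonly described for gamma-ray spectrometry (input shapes and the number of retained components are assumptions): rescale the spectra toward uniform Poisson noise, take the SVD, and keep the low-order components for inspection or reconstruction.

      import numpy as np

      def nasvd(spectra, k):
          """spectra: (n_records, n_channels) counts; keep k components."""
          mean_spec = spectra.mean(axis=0)
          scale = np.sqrt(np.maximum(mean_spec, 1e-12))  # Poisson noise level
          scaled = spectra / scale                       # noise-adjust channels
          u, s, vt = np.linalg.svd(scaled, full_matrices=False)
          components = vt[:k] * scale                    # back to count space
          smoothed = (u[:, :k] * s[:k]) @ vt[:k] * scale # rank-k reconstruction
          return components, smoothed

      rng = np.random.default_rng(0)
      counts = rng.poisson(50.0, size=(100, 256)).astype(float)
      comps, smooth = nasvd(counts, k=4)
      print(comps.shape, smooth.shape)                   # (4, 256) (100, 256)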

  15. Comparison of several methods of sires evaluation for total milk ...

    African Journals Online (AJOL)

    2015-01-24

    Comparison of several methods of sires evaluation for total milk yield in a herd of Holstein cows in Yemen. F.R. Al-Samarai, Y.K. Abdulrahman, F.A. Mohammed, F.H. Al-Zaidi and N.N. Al-Anbari. Department of Veterinary Public Health, College of Veterinary Medicine, University of Baghdad, Iraq.

  16. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of integrated information sources that can gather data, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. The available methods for sub-region selection and conversion are also proposed.

  17. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Full Text Available An experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the stability requirements of the installation. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions recorded as the recording matrix is moved along the optical axis. The obtained data are used consecutively for wavefront reconstruction in an iterative procedure. In the course of this procedure, numerical propagation of the wavefront between the planes is performed: the phase information of the wavefront is retained in every plane, while the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, the angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison proved that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is shown that the holographic method is the best among those considered for reconstructing the complex amplitude of an amplitude object.
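
    The angular spectrum propagation step used by the second iterative method can be written compactly. The sketch below (grid size, pixel pitch and wavelength are illustrative) propagates a complex field over a distance z and then applies the amplitude-replacement step of the iteration.

      import numpy as np

      def angular_spectrum(field, wavelength, dx, z):
          """Propagate a square complex field by distance z."""
          n = field.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          FX, FY = np.meshgrid(fx, fx)
          arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
          kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
          H = np.exp(1j * kz * z)
          H[arg < 0] = 0.0                    # drop evanescent components
          return np.fft.ifft2(np.fft.fft2(field) * H)

      field = np.ones((256, 256), dtype=complex)
      prop = angular_spectrum(field, 633e-9, 5e-6, 1e-3)
      measured_amplitude = np.abs(prop)       # stand-in for recorded intensity
      # iteration step: keep the calculated phase, impose the measured amplitude
      field_next = measured_amplitude * np.exp(1j * np.angle(prop))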

  18. Application of adjusted subpixel method (ASM) in HRCT measurements of the bronchi in bronchial asthma patients and healthy individuals

    International Nuclear Information System (INIS)

    Mincewicz, Grzegorz; Rumiński, Jacek; Krzykowski, Grzegorz

    2012-01-01

    Background: Recently, we described a model system which included corrections of high-resolution computed tomography (HRCT) bronchial measurements based on the adjusted subpixel method (ASM). Objective: To verify the clinical application of ASM by comparing bronchial measurements obtained by means of the traditional eye-driven method, subpixel method alone and ASM in a group comprised of bronchial asthma patients and healthy individuals. Methods: The study included 30 bronchial asthma patients and the control group comprised of 20 volunteers with no symptoms of asthma. The lowest internal and external diameters of the bronchial cross-sections (ID and ED) and their derivative parameters were determined in HRCT scans using: (1) traditional eye-driven method, (2) subpixel technique, and (3) ASM. Results: In the case of the eye-driven method, lower ID values along with lower bronchial lumen area and its percentage ratio to total bronchial area were basic parameters that differed between asthma patients and healthy controls. In the case of the subpixel method and ASM, both groups were not significantly different in terms of ID. Significant differences were observed in values of ED and total bronchial area with both parameters being significantly higher in asthma patients. Compared to ASM, the eye-driven method overstated the values of ID and ED by about 30% and 10% respectively, while understating bronchial wall thickness by about 18%. Conclusions: Results obtained in this study suggest that the traditional eye-driven method of HRCT-based measurement of bronchial tree components probably overstates the degree of bronchial patency in asthma patients.

  19. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, one often adopts a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach with respect to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional
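
    A minimal sketch of the preferred functional model (GRS80 constants; station coordinates and the observed vector are made up): the modelled GNSS vector is expressed directly as a function of the geodetic coordinates of its endpoints, and the observation equation is the difference between the observed and modelled components.

      import numpy as np

      A = 6378137.0                        # GRS80 semi-major axis [m]
      F = 1 / 298.257222101                # GRS80 flattening
      E2 = F * (2 - F)                     # first eccentricity squared

      def geodetic_to_ecef(lat, lon, h):
          """lat, lon in radians, h in metres -> ECEF coordinates."""
          n = A / np.sqrt(1 - E2 * np.sin(lat)**2)
          return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                           (n + h) * np.cos(lat) * np.sin(lon),
                           (n * (1 - E2) + h) * np.sin(lat)])

      def modelled_vector(p1, p2):
          return geodetic_to_ecef(*p2) - geodetic_to_ecef(*p1)

      obs = np.array([-11545.3, 20836.7, 14840.9])   # hypothetical dX, dY, dZ
      p1 = (np.radians(50.05), np.radians(19.95), 220.0)
      p2 = (np.radians(50.30), np.radians(20.10), 240.0)
      print(obs - modelled_vector(p1, p2))           # residuals to linearize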

  20. Assessment and comparison of methods for solar ultraviolet radiation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, K

    1995-06-01

    In the study, the different methods to measure the solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison of the different methods is based on a literature review and assessments of optical characteristics of the spectroradiometer Optronic 742 of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements, to methods for radiometric characterization of UV radiometers together with methods for error reduction are presented. Reviews on experiences from world-wide UV monitoring efforts and instrumentation as well as on the results from international UV radiometer intercomparisons are also presented. (62 refs.).

  1. Assessment and comparison of methods for solar ultraviolet radiation measurements

    International Nuclear Information System (INIS)

    Leszczynski, K.

    1995-06-01

    In the study, the different methods to measure the solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison of the different methods is based on a literature review and assessments of optical characteristics of the spectroradiometer Optronic 742 of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements, to methods for radiometric characterization of UV radiometers together with methods for error reduction are presented. Reviews on experiences from world-wide UV monitoring efforts and instrumentation as well as on the results from international UV radiometer intercomparisons are also presented. (62 refs.)

  2. Diagnostic specificity of poor premorbid adjustment: comparison of schizophrenia, schizoaffective disorder, and mood disorder with psychotic features.

    Science.gov (United States)

    Tarbox, Sarah I; Brown, Leslie H; Haas, Gretchen L

    2012-10-01

    Individuals with schizophrenia have significant deficits in premorbid social and academic adjustment compared to individuals with non-psychotic diagnoses. However, it is unclear how severity and developmental trajectory of premorbid maladjustment compare across psychotic disorders. This study examined the association between premorbid functioning (in childhood, early adolescence, and late adolescence) and psychotic disorder diagnosis in a first-episode sample of 105 individuals: schizophrenia (n=68), schizoaffective disorder (n=22), and mood disorder with psychotic features (n=15). Social and academic maladjustment was assessed using the Cannon-Spoor Premorbid Adjustment Scale. Worse social functioning in late adolescence was associated with higher odds of schizophrenia compared to odds of either schizoaffective disorder or mood disorder with psychotic features, independently of child and early adolescent maladjustment. Greater social dysfunction in childhood was associated with higher odds of schizoaffective disorder compared to odds of schizophrenia. Premorbid decline in academic adjustment was observed for all groups, but did not predict diagnosis at any stage of development. Results suggest that social functioning is disrupted in the premorbid phase of both schizophrenia and schizoaffective disorder, but remains fairly stable in mood disorders with psychotic features. Disparities in the onset and time course of social dysfunction suggest important developmental differences between schizophrenia and schizoaffective disorder. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. An interlaboratory comparison of methods for measuring rock matrix porosity

    International Nuclear Information System (INIS)

    Rasilainen, K.; Hellmuth, K.H.; Kivekaes, L.; Ruskeeniemi, T.; Melamed, A.; Siitari-Kauppi, M.

    1996-09-01

    An interlaboratory comparison study was conducted for the available Finnish methods of rock matrix porosity measurements. The aim was first to compare different experimental methods for future applications, and second to obtain quality assured data for the needs of matrix diffusion modelling. Three different versions of water immersion techniques, a tracer elution method, a helium gas through-diffusion method, and a C-14-PMMA method were tested. All methods selected for this study were established experimental tools in the respective laboratories, and they had already been individually tested. Rock samples for the study were obtained from a homogeneous granitic drill core section from the natural analogue site at Palmottu. The drill core section was cut into slabs that were expected to be practically identical. The subsamples were then circulated between the different laboratories using a round robin approach. The circulation was possible because all methods were non-destructive, except the C-14-PMMA method, which was always the last method to be applied. The possible effect of drying temperature on the measured porosity was also preliminarily tested. These measurements were done in the order of increasing drying temperature. Based on the study, it can be concluded that all methods are comparable in their accuracy. The selection of methods for future applications can therefore be based on practical considerations. Drying temperature seemed to have very little effect on the measured porosity, but a more detailed study is needed for definite conclusions. (author) (4 refs.)

  4. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Science.gov (United States)

    Casoli, Pierre; Grégoire, Gilles; Rousseau, Guillaume; Jacquet, Xavier; Authier, Nicolas

    2016-02-01

    CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening or to study the effect of neutrons on various materials. Therefore CALIBAN's irradiation characteristics, and especially its central cavity neutron spectrum, have to be very accurately evaluated. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied for a few years in the laboratory. First, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article will discuss and compare the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  5. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Directory of Open Access Journals (Sweden)

    Casoli Pierre

    2016-01-01

    Full Text Available CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening or to study the effect of neutrons on various materials. Therefore CALIBAN's irradiation characteristics, and especially its central cavity neutron spectrum, have to be very accurately evaluated. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied for a few years in the laboratory. First, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article will discuss and compare the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  6. Application of adjusted subpixel method (ASM) in HRCT measurements of the bronchi in bronchial asthma patients and healthy individuals.

    Science.gov (United States)

    Mincewicz, Grzegorz; Rumiński, Jacek; Krzykowski, Grzegorz

    2012-02-01

    Recently, we described a model system which included corrections of high-resolution computed tomography (HRCT) bronchial measurements based on the adjusted subpixel method (ASM). To verify the clinical application of ASM by comparing bronchial measurements obtained by means of the traditional eye-driven method, subpixel method alone and ASM in a group comprised of bronchial asthma patients and healthy individuals. The study included 30 bronchial asthma patients and the control group comprised of 20 volunteers with no symptoms of asthma. The lowest internal and external diameters of the bronchial cross-sections (ID and ED) and their derivative parameters were determined in HRCT scans using: (1) traditional eye-driven method, (2) subpixel technique, and (3) ASM. In the case of the eye-driven method, lower ID values along with lower bronchial lumen area and its percentage ratio to total bronchial area were basic parameters that differed between asthma patients and healthy controls. In the case of the subpixel method and ASM, both groups were not significantly different in terms of ID. Significant differences were observed in values of ED and total bronchial area with both parameters being significantly higher in asthma patients. Compared to ASM, the eye-driven method overstated the values of ID and ED by about 30% and 10% respectively, while understating bronchial wall thickness by about 18%. Results obtained in this study suggest that the traditional eye-driven method of HRCT-based measurement of bronchial tree components probably overstates the degree of bronchial patency in asthma patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. Comparison of breeding methods for forage yield in red clover

    Directory of Open Access Journals (Sweden)

    Libor Jalůvka

    2009-01-01

    Full Text Available Three methods of red clover (Trifolium pratense L.) breeding for forage yield in two harvest years at locations in Bredelokke (Denmark), Hladké Životice (Czech Republic) and Les Alleuds (France) were compared. Three types of 46 candivars, developed by (A) recurrent selection in subsequent generations (37 candivars, divided into early and late groups), (B) polycross progenies (4 candivars) and (C) geno-phenotypic selection (5 candivars), were compared. The trials were sown in 2005 and cut three times in 2006 and 2007; their evaluation is based primarily on total yield of dry matter. The candivars developed by polycross and geno-phenotypic selection gave significantly higher yields than candivars from the recurrent selection. However, the candivars developed by methods B and C did not differ significantly. The candivars developed by these progressive methods were suitable for the higher-yielding and drier environment in Hladké Životice (which had the highest yield level even though averaged annual precipitation was lower by 73 and 113 mm in comparison to the other locations, respectively); here the average yield was higher by 19 and 13% for methods B and C in comparison to method A. A highly significant interaction of the candivars with locations was found. It can be concluded that varieties specifically aimed at different locations should be bred by methods B and C; the parental entries should also be selected there.

  8. Steroid hormones in environmental matrices: extraction method comparison.

    Science.gov (United States)

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability of the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choice in environmental sample analysis.

  9. Comparison of urine analysis using manual and sedimentation methods.

    Science.gov (United States)

    Kurup, R; Leich, M

    2012-06-01

    Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine. However, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To encourage a high level of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study is to determine whether the centrifuged method is more clinically significant than the uncentrifuged method. In this study, a comparison between the results obtained from the centrifuged and uncentrifuged methods was performed. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results obtained from both methods were recorded in a log book. These results were then entered into a database created in Microsoft Excel and analysed for differences and similarities using this application. Analysis was further done in SPSS software to compare the results using Pearson's correlation. When compared using Pearson's correlation coefficient analysis, both methods showed a good correlation between urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods. However, the uncentrifuged method provides a rapid turnaround time.

  10. Optimal Inconsistency Repairing of Pairwise Comparison Matrices Using Integrated Linear Programming and Eigenvector Methods

    Directory of Open Access Journals (Sweden)

    Haiqing Zhang

    2014-01-01

    Full Text Available Satisfying the consistency requirements of a pairwise comparison matrix (PCM) is a critical step in decision making methodologies. An algorithm has been proposed to find a new modified consistent PCM that can replace the original inconsistent PCM in the analytic hierarchy process (AHP) or in fuzzy AHP. This paper defines the modified consistent PCM as a combination of the original inconsistent PCM and an adjustable consistent PCM. The algorithm adopts a segment tree to gradually approach the greatest lower bound of the distance to the original PCM in order to obtain the middle value of the adjustable PCM. It also proposes a theorem to obtain the lower and upper values of the adjustable PCM based on two constraints. The experiments for crisp elements show that the proposed approach can preserve more of the original information than previous works at the same consistency value. The convergence rate of our algorithm is significantly faster than previous works with respect to different parameters. The experiments for fuzzy elements show that our method can obtain suitable modified fuzzy PCMs.
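
    For reference, the consistency measure such repairs target is the standard AHP consistency ratio. The sketch below computes it for a small crisp PCM (standard formulas, not the paper's repair algorithm).

      import numpy as np

      RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}  # random indices

      def consistency_ratio(pcm):
          n = pcm.shape[0]
          lam_max = max(np.linalg.eigvals(pcm).real)  # principal eigenvalue
          ci = (lam_max - n) / (n - 1)                # consistency index
          return ci / RI[n]

      pcm = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
      print(f"CR = {consistency_ratio(pcm):.3f}")     # < 0.1: acceptably consistent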

  11. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    Science.gov (United States)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue air ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilks and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.05). The Wilcoxon signed-rank test of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods.
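
    The test sequence described maps directly onto standard scipy calls. The sketch below runs it on made-up monitor-unit values for the three methods over the same 62 fields (all numbers are placeholders).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      pbc = rng.normal(200, 20, 62)                  # reference doses, 62 fields
      mb = pbc * rng.normal(0.95, 0.01, 62)          # 1D density-corrected
      etar = pbc * rng.normal(0.94, 0.01, 62)        # 3D density-corrected

      print(stats.friedmanchisquare(pbc, mb, etar))  # overall method effect
      print(stats.wilcoxon(pbc, mb))                 # paired comparison
      print(stats.spearmanr(pbc, mb))                # rank correlations
      print(stats.kendalltau(pbc, etar))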

  12. Realistic PIC modelling of laser-plasma interaction: a direct implicit method with adjustable damping and high order weight functions

    International Nuclear Information System (INIS)

    Drouin, M.

    2009-11-01

    This research thesis proposes a new formulation of the relativistic direct implicit method, based on the weak formulation of the wave equation, which is solved by means of a Newton algorithm. The first part of this thesis deals with the properties of explicit particle-in-cell (PIC) methods: properties and limitations of an explicit PIC code, linear analysis of a numerical plasma, the numerical heating phenomenon, the interest of a higher order interpolation function, and presentation of two applications in high density relativistic laser-plasma interaction. The second and main part of this report deals with adapting the direct implicit method to laser-plasma interaction: presentation of the state of the art, formulation of the direct implicit method, and resolution of the wave equation. The third part concerns various numerical and physical validations of the ELIXIRS code: the case of laser wave propagation in vacuum, demonstration of the adjustable damping which is a characteristic of the proposed algorithm, influence of space-time discretization on energy conservation, expansion of a thermal plasma into vacuum, two cases of plasma-beam instability in the relativistic regime, and a case of overcritical laser-plasma interaction.

  13. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  14. An algebraic topological method for multimodal brain networks comparison

    Directory of Open Access Journals (Sweden)

    Tiago eSimas

    2015-07-01

    Full Text Available Understanding brain connectivity is one of the most important issues in neuroscience. Nonetheless, connectivity data can reflect either functional relationships of brain activities or anatomical connections between brain areas. Although both representations should be related, this relationship is not straightforward. We have devised a powerful method that allows different operations between networks that share the same set of nodes, by embedding them in a common metric space and enforcing transitivity on the graph topology. Here, we apply this method to construct an aggregated network from a set of functional graphs, each from a different subject. Once this aggregated functional network is constructed, we use our method again to compare it with the structural connectivity and to identify particular brain regions that differ between the two modalities (anatomical and functional). Remarkably, these brain regions include functional areas that form part of the classical resting state networks. We conclude that our method - based on the comparison with the aggregated functional network - reveals some emerging features that could not be observed when the comparison is performed with the classical averaged functional network.

  15. Methacholine challenge test: Comparison of tidal breathing and dosimeter methods in children.

    Science.gov (United States)

    Mazi, Ahlam; Lands, Larry C; Zielinski, David

    2018-02-01

    The Methacholine Challenge Test (MCT) is used to confirm, assess the severity of, and/or rule out asthma. Two MCT methods are described as equivalent by the American Thoracic Society (ATS): the tidal breathing and the dosimeter methods. However, the majority of adult studies suggest that individuals with asthma do not react at the same PC20 with the two methods. Additionally, the nebulizers originally used are no longer available, and studies suggest current nebulizers are not equivalent to them. Our study investigates the difference in positive MCT tests between three methods in a pediatric population. A retrospective chart review was performed of all MCTs done with spirometry at the Montreal Children's Hospital from January 2006 to March 2016. The percentage of positive MCT tests was compared across three methods (tidal breathing, APS dosimeter, and dose-adjusted DA-dosimeter) at different cutoff points up to 8 mg/mL. A total of 747 subjects performed the tidal breathing method, 920 subjects the APS dosimeter method, and 200 subjects the DA-dosimeter method. At a PC20 cutoff ≤4 mg/mL, the percentage of positive MCTs was significantly higher using the tidal breathing method (76.3%) compared to the APS dosimeter (45.1%) and DA-dosimeter (65%) methods (P < 0.0001). The choice of nebulizer and technique significantly impacts the rate of positivity when using MCT to diagnose and assess asthma. The lack of direct comparison of techniques within the same individuals and of clinical assessment should be addressed in future studies to standardize MCT methodology in children. © 2017 Wiley Periodicals, Inc.
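
    The PC20 endpoint compared across these methods is conventionally derived by log-linear interpolation between the last two concentrations administered. A small sketch of that standard formula (the concentrations and FEV1 falls below are invented for illustration):

    ```python
    import math

    def pc20(c1: float, r1: float, c2: float, r2: float) -> float:
        """Log-linear interpolation of the methacholine concentration
        producing a 20% fall in FEV1 (the conventional formula).
        c1, c2: last two concentrations (mg/mL); r1, r2: % falls in FEV1."""
        log_pc20 = (math.log10(c1)
                    + (math.log10(c2) - math.log10(c1)) * (20 - r1) / (r2 - r1))
        return 10 ** log_pc20

    # FEV1 fell 12% at 2 mg/mL and 24% at 4 mg/mL -> PC20 lies in between
    print(f"PC20 = {pc20(2.0, 12, 4.0, 24):.2f} mg/mL")
    ```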

  16. Ibrutinib versus previous standard of care: an adjusted comparison in patients with relapsed/refractory chronic lymphocytic leukaemia.

    Science.gov (United States)

    Hansson, Lotta; Asklid, Anna; Diels, Joris; Eketorp-Sylvan, Sandra; Repits, Johanna; Søltoft, Frans; Jäger, Ulrich; Österborg, Anders

    2017-10-01

    This study explored the relative efficacy of ibrutinib versus previous standard-of-care treatments in relapsed/refractory patients with chronic lymphocytic leukaemia (CLL), using multivariate regression modelling to adjust for baseline prognostic factors. Individual patient data were collected from an observational Stockholm cohort of consecutive patients (n = 144) diagnosed with CLL between 2002 and 2013 who had received at least second-line treatment. Data were compared with results of the RESONATE clinical trial. A multivariate Cox proportional hazards regression model was used which estimated the hazard ratio (HR) of ibrutinib versus previous standard of care. The adjusted HR of ibrutinib versus the previous standard-of-care cohort was 0.15, and survival outcomes with ibrutinib in the RESONATE study were significantly longer than with previous standard-of-care regimens used in second or later lines in routine healthcare. The approach used, which must be interpreted with caution, compares patient-level data from a clinical trial with outcomes observed in daily clinical practice, and may complement results from randomised trials or provide preliminary wider comparative information until phase 3 data exist.

  17. Reverse-total shoulder arthroplasty cost-effectiveness: A quality-adjusted life years comparison with total hip arthroplasty.

    Science.gov (United States)

    Bachman, Daniel; Nyland, John; Krupp, Ryan

    2016-02-18

    To compare reverse-total shoulder arthroplasty (RSA) cost-effectiveness with total hip arthroplasty cost-effectiveness, this study used a stochastic model and decision-making algorithm. Fifteen patients underwent pre-operative, and 3, 6, and 12 month post-operative clinical examinations and Short Form-36 Health Survey completion. Short Form-36 Health Survey subscale scores were converted to EuroQual Group Five Dimension Health Outcome scores and compared with historical data from age-matched patients who had undergone total hip arthroplasty. Quality-adjusted life year (QALY) improvements based on life expectancies were calculated. The cost/QALY was $3,900 for total hip arthroplasty and $11,100 for RSA. After adjusting the model to include only shoulder-specific physical function subscale items, the RSA QALY improved to 2.8 years, and its cost/QALY decreased to $8,100. Based on industry-accepted standards, cost/QALY estimates supported the cost-effectiveness of both RSA and total hip arthroplasty. Although total hip arthroplasty remains the quality-of-life improvement "gold standard" among arthroplasty procedures, the cost/QALY estimates identified in this study support the growing use of RSA to improve patient quality of life.
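
    The cost-utility arithmetic behind such figures is a simple ratio of cost to QALYs gained. A trivial sketch (the procedure cost below is back-computed from the quoted ~$8,100/QALY and 2.8 QALYs, purely for illustration, and is not a figure from the study):

    ```python
    def cost_per_qaly(cost: float, qaly_gain: float) -> float:
        """Cost-effectiveness ratio in dollars per quality-adjusted life year."""
        return cost / qaly_gain

    # Illustrative: a procedure costing ~$22,700 that yields 2.8 QALYs
    # reproduces roughly the $8,100/QALY figure quoted above.
    print(f"${cost_per_qaly(22_700, 2.8):,.0f} per QALY")
    ```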

  18. Convexity Adjustments

    DEFF Research Database (Denmark)

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...

  19. Comparison of n-γ discrimination by zero-crossing and digital charge comparison methods

    International Nuclear Information System (INIS)

    Wolski, D.; Moszynski, M.; Ludziejewski, T.; Johnson, A.; Klamra, W.; Skeppstedt, Oe.

    1995-01-01

    A comparative study of n-γ discrimination by the digital charge comparison and zero-crossing (Z/C) methods was carried out for a BC501A liquid scintillator, 130 mm in diameter and 130 mm high, coupled to a 130 mm diameter XP4512B photomultiplier. The high quality of the tested detector was reflected in a photoelectron yield of 2300±100 phe/MeV and excellent n-γ discrimination properties, with energy discrimination thresholds corresponding to very low neutron (or electron) energies. The superiority of the Z/C method was demonstrated for the n-γ discrimination method alone, as well as for the simultaneous separation by the pulse shape discrimination and time-of-flight methods, down to about 30 keV recoil electron energy. The digital charge comparison method fails over a large dynamic range of energy, and its separation is only weakly improved by the time-of-flight method at low energies. (orig.)
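
    The charge comparison method discriminates neutrons from gammas by the fraction of scintillation charge in the slow tail of the digitized pulse; neutrons in liquid scintillators such as BC501A have a larger slow component. A minimal sketch of that idea (the waveform and integration-window boundaries are invented, detector-specific assumptions):

    ```python
    import numpy as np

    def psd_ratio(pulse: np.ndarray, gate_start: int,
                  tail_start: int, gate_end: int) -> float:
        """Tail-to-total charge ratio; higher values indicate neutrons."""
        total = pulse[gate_start:gate_end].sum()
        tail = pulse[tail_start:gate_end].sum()
        return tail / total

    # Illustrative waveform: fast component plus slow tail (units: samples)
    t = np.arange(300)
    pulse = np.exp(-t / 5.0) + 0.15 * np.exp(-t / 80.0)
    print(f"tail/total = {psd_ratio(pulse, 0, 30, 300):.3f}")
    ```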

  20. A comparison of surveillance methods for small incidence rates

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Woodall, William H.; Reynolds, Marion R.

    2008-05-15

    A number of methods have been proposed to detect an increasing shift in the incidence rate of a rare health event, such as a congenital malformation. Among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the performance of the Sets method and its modifications to the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criterion. We used the steady-state average run length to measure chart performance instead of the average run length, which was used in nearly all previous comparisons involving the Sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state average run length performance than the Sets method and its modifications for the extensive number of cases considered.
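
    A Bernoulli CUSUM accumulates a log-likelihood-ratio increment per trial and signals when the cumulative statistic crosses a decision limit h. A minimal sketch (the in-control rate p0, out-of-control rate p1, and limit h are illustrative assumptions, not the paper's design values):

    ```python
    import math

    def bernoulli_cusum(trials, p0=0.001, p1=0.005, h=4.0):
        """Upper one-sided CUSUM for a shift in a Bernoulli rate from p0 to p1.
        Returns the first trial index at which the chart signals, or None."""
        w1 = math.log(p1 / p0)               # increment for an event
        w0 = math.log((1 - p1) / (1 - p0))   # increment for a non-event
        s = 0.0
        for i, x in enumerate(trials):
            s = max(0.0, s + (w1 if x else w0))
            if s >= h:
                return i
        return None

    # Illustrative: a long in-control run with a cluster of events at the end
    data = [0] * 5000 + [1, 0, 0, 1, 0, 1, 0, 1]
    print(bernoulli_cusum(data))
    ```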

  1. A comparison of ancestral state reconstruction methods for quantitative characters.

    Science.gov (United States)

    Royer-Carenzi, Manuela; Didier, Gilles

    2016-09-07

    Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model) and two second order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices.

  3. Comparison of Video Steganography Methods for Watermark Embedding

    Directory of Open Access Journals (Sweden)

    Griberman David

    2016-05-01

    Full Text Available The paper focuses on the comparison of video steganography methods for the purpose of digital watermarking in the context of copyright protection. Four embedding methods that use Discrete Cosine and Discrete Wavelet Transforms have been researched and compared based on their embedding efficiency and fidelity. A video steganography program has been developed in the Java programming language with all of the researched methods implemented for experiments. The experiments used 3 video containers with different amounts of movement. The impact of the movement has been addressed in the paper as well as the ways of potential improvement of embedding efficiency using adaptive embedding based on the movement amount. Results of the research have been verified using a survey with 17 participants.

  4. A comparison of two instructional methods for drawing Lewis Structures

    Science.gov (United States)

    Terhune, Kari

    Two instructional methods for teaching Lewis structures were compared: the Direct Octet Rule Method (DORM) and the Commonly Accepted Method (CAM). The DORM gives the number of bonds and the number of nonbonding electrons immediately, while the CAM involves moving electron pairs from nonbonding to bonding positions, if necessary. The research question was as follows: will high school chemistry students draw more accurate Lewis structures using the DORM or the CAM? Students in Regular Chemistry 1 (N = 23), Honors Chemistry 1 (N = 51) and Chemistry 2 (N = 15) at an urban high school were the study participants. An identical pretest and posttest were given before and after instruction. Students were given instruction with either the DORM (N = 45), the treatment method, or the CAM (N = 44), the control, for two days. After the posttest, 15 students were interviewed using a semistructured interview process. The pretest/posttest consisted of 23 numerical response questions and 2 to 6 free response questions that were graded using a rubric. A two-way ANOVA showed a significant interaction effect between the groups and the methods, F(1, 70) = 10.960, p = 0.001. Post hoc comparisons using the Bonferroni pairwise comparison showed that Regular Chemistry 1 students demonstrated larger gain scores when they had been taught the CAM (mean difference = 3.275, SE = 1.324), whereas Honors Chemistry 1 students performed better with the DORM, perhaps due to better math skills, enhanced working memory, and better metacognitive skills. Regular Chemistry 1 students performed better with the CAM, perhaps because it is more visual. Teachers may want to use the CAM or a direct-pairing method to introduce the topic and use the DORM in advanced classes when a correct structure is needed quickly.
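
    The group × method interaction reported here is standard two-way ANOVA output. A sketch of how such an analysis runs on gain scores (the data frame contents and column names are invented for illustration, not the study's data):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(1)
    # Invented gain scores: two course levels crossed with two teaching methods
    df = pd.DataFrame({
        "gain": rng.normal(5, 2, 80),
        "group": np.repeat(["regular", "honors"], 40),
        "method": np.tile(np.repeat(["DORM", "CAM"], 20), 2),
    })

    model = ols("gain ~ C(group) * C(method)", data=df).fit()
    print(anova_lm(model, typ=2))  # F tests for main effects and interaction
    ```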

  5. A comparison of methods for evaluating structure during ship collisions

    International Nuclear Information System (INIS)

    Ammerman, D.J.; Daidola, J.C.

    1996-01-01

    A comparison is provided of the results of various methods for evaluating structure during a ship-to-ship collision. The baseline vessel utilized in the analyses is a 67.4-meter-long displacement hull struck by an identical vessel traveling at speeds ranging from 10 to 30 knots. The structural response of the struck vessel and the motion of both the struck and striking vessels are assessed by finite element analysis. These same results are then compared to predictions utilizing the "Tanker Structural Analysis for Minor Collisions" (TSAMC) method, the Minorsky method, and the Haywood collision process, and to full-scale tests. Consideration is given to the nature of structural deformation, absorbed energy, penetration, rigid body motion, and the virtual mass affecting the hydrodynamic response. Insights are provided with regard to the calibration of the finite element model, which was achievable by utilizing the more empirical analyses, and the extent to which the finite element analysis is able to simulate the entire collision event. 7 refs., 8 figs., 4 tabs

  6. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  7. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
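
    As the abstract notes, the key computational step shared by these algorithms is approximating the extended observability matrix, in practice via an SVD of a block Hankel matrix built from the data, whose dominant singular values also reveal the model order. A generic, output-only sketch of that step (not any of the specific ERA/OM, N4SID or MOESP implementations; the signal is invented):

    ```python
    import numpy as np

    def block_hankel(y: np.ndarray, rows: int) -> np.ndarray:
        """Stack a scalar output sequence y into a Hankel matrix."""
        cols = len(y) - rows + 1
        return np.array([y[i:i + cols] for i in range(rows)])

    # Illustrative output of a lightly damped second-order system
    t = np.arange(500)
    y = np.exp(-0.01 * t) * np.sin(0.3 * t)

    H = block_hankel(y, rows=20)
    s = np.linalg.svd(H, compute_uv=False)
    # Sharp drop after the second singular value -> an order-2 model
    print(np.round(s[:6] / s[0], 4))
    ```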

  8. An efficient method to generate a perturbed parameter ensemble of a fully coupled AOGCM without flux-adjustment

    Directory of Open Access Journals (Sweden)

    P. J. Irvine

    2013-09-01

    Full Text Available We present a simple method to generate a perturbed parameter ensemble (PPE) of a fully-coupled atmosphere-ocean general circulation model (AOGCM), HadCM3, without requiring flux adjustment. The aim was to produce an ensemble that samples parametric uncertainty in some key variables and gives a plausible representation of the climate. Six atmospheric parameters, a sea-ice parameter and an ocean parameter were jointly perturbed within a reasonable range to generate an initial group of 200 members. To screen out implausible ensemble members, 20 yr pre-industrial control simulations were run, and members whose temperature responses to the parameter perturbations were projected to fall outside the range of 13.6 ± 2 °C, i.e. near the observed pre-industrial global mean, were discarded. Twenty-one members, including the standard unperturbed model, were accepted, covering almost the entire span of the eight parameters and challenging the argument that without flux adjustment parameter ranges would be unduly restricted. This ensemble was used in two experiments: an 800 yr pre-industrial control and a 150 yr quadrupled-CO2 simulation. The behaviour of the PPE in the pre-industrial control compared well to ERA-40 reanalysis data and the CMIP3 ensemble for a number of surface and atmospheric column variables, with the exception of a few members in the Tropics. However, we find that members of the PPE with low values of the entrainment rate coefficient show very large increases in upper tropospheric and stratospheric water vapour concentrations in response to elevated CO2, and one member showed an implausible nonlinear climate response; as such, these members will be excluded from future experiments with this ensemble. The outcome of this study is a PPE of a fully-coupled AOGCM which samples parametric uncertainty, and a simple methodology which would be applicable to other GCMs.

  9. Conceptualizing and measuring illness self-concept: a comparison with self-esteem and optimism in predicting fibromyalgia adjustment.

    Science.gov (United States)

    Morea, Jessica M; Friend, Ronald; Bennett, Robert M

    2008-12-01

    Illness self-concept (ISC), or the extent to which individuals are consumed by their illness, was theoretically described and evaluated with the Illness Self-Concept Scale (ISCS), a new 23-item scale, to predict adjustment in fibromyalgia. To establish convergent and discriminant validity, illness self-concept was compared to self-esteem and optimism in predicting health status, illness intrusiveness, depression, and life satisfaction. The ISCS demonstrated good reliability (alpha = .94; test-retest r = .80) and was a strong predictor of outcomes, even after controlling for optimism or self-esteem. The ISCS predicted unique variance in health-related outcomes; optimism and self-esteem did not, providing construct validation. Illness self-concept may play a significant role in coping with fibromyalgia and may prove useful in the evaluation of other chronic illnesses. (c) 2008 Wiley Periodicals, Inc.
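
    The reliability figure quoted here (alpha = .94) is Cronbach's alpha, computable directly from the item-score matrix. A small sketch with simulated responses (the data are invented to mimic a 23-item scale with one shared latent factor):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_respondents, n_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(2)
    # Simulated 23-item scale: a shared latent factor plus item noise
    latent = rng.normal(0, 1, (300, 1))
    scores = latent + 0.6 * rng.normal(0, 1, (300, 23))
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```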

  10. Impact of urine concentration adjustment method on associations between urine metals and estimated glomerular filtration rates (eGFR) in adolescents

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Virginia M., E-mail: vweaver@jhsph.edu [Department of Environmental Health Sciences, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD (United States); Johns Hopkins University School of Medicine, Baltimore, MD (United States); Welch Center for Prevention, Epidemiology, and Clinical Research, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD (United States); Vargas, Gonzalo García [Faculty of Medicine, University of Juárez of Durango State, Durango (Mexico); Secretaría de Salud del Estado de Coahuila, Coahuila, México (Mexico); Silbergeld, Ellen K. [Department of Environmental Health Sciences, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD (United States); Rothenberg, Stephen J. [Instituto Nacional de Salud Publica, Centro de Investigacion en Salud Poblacional, Cuernavaca, Morelos (Mexico); Fadrowski, Jeffrey J. [Johns Hopkins University School of Medicine, Baltimore, MD (United States); Welch Center for Prevention, Epidemiology, and Clinical Research, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD (United States); Rubio-Andrade, Marisela [Faculty of Medicine, University of Juárez of Durango State, Durango (Mexico); Parsons, Patrick J. [Laboratory of Inorganic and Nuclear Chemistry, Wadsworth Center, New York State Department of Health, Albany, NY (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, Albany, NY (United States); Steuerwald, Amy J. [Laboratory of Inorganic and Nuclear Chemistry, Wadsworth Center, New York State Department of Health, Albany, NY (United States); and others

    2014-07-15

    Positive associations between urine toxicant levels and measures of glomerular filtration rate (GFR) have been reported recently in a range of populations. The explanation for these associations, in a direction opposite that of traditional nephrotoxicity, is uncertain. Variation in associations by urine concentration adjustment approach has also been observed. Associations of urine cadmium, thallium and uranium in models of serum creatinine- and cystatin-C-based estimated GFR (eGFR) were examined using multiple linear regression in a cross-sectional study of adolescents residing near a lead smelter complex. Urine concentration adjustment approaches compared included urine creatinine, urine osmolality and no adjustment. Median age, blood lead and urine cadmium, thallium and uranium were 13.9 years, 4.0 μg/dL, 0.22, 0.27 and 0.04 g/g creatinine, respectively, in 512 adolescents. Urine cadmium and thallium were positively associated with serum creatinine-based eGFR only when urine creatinine was used to adjust for urine concentration (β coefficient=3.1 mL/min/1.73 m²; 95% confidence interval=1.4, 4.8 per each doubling of urine cadmium). Weaker positive associations, also only with urine creatinine adjustment, were observed between these metals and serum cystatin-C-based eGFR and between urine uranium and serum creatinine-based eGFR. Additional research using non-creatinine-based methods of adjustment for urine concentration is necessary. - Highlights: • Positive associations between urine metals and creatinine-based eGFR are unexpected. • Optimal approach to urine concentration adjustment for urine biomarkers uncertain. • We compared urine concentration adjustment methods. • Positive associations observed only with urine creatinine adjustment. • Additional research using non-creatinine-based methods of adjustment needed.

  11. Impact of urine concentration adjustment method on associations between urine metals and estimated glomerular filtration rates (eGFR) in adolescents

    International Nuclear Information System (INIS)

    Weaver, Virginia M.; Vargas, Gonzalo García; Silbergeld, Ellen K.; Rothenberg, Stephen J.; Fadrowski, Jeffrey J.; Rubio-Andrade, Marisela; Parsons, Patrick J.; Steuerwald, Amy J.

    2014-01-01

    Positive associations between urine toxicant levels and measures of glomerular filtration rate (GFR) have been reported recently in a range of populations. The explanation for these associations, in a direction opposite that of traditional nephrotoxicity, is uncertain. Variation in associations by urine concentration adjustment approach has also been observed. Associations of urine cadmium, thallium and uranium in models of serum creatinine- and cystatin-C-based estimated GFR (eGFR) were examined using multiple linear regression in a cross-sectional study of adolescents residing near a lead smelter complex. Urine concentration adjustment approaches compared included urine creatinine, urine osmolality and no adjustment. Median age, blood lead and urine cadmium, thallium and uranium were 13.9 years, 4.0 μg/dL, 0.22, 0.27 and 0.04 g/g creatinine, respectively, in 512 adolescents. Urine cadmium and thallium were positively associated with serum creatinine-based eGFR only when urine creatinine was used to adjust for urine concentration (β coefficient=3.1 mL/min/1.73 m²; 95% confidence interval=1.4, 4.8 per each doubling of urine cadmium). Weaker positive associations, also only with urine creatinine adjustment, were observed between these metals and serum cystatin-C-based eGFR and between urine uranium and serum creatinine-based eGFR. Additional research using non-creatinine-based methods of adjustment for urine concentration is necessary. - Highlights: • Positive associations between urine metals and creatinine-based eGFR are unexpected. • Optimal approach to urine concentration adjustment for urine biomarkers uncertain. • We compared urine concentration adjustment methods. • Positive associations observed only with urine creatinine adjustment. • Additional research using non-creatinine-based methods of adjustment needed
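
    The analysis described in both versions of this record is a multiple linear regression of eGFR on a log-transformed, concentration-adjusted urine metal. A sketch of the creatinine-adjusted model (variable names and data invented; the log2 transform makes the coefficient read as the change in eGFR per doubling of the metal, as reported above):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 512
    df = pd.DataFrame({
        "egfr": rng.normal(105, 15, n),                   # mL/min/1.73 m^2
        "ucd": np.exp(rng.normal(np.log(0.22), 0.5, n)),  # urine cadmium
        "ucreat": np.exp(rng.normal(0, 0.4, n)),          # urine creatinine
        "age": rng.uniform(12, 16, n),
    })

    # Creatinine-adjusted metal on the log2 scale
    df["log2_cd_cr"] = np.log2(df["ucd"] / df["ucreat"])
    fit = smf.ols("egfr ~ log2_cd_cr + age", data=df).fit()
    print(fit.params)
    ```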

  12. Comparison of seven optical clearing methods for mouse brain

    Science.gov (United States)

    Wan, Peng; Zhu, Jingtan; Yu, Tingting; Zhu, Dan

    2018-02-01

    Recently, a variety of tissue optical clearing techniques have been developed to reduce light scattering, enabling deeper imaging and three-dimensional reconstruction of tissue structures. Combined with optical imaging techniques and diverse labeling methods, these clearing methods have significantly promoted the development of neuroscience. However, most of the protocols were proposed for specific tissue types. Though some comparison results exist, the clearing methods covered are limited and the evaluation indices lack uniformity, which makes it difficult to select a best-fit protocol for clearing in practical applications. Hence, it is necessary to systematically assess and compare these clearing methods. In this work, we evaluated the performance of seven typical clearing methods, including 3DISCO, uDISCO, SeeDB, ScaleS, ClearT2, CUBIC and PACT, on mouse brain samples. First, we compared the clearing capability on both brain slices and whole brains by observing brain transparency. Further, we evaluated fluorescence preservation and the increase in imaging depth. The results showed that 3DISCO, uDISCO and PACT exhibited excellent clearing capability on mouse brains, ScaleS and SeeDB rendered moderate transparency, while ClearT2 was the worst. Among these methods, ScaleS was the best at fluorescence preservation, and PACT achieved the highest increase in imaging depth. This study is expected to provide an important reference for users in choosing the most suitable brain optical clearing method.

  13. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    Science.gov (United States)

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the segmentation schemes that are usable with little new parameterization and work robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to that of the MIL method. The results also indicate that morphological opening by reconstruction can improve the segmentation of cells stained with a marker that produces a dotted pattern on the cell surface. Copyright © 2011 International Society for Advancement of Cytometry.
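
    Watershed pipelines of the kind favoured here are commonly built from a distance transform, marker detection, and a marker-controlled watershed. A generic scikit-image sketch of that recipe (not the paper's exact parameterization; the test mask is synthetic):

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_cells(binary_mask: np.ndarray) -> np.ndarray:
        """Marker-controlled watershed on a binary foreground mask."""
        distance = ndi.distance_transform_edt(binary_mask)
        # One marker per local maximum of the distance map (one per cell)
        peaks = peak_local_max(distance, min_distance=10,
                               labels=binary_mask.astype(int))
        markers = np.zeros(binary_mask.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-distance, markers, mask=binary_mask)

    # Illustrative: two touching discs are split into two labels
    yy, xx = np.mgrid[0:100, 0:100]
    mask = (((yy - 50) ** 2 + (xx - 35) ** 2 < 225)
            | ((yy - 50) ** 2 + (xx - 62) ** 2 < 225))
    print(np.unique(segment_cells(mask)))  # background 0 plus two cell labels
    ```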

  14. A comparison of Nodal methods in neutron diffusion calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tavron, Barak [Israel Electric Company, Haifa (Israel) Nuclear Engineering Dept. Research and Development Div.

    1996-12-01

    The nuclear engineering department at IEC uses three neutron diffusion codes based on nodal methods for reactor analysis. The codes, GNOMER, ADMARC and NOXER, solve the neutron diffusion equation to obtain flux and power distributions in the core. The resulting flux distributions are used for fuel cycle analysis and for fuel reload optimization. This work presents a comparison of the various nodal methods employed in the above codes. Nodal methods (also called coarse-mesh methods) have been designed to solve problems that contain relatively coarse areas of homogeneous composition. In the nodal method, the parts of the equation that represent the state in a homogeneous area are solved analytically while, according to various assumptions and continuity requirements, a general solution is sought. Thus the efficiency of the method for this kind of problem is very high compared with the finite element and finite difference methods. On the other hand, using this method one can get only approximate information about the node vicinity (or coarse-mesh area, usually a fuel assembly of about 20 cm size). These characteristics of the nodal method make it suitable for fuel cycle analysis and reload optimization. This analysis requires many subsequent calculations of the flux and power distributions for the fuel assemblies, while there is no need for a detailed distribution within the assembly. For obtaining a detailed distribution within the assembly, methods of power reconstruction may be applied. However, the homogenization of fuel assembly properties required for the nodal method may cause difficulties when applied to fuel assemblies with many absorber rods, due to the existing strong heterogeneity of neutron properties within the assembly. (author).

  15. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' for marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared before and after the application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was classified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are already known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
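
    For orientation, a minimal sketch of the classical Benjamini-Hochberg step-up procedure, the family of FDR adjustments that the rFDR method refines (the p-values are invented for illustration):

    ```python
    import numpy as np

    def benjamini_hochberg(pvals, q: float = 0.05) -> np.ndarray:
        """Return a boolean mask of discoveries at FDR level q (BH step-up)."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        thresholds = q * np.arange(1, m + 1) / m
        below = p[order] <= thresholds
        k = below.nonzero()[0].max() + 1 if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject

    # Illustrative drug-event p-values: a few strong signals among noise
    pvals = np.array([0.0001, 0.0004, 0.019, 0.03, 0.2, 0.5, 0.8])
    print(benjamini_hochberg(pvals, q=0.05))
    ```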

  16. COMPARISON OF TWO METHODS FOR THE DETECTION OF LISTERIA MONOCYTOGENES

    Directory of Open Access Journals (Sweden)

    G. Tantillo

    2013-02-01

    Full Text Available The aim of this study was to compare the performance of the conventional methods for the detection of Listeria monocytogenes in food, using Oxford and ALOA (Agar Listeria according to Ottaviani & Agosti) media in accordance with ISO 11290-1, with a new chromogenic medium, "CHROMagar Listeria", standardized in 2005 by AFNOR (CHR-21/1-12/01). A total of 40 pre-packed ready-to-eat food samples were examined. With both methods, six samples were found positive for Listeria monocytogenes, but the "CHROMagar Listeria" medium was more selective than the others. In conclusion, this study demonstrated that an isolation medium that specifically targets the detection of L. monocytogenes, such as "CHROMagar Listeria", is highly recommendable because the detection time is significantly reduced and the analysis is less expensive.

  17. Comparison of three retail data communication and transmission methods

    Directory of Open Access Journals (Sweden)

    MA Yue

    2016-04-01

    Full Text Available With the rapid development of retail trade, the type and complexity of data keep increasing, and individual data files differ greatly in size. How to realize accurate, real-time and efficient data transmission at a fixed cost is therefore an important problem. Regarding the problem of effective transmission of business data files, this article analyzes and compares three existing data transmission methods. Considering requirements such as functionality within an enterprise data communication system, we conclude which method can best meet the enterprise's daily business development requirements while offering good extensibility.

  18. Differential and difference equations a comparison of methods of solution

    CERN Document Server

    Maximon, Leonard C

    2016-01-01

    This book, intended for researchers and graduate students in physics, applied mathematics and engineering, presents a detailed comparison of the important methods of solution for linear differential and difference equations - variation of constants, reduction of order, Laplace transforms and generating functions - bringing out the similarities as well as the significant differences in the respective analyses. Equations of arbitrary order are studied, followed by a detailed analysis for equations of first and second order. Equations with polynomial coefficients are considered and explicit solutions for equations with linear coefficients are given, showing significant differences in the functional form of solutions of differential equations from those of difference equations. An alternative method of solution involving transformation of both the dependent and independent variables is given for both differential and difference equations. A comprehensive, detailed treatment of Green’s functions and the associat...

  19. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado

    2014-01-01

    The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally, 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). ANOVA […] Across the method comparisons, the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers by CATA is comparable to that of Napping. The choice of methodology for consumer…
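
    The RV coefficient quoted here is a matrix correlation between two multivariate configurations of the same samples. A small numpy sketch (the two product-by-attribute matrices are invented; with both derived from the same latent configuration, the RV comes out high):

    ```python
    import numpy as np

    def rv_coefficient(x: np.ndarray, y: np.ndarray) -> float:
        """RV coefficient between two data matrices with the same rows
        (samples), e.g. CATA vs Napping configurations of the beers."""
        xc = x - x.mean(axis=0)
        yc = y - y.mean(axis=0)
        sx, sy = xc @ xc.T, yc @ yc.T
        return np.trace(sx @ sy) / np.sqrt(np.trace(sx @ sx) * np.trace(sy @ sy))

    rng = np.random.default_rng(4)
    base = rng.normal(size=(8, 2))             # 8 beers in a latent space
    cata = base @ rng.normal(size=(2, 5))      # invented CATA configuration
    napping = base @ rng.normal(size=(2, 4))   # invented Napping configuration
    print(f"RV = {rv_coefficient(cata, napping):.2f}")
    ```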

  20. National comparison on volume sample activity measurement methods

    International Nuclear Information System (INIS)

    Sahagia, M.; Grigorescu, E.L.; Popescu, C.; Razdolescu, C.

    1992-01-01

    A national comparison on volume sample activity measurement methods may be regarded as a step toward accomplishing the traceability of environmental and food chain activity measurements to national standards. For this purpose, the Radionuclide Metrology Laboratory has distributed 137Cs and 134Cs water-equivalent solid standard sources to 24 laboratories having responsibilities in this matter. Every laboratory has to measure the activity of the received source(s) using its own standards, equipment and methods, and report the obtained results to the organizer. The 'measured activities' will be compared with the 'true activities'. A final report will be issued, which will evaluate the national level of precision of such measurements and give some suggestions for improvement. (Author)

  1. Comparison of the dose evaluation methods for criticality accident

    International Nuclear Information System (INIS)

    Shimizu, Yoshio; Oka, Tsutomu

    2004-01-01

    The improvement of dose evaluation methods for criticality accidents is important for rationalizing the design of nuclear fuel cycle facilities. The source spectra of neutrons and gamma rays from a criticality accident depend on the condition of the source: its materials, moderation, density and so on. A comparison of dose evaluation methods for a criticality accident is made. Several methods, each a combination of a criticality calculation and a shielding calculation, are proposed. Prompt neutron and gamma-ray doses from nuclear criticality of some uranium systems have been evaluated in the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by several calculation methods, each a combination of code and cross-section library, as follows: (a) SAS1X (ENDF/B-IV), (b) MCNP4C (ENDF/B-VI)-ANISN (DLC23E or JSD120), (c) MCNP4C-MCNP4C (ENDF/B-VI). Each consists of a criticality calculation followed by a shielding calculation. These calculation methods are compared on the tissue absorbed dose and the spectra at 2 m from the source. (author)

  2. Technical and economic comparison of irradiation and conventional methods

    International Nuclear Information System (INIS)

    1988-04-01

    Radiation processing is based on the use of ionizing radiation (gamma radiation, high energy electrons) as a source of energy in different industrial applications. For about 30 years of steady growth it became almost routine in a number of industrial processes. The growth resulted, among other things, from the availability of radiation sources matching requirements of industrial production and from the wide basis of fundamental and applied radiation research. The most important, however, was the fact that radiation processing proved in practice to be safe, reliable, easy to control, and economical. In comparison with other competitive techniques, it also offered a better alternative from the viewpoint of occupational safety and environmental considerations. In order to review the current status of comparative analysis of radiation and competitive techniques from the technological and economic viewpoint, the Agency convened an Advisory Group in Dubrovnik, Yugoslavia, 6-8 October 1986. The present publication is based on contributions presented at the meeting, discussions held at the meeting, and subsequent editing work. It is expected that this updated technological and economic comparative analysis will provide useful information for all those who are implementing or considering the introduction of radiation processing to industry. The presented data may be essential for decision makers and could contribute to all feasibility and pre-investment studies. As such, this publication is expected to promote industrially oriented projects based on radiation processing techniques. The actual figures given in the individual reports are as accurate as possible. However, it should be understood that time factor and local conditions may play a significant role and corresponding adjustment may be required. Refs, figs and tabs

  3. The barriers to and enablers of providing reasonably adjusted health services to people with intellectual disabilities in acute hospitals: evidence from a mixed-methods study.

    Science.gov (United States)

    Tuffrey-Wijne, Irene; Goulding, Lucy; Giatras, Nikoletta; Abraham, Elisabeth; Gillard, Steve; White, Sarah; Edwards, Christine; Hollins, Sheila

    2014-04-16

    To identify the factors that promote and compromise the implementation of reasonably adjusted healthcare services for patients with intellectual disabilities in acute National Health Service (NHS) hospitals. A mixed-methods study involving interviews, questionnaires and participant observation (July 2011-March 2013). Six acute NHS hospital trusts in England. Reasonable adjustments for people with intellectual disabilities were identified through the literature. Data were collected on implementation and staff understanding of these adjustments. Data collected included staff questionnaires (n=990), staff interviews (n=68), interviews with adults with intellectual disabilities (n=33), questionnaires (n=88) and interviews (n=37) with carers of patients with intellectual disabilities, and expert panel discussions (n=42). Hospital strategies that supported implementation of reasonable adjustments did not reliably translate into consistent provision of such adjustments. Good practice often depended on the knowledge, understanding and flexibility of individual staff and teams, leading to the delivery of reasonable adjustments being haphazard throughout the organisation. Major barriers included: lack of effective systems for identifying and flagging patients with intellectual disabilities, lack of staff understanding of the reasonable adjustments that may be needed, lack of clear lines of responsibility and accountability for implementing reasonable adjustments, and lack of allocation of additional funding and resources. Key enablers were the Intellectual Disability Liaison Nurse and the ward manager. The evidence suggests that ward culture, staff attitudes and staff knowledge are crucial in ensuring that hospital services are accessible to vulnerable patients. The authors suggest that flagging the need for specific reasonable adjustments, rather than the vulnerable condition itself, may address some of the barriers. Further research is recommended that describes and

  4. Comparison of DNA Quantification Methods for Next Generation Sequencing.

    Science.gov (United States)

    Robin, Jérôme D; Ludlow, Andrew T; LaRanger, Ryan; Wright, Woodring E; Shay, Jerry W

    2016-04-06

    Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C; 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library's heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR; ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. DdPCR-Tail is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality.

  5. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
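
    Among the regression models the site offers, Deming regression has a simple closed form once an error-variance ratio λ is assumed. A minimal numpy sketch (λ = 1, i.e. an orthogonal fit, and the paired measurement data are invented):

    ```python
    import numpy as np

    def deming(x: np.ndarray, y: np.ndarray, lam: float = 1.0):
        """Deming regression slope and intercept; lam is the ratio of the
        y-error variance to the x-error variance (1.0 = orthogonal fit)."""
        xm, ym = x.mean(), y.mean()
        sxx = ((x - xm) ** 2).mean()
        syy = ((y - ym) ** 2).mean()
        sxy = ((x - xm) * (y - ym)).mean()
        slope = (syy - lam * sxx
                 + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
        return slope, ym - slope * xm

    rng = np.random.default_rng(5)
    truth = rng.uniform(1, 10, 50)
    method_a = truth + rng.normal(0, 0.2, 50)         # comparative method
    method_b = 1.05 * truth + rng.normal(0, 0.2, 50)  # test method, 5% bias
    print(deming(method_a, method_b))  # slope near 1.05, intercept near 0
    ```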

  6. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Full Text Available Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land use change, and climate shifts. A plethora of methods is available for describing the hydraulic geometry of alluvial rivers in regime. However, a comparison of these methods using the same set of data has been lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar's method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and 'ME and MEDR' give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses which do not contain bank stability criteria are inappropriate for use is shown to be false, as both MFN and 'ME and MEDR' lack bank stability criteria. Also, we find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and 'ME and MEDR'.
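
    MFN selects, among channel geometries satisfying a flow-resistance law, the one minimizing the Froude number Fr = V/√(gD). The sketch below shows only the evaluation step, normal depth from Manning's equation and then Fr, for a few candidate widths; the full regime method couples this with sediment transport and further constraints, and all numbers (discharge, roughness, slope) are invented:

    ```python
    import numpy as np

    def normal_depth(q, b, n=0.025, s=5e-4):
        """Normal depth of a rectangular channel, by bisection on
        Manning's equation Q = (1/n) * A * R^(2/3) * sqrt(S)."""
        def manning_q(h):
            area = b * h
            radius = area / (b + 2 * h)  # hydraulic radius
            return area * radius ** (2 / 3) * np.sqrt(s) / n
        lo, hi = 1e-6, 50.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if manning_q(mid) < q else (lo, mid)
        return 0.5 * (lo + hi)

    def froude(q, b, h, g=9.81):
        """Fr = V / sqrt(g * D); hydraulic depth D = h for a rectangle."""
        return q / (b * h) / np.sqrt(g * h)

    q = 50.0  # design discharge, m^3/s
    for b in (10.0, 20.0, 40.0):
        h = normal_depth(q, b)
        print(f"b = {b:4.0f} m  h = {h:.2f} m  Fr = {froude(q, b, h):.3f}")
    ```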

  7. Comparison of DUPIC fuel composition heterogeneity control methods

    International Nuclear Information System (INIS)

    Choi, Hang Bok; Ko, Won Il

    1999-08-01

    A method to reduce the fuel composition heterogeneity effect on the core performance parameters has been studied for the DUPIC fuel which is made of spent pressurized water reactor (PWR) fuels by a dry refabrication process. This study focuses on the reactivity control method which uses either slightly enriched, depleted, or natural uranium to minimize the cost rise effect on the manufacturing of DUPIC fuel, when adjusting the excess reactivity of the spent PWR fuel. In order to reduce the variation of isotopic composition of the DUPIC fuel, the inter-assembly mixing operation was taken three times. Then, three options have been considered: reactivity control by slightly enriched and depleted uranium, reactivity control by natural uranium for high reactivity spent PWR fuels, and reactivity control by natural uranium for linear reactivity spent PWR fuels. The results of this study have shown that the reactivity of DUPIC fuel can be tightly controlled with the minimum amount of fresh uranium feed. For the reactivity control by slightly enriched and depleted uranium, all the spent PWR fuels can be utilized as the DUPIC fuel and the fraction of fresh uranium feed is 3.4% on an average. For the reactivity control by natural uranium, about 88% of spent PWR fuel can be utilized as the DUPIC fuel when the linear reactivity spent PWR fuels are used, and the amount of natural uranium feed needed to control the DUPIC fuel reactivity is negligible. (author). 13 refs., 16 tabs., 6 figs

  8. Comparison of DUPIC fuel composition heterogeneity control methods

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hang Bok; Ko, Won Il [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    A method to reduce the fuel composition heterogeneity effect on the core performance parameters has been studied for the DUPIC fuel which is made of spent pressurized water reactor (PWR) fuels by a dry refabrication process. This study focuses on the reactivity control method which uses either slightly enriched, depleted, or natural uranium to minimize the cost rise effect on the manufacturing of DUPIC fuel, when adjusting the excess reactivity of the spent PWR fuel. In order to reduce the variation of isotopic composition of the DUPIC fuel, the inter-assembly mixing operation was taken three times. Then, three options have been considered: reactivity control by slightly enriched and depleted uranium, reactivity control by natural uranium for high reactivity spent PWR fuels, and reactivity control by natural uranium for linear reactivity spent PWR fuels. The results of this study have shown that the reactivity of DUPIC fuel can be tightly controlled with the minimum amount of fresh uranium feed. For the reactivity control by slightly enriched and depleted uranium, all the spent PWR fuels can be utilized as the DUPIC fuel and the fraction of fresh uranium feed is 3.4% on an average. For the reactivity control by natural uranium, about 88% of spent PWR fuel can be utilized as the DUPIC fuel when the linear reactivity spent PWR fuels are used, and the amount of natural uranium feed needed to control the DUPIC fuel reactivity is negligible. 13 refs., 6 figs., 16 tabs. (Author)

  9. On the importance of adjusting for distorting factors in benchmarking analysis, as illustrated by a cost comparison of the different forms of implementation of the EU Packaging Directive.

    Science.gov (United States)

    Baum, Heinz-Georg; Schuch, Dieter

    2017-12-01

    Benchmarking is a proven and widely used business tool for identifying best practice. To produce robust results, the objects of comparison used in benchmarking analysis need to be structurally comparable and distorting factors need to be eliminated. We focus on a specific example - a benchmark study commissioned by the European Commission's Directorate-General for Environment on the implementation of Extended Producer Responsibility (EPR) for packaging at the national level - to discuss potential distorting factors and take them into account in the calculation. The cost of compliance per inhabitant and year, which is used as the key cost efficiency indicator in the study, is adjusted to take account of seven factors. The results clearly show that differences in performance may play a role, but the (legal) implementation of EPR - which is highly heterogeneous across countries - is the single most important cost determinant and must be taken into account to avoid misinterpretation and false conclusions.

  10. An empirical comparison of several recent epistatic interaction detection methods.

    Science.gov (United States)

    Wang, Yue; Liu, Guimei; Feng, Mengling; Wong, Limsoon

    2011-11-01

    Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods, TEAM, BOOST, SNPHarvester, SNPRuler and Screen and Clean (SC), are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effects and BOOST performs best on data without main effects. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester. SC does not control the type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64-bit Ubuntu system with an Intel(R) Xeon(R) CPU at 2.66 GHz and 16 GB memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and its cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effects and 60% of datasets with main effects are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/. wangyue@nus.edu.sg.

  11. Specific activity measurement of 64Cu: A comparison of methods

    International Nuclear Information System (INIS)

    Mastren, Tara; Guthrie, James; Eisenbeis, Paul; Voller, Tom; Mebrahtu, Efrem; Robertson, J. David; Lapi, Suzanne E.

    2014-01-01

    Effective specific activity of 64Cu (amount of radioactivity per µmol metal) is important in order to determine the purity of a particular 64Cu lot and to assist in optimization of the purification process. Metal impurities can affect effective specific activity, and therefore it is important to have a simple method that can measure trace amounts of metals. This work shows that ion chromatography (IC) yields similar results to ICP mass spectrometry for copper, nickel and iron contaminants in 64Cu production solutions. - Highlights: • Comparison of TETA titration, ICP mass spectrometry, and ion chromatography to measure specific activity. • Validates ion chromatography by using ICP mass spectrometry as the “gold standard”. • Shows different types and amounts of metal impurities present in 64Cu

  12. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and the descriptor used is vital to its performance. The behavior of moments as powerful descriptors had not previously been examined in the context of logo recognition, so it is unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we investigate the relations between moments and logos under different transforms, i.e., which moments are suited to logos subjected to which transforms. The open datasets from the University of Maryland are employed. The moment-based comparisons are carried out for logos with noise, rotation, scaling, and combined rotation and scaling.

  13. Comparison of calculational methods for liquid metal reactor shields

    International Nuclear Information System (INIS)

    Carter, L.L.; Moore, F.S.; Morford, R.J.; Mann, F.M.

    1985-09-01

    A one-dimensional comparison is made between Monte Carlo (MCNP), discrete ordinates (ANISN), and diffusion theory (MlDX) calculations of neutron flux and radiation damage from the core of the Fast Flux Test Facility (FFTF) out to the reactor vessel. Diffusion theory was found to be reasonably accurate for the calculation of both total flux and radiation damage. However, for large distances from the core, the calculated flux at very high energies is low by an order of magnitude or more when diffusion theory is used. Particular emphasis was placed in this study on the generation of multitable cross sections for use in discrete ordinates codes that are self-shielded, consistent with the self-shielding employed in the generation of cross sections for use with diffusion theory. The Monte Carlo calculation, with a pointwise representation of the cross sections, was used as the benchmark for determining the limitations of the other two calculational methods. 12 refs., 33 figs.

  14. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capability of both Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.

  15. Comparison of deterministic and Monte Carlo methods in shielding design

    International Nuclear Information System (INIS)

    Oliveira, A. D.; Oliveira, C.

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capability of both Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions. (authors)
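
    The point-kernel model that deterministic codes of this kind implement reduces, for a point source behind a slab, to exponential attenuation times a build-up factor. A minimal sketch of that calculation follows; all numerical values (source strength, attenuation coefficient, Taylor-form build-up coefficients) are illustrative assumptions, not data from the paper or from MicroShield.

        import math

        def point_source_flux(s_per_s, r_cm, mu_per_cm, t_cm, buildup=1.0):
            """Photon flux (photons/cm^2/s) at distance r_cm from an isotropic
            point source of strength s_per_s behind a slab of thickness t_cm,
            with linear attenuation coefficient mu_per_cm and build-up factor."""
            uncollided = s_per_s * math.exp(-mu_per_cm * t_cm) / (4.0 * math.pi * r_cm**2)
            return buildup * uncollided

        # Illustrative case: 1e6 photons/s, 5 cm shield, mu = 0.15 /cm.
        mu, t = 0.15, 5.0
        mfp = mu * t                      # shield thickness in mean free paths
        # Taylor-form build-up B = A*exp(-a1*mu*t) + (1 - A)*exp(-a2*mu*t),
        # with purely illustrative coefficients:
        A, a1, a2 = 12.0, -0.10, 0.03
        B = A * math.exp(-a1 * mfp) + (1 - A) * math.exp(-a2 * mfp)
        print(point_source_flux(1e6, 100.0, mu, t, buildup=B))

    Setting buildup=1.0 reproduces the uncollided flux only; the gap between the two results is exactly the correction whose extrapolation to low energies the abstract identifies as a source of error.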

  16. Comparison of optical methods for surface roughness characterization

    International Nuclear Information System (INIS)

    Feidenhans’l, Nikolaj A; Hansen, Poul-Erik; Madsen, Morten H; Petersen, Jan C; Pilný, Lukáš; Bissacco, Giuliano; Taboryski, Rafael

    2015-01-01

    We report a study of the correlation between three optical methods for characterizing surface roughness: a laboratory scatterometer measuring the bi-directional reflection distribution function (BRDF instrument), a simple commercial scatterometer (rBRDF instrument), and a confocal optical profiler. For each instrument, the effective range of spatial surface wavelengths is determined, and the common bandwidth is used when comparing the evaluated roughness parameters. The compared roughness parameters are: the root-mean-square (RMS) profile deviation (Rq), the RMS profile slope (Rdq), and the variance of the scattering angle distribution (Aq). The twenty-two investigated samples were manufactured with several methods in order to obtain a suitable diversity of roughness patterns. Our study shows a one-to-one correlation of both the Rq and the Rdq roughness values when obtained with the BRDF and the confocal instruments, if the common bandwidth is applied. Likewise, a correlation is observed when determining the Aq value with the BRDF and the rBRDF instruments. Furthermore, we show that it is possible to determine the Rq value from the Aq value, by applying a simple transfer function derived from the instrument comparisons. The presented method is validated for surfaces with predominantly 1D roughness, i.e. consisting of parallel grooves of various periods, and a reflectance similar to stainless steel. The Rq values are predicted with an accuracy of 38% at the 95% confidence interval. (paper)
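
    For readers who want to reproduce the two profile parameters on their own data, the sketch below computes Rq and Rdq from a uniformly sampled height profile using their standard RMS definitions; the sinusoidal test profile is synthetic and only stands in for the grooved surfaces described above.

        import numpy as np

        def rq_rdq(z, dx):
            """RMS profile deviation Rq and RMS profile slope Rdq for heights z
            sampled at uniform spacing dx (z and dx in the same length unit)."""
            z = z - z.mean()                  # remove the mean line
            rq = np.sqrt(np.mean(z**2))       # RMS height deviation
            slope = np.gradient(z, dx)        # local slope dz/dx
            rdq = np.sqrt(np.mean(slope**2))  # RMS slope
            return rq, rdq

        # Synthetic grooved surface: 10 um period, 0.05 um amplitude.
        x = np.arange(0.0, 100.0, 0.05)       # positions in um
        z = 0.05 * np.sin(2 * np.pi * x / 10.0)
        print(rq_rdq(z, dx=0.05))             # Rq ~ 0.035 um for a sine profile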

  17. A comparison of radiological risk assessment methods for environmental restoration

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Peterson, J.M.

    1993-01-01

    Evaluation of risks to human health from exposure to ionizing radiation at radioactively contaminated sites is an integral part of the decision-making process for determining the need for remediation and selecting remedial actions that may be required. At sites regulated under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), a target risk range of 10⁻⁴ to 10⁻⁶ incremental cancer incidence over a lifetime is specified by the US Environmental Protection Agency (EPA) as generally acceptable, based on the reasonable maximum exposure to any individual under current and future land use scenarios. Two primary methods currently being used in conducting radiological risk assessments at CERCLA sites are compared in this analysis. Under the first method, the radiation dose equivalent (i.e., Sv or rem) to the receptors of interest over the appropriate period of exposure is estimated and multiplied by a risk factor (cancer risk/Sv). Alternatively, incremental cancer risk can be estimated by combining the EPA's cancer slope factors (previously termed potency factors) for radionuclides with estimates of radionuclide intake by ingestion and inhalation, as well as radionuclide concentrations in soil that contribute to external dose. The comparison of the two methods has demonstrated that resulting estimates of lifetime incremental cancer risk under these different methods may differ significantly, even when all other exposure assumptions are held constant, with the magnitude of the discrepancy depending upon the dominant radionuclides and exposure pathways for the site. The basis for these discrepancies, the advantages and disadvantages of each method, and the significance of the discrepant results for environmental restoration decisions are presented
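
    The two calculation routes can be made concrete with a small worked example. In the sketch below every number (dose, intakes, risk factor, slope factors) is an illustrative placeholder rather than an EPA value; the point is only that the two routes need not agree.

        # Route 1: dose equivalent x risk factor.
        dose_sv = 0.010              # assumed committed dose over the exposure period, Sv
        risk_per_sv = 0.05           # assumed lifetime cancer risk per Sv
        risk_route1 = dose_sv * risk_per_sv

        # Route 2: slope factors x intakes (external-dose term omitted here).
        ingestion_bq = 5.0e4         # assumed radionuclide intake by ingestion, Bq
        sf_ingestion = 2.0e-9        # assumed slope factor, risk per Bq ingested
        inhalation_bq = 1.0e3        # assumed intake by inhalation, Bq
        sf_inhalation = 1.0e-8       # assumed slope factor, risk per Bq inhaled
        risk_route2 = ingestion_bq * sf_ingestion + inhalation_bq * sf_inhalation

        print(risk_route1, risk_route2)   # 5.0e-04 vs 1.1e-04 for the same site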

  18. A comparison of methods to determine tannin acyl hydrolase activity

    Directory of Open Access Journals (Sweden)

    Cristóbal Aguilar

    1999-01-01

    Full Text Available Six methods to determine the activity of tannase produced by Aspergillus niger Aa-20 on polyurethane foam by solid-state fermentation, comprising two titrimetric techniques, three spectrophotometric methods and one HPLC assay, were tested and compared. All methods assayed enabled the measurement of extracellular tannase activity. However, only five were useful for evaluating intracellular tannase activity. Studies on the effect of pH on tannase extraction demonstrated that tannase activity was considerably under-estimated when its extraction was carried out at pH values below 5.5 and above 6.0. Results showed that the HPLC technique and the modified Bajpai and Patil method presented several advantages in comparison to the other methods tested.

  19. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, November 2013

    International Nuclear Information System (INIS)

    De Saint Jean, C.; Dupont, E.; Dyrda, J.; Hursin, M.; Pelloni, S.; Ishikawa, M.; Ivanov, E.; Ivanova, T.; Kim, D.H.; Ee, Y.O.; Kodeli, I.; Leal, L.; Leichtle, D.; Palmiotti, G.; Salvatores, M.; Pronyaev, V.; Simakov, S.

    2013-11-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the first formal Subgroup 39 meeting held at the NEA, Issy-les-Moulineaux, France, on 28-29 November 2013. It comprises a Summary Record of the meeting and all the available presentations (slides) given by the participants: A - Recent data adjustments performances and trends: 1 - Recommendations from ADJ2010 adjustment (M. Ishikawa); 2 - Feedback on CIELO isotopes from ENDF/B-VII.0 adjustment (G. Palmiotti); 3 - Sensitivity and uncertainty results on FLATTOP-Pu (I. Kodeli); 4 - SG33 benchmark: Comparative adjustment results (S. Pelloni) 5 - Integral benchmarks for data assimilation: selection of a consistent set and establishment of integral correlations (E. Ivanov); 6 - PROTEUS experimental data (M. Hursin); 7 - Additional information on High Conversion Light Water Reactor (HCLWR aka FDWR-II) experiments (14 January 2014); 8 - Data assimilation of benchmark experiments for homogenous thermal/epithermal uranium systems (J. Dyrda); B - Methodology issues: 1 - Adjustment methodology issues (G. Palmiotti); 2 - Marginalisation, methodology issues and nuclear data parameter adjustment (C. De Saint Jean); 3 - Nuclear data parameter adjustment (G. Palmiotti). A list of issues and actions conclude the document

  20. Comparison of three methods of exposing rats to cigarette smoke

    Energy Technology Data Exchange (ETDEWEB)

    Mauderly, J L; Bechtold, W E; Bond, J A; Brooks, A L; Chen, B T; Cuddihy, R G; Harkema, J R; Henderson, R F; Johnson, N F; Ritchideh, K; Thomassen, D G

    1988-12-01

    We compared smoke composition and biological effects resulting from exposures of rats for 5 wk to cigarette smoke by nose-only intermittent (NOI), nose-only continuous (NOC) and whole-body continuous (WBC) exposure methods. Exposure concentrations and times were adjusted to achieve the same daily concentration x time product for particulate matter. There were few differences in smoke composition or biological effects among exposure modes. WBC smoke was lower in particle-borne nicotine and higher in some organic vapors and carbon monoxide than smoke in nose-only modes. Body weight was depressed less by WBC than by NOI or NOC exposures. Plasma and urine nicotine levels were higher for WBC than for NOI or NOC, suggesting greater absorption from body surfaces or by grooming. Smoke exposures increased nasal epithelial proliferation, tracheal epithelial cell transformation, chromosomal aberrations in alveolar macrophages, and lung DNA adduct levels, and caused inflammatory changes in airway fluid and slight alterations of respiratory function, but there were no significant differences among exposure modes. The results indicate that WBC exposures should produce long-term effects similar to those of nose-only exposures, but might allow increased delivery of smoke to lungs while reducing stress, acute toxicity and the manpower requirements associated with performing these experiments. (author)

  1. Comparison of three methods of exposing rats to cigarette smoke

    International Nuclear Information System (INIS)

    Mauderly, J.L.; Bechtold, W.E.; Bond, J.A.; Brooks, A.L.; Chen, B.T.; Cuddihy, R.G.; Harkema, J.R.; Henderson, R.F.; Johnson, N.F.; Ritchideh, K.; Thomassen, D.G.

    1988-01-01

    We compared smoke composition and biological effects resulting from exposures of rats for 5 wk to cigarette smoke by nose-only intermittent (NOI), nose-only continuous (NOC) and whole-body continuous (WBC) exposure methods. Exposure concentrations and times were adjusted to achieve the same daily concentration x time product for particulate matter. There were few differences in smoke composition or biological effects among exposure modes. WBC smoke was lower in particle-borne nicotine and higher in some organic vapors and carbon monoxide than smoke in nose-only modes. Body weight was depressed less by WBC than by NOI or NOC exposures. Plasma and urine nicotine levels were higher for WBC than for NOI or NOC, suggesting greater absorption from body surfaces or by grooming. Smoke exposures increased nasal epithelial proliferation, tracheal epithelial cell transformation, chromosomal aberrations in alveolar macrophages, and lung DNA adduct levels, and caused inflammatory changes in airway fluid and slight alterations of respiratory function, but there were no significant differences among exposure modes. The results indicate that WBC exposures should produce long-term effects similar to those of nose-only exposures, but might allow increased delivery of smoke to lungs while reducing stress, acute toxicity and the manpower requirements associated with performing these experiments. (author)

  2. The choice of statistical methods for comparisons of dosimetric data in radiotherapy

    International Nuclear Information System (INIS)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-01-01

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method computes the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the newer methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, by (−5 ± 4.4 SD) for MB and (−4.7 ± 5 SD) for ETAR on average. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density
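
    The test sequence described in this abstract maps directly onto standard library calls. The sketch below runs the same chain (normality, variance homogeneity, overall non-parametric comparison, paired post-hoc test, rank correlations) on randomly generated stand-in data, not on the study's 62 fields.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        pbc = rng.normal(100, 5, 62)            # stand-in monitor units, reference method
        mb = pbc * rng.normal(0.95, 0.02, 62)   # stand-in 1D density-corrected doses
        etar = pbc * rng.normal(0.95, 0.03, 62) # stand-in 3D density-corrected doses

        print(stats.shapiro(pbc))                      # normality of one group
        print(stats.levene(pbc, mb, etar))             # homogeneity of variance
        print(stats.friedmanchisquare(pbc, mb, etar))  # overall effect of method
        print(stats.wilcoxon(pbc, mb))                 # post-hoc paired comparison
        print(stats.spearmanr(pbc, mb))                # rank correlations between methods
        print(stats.kendalltau(pbc, mb))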

  3. Comparison of methods to identify crop productivity constraints in developing countries. A review

    NARCIS (Netherlands)

    Kraaijvanger, R.G.M.; Sonneveld, M.P.W.; Almekinders, C.J.M.; Veldkamp, T.

    2015-01-01

    Selecting a method for identifying actual crop productivity constraints is an important step for triggering innovation processes. Applied methods can be diverse and, although such methods have consequences for the design of intervention strategies, documented comparisons between various methods are scarce.

  4. A Comparison of underground opening support design methods in jointed rock mass

    International Nuclear Information System (INIS)

    Gharavi, M.; Shafiezadeh, N.

    2008-01-01

    It is of great importance to consider the long-term stability of the rock mass around the openings of underground structures during the design, construction and operation of the said structures in rock. In this context, three methods, namely empirical, analytical and numerical, have been applied to design and analyze the stability of underground infrastructure at the Siah Bisheh Pumped Storage Hydro-Electric Power Project in Iran. The geological and geotechnical data utilized in this article were selected based on the preliminary studies of this project. In the initial stages of design, it was recommended that two methods of rock mass classification, Q and rock mass rating, be utilized for the support system of the underground cavern. Next, based on the structural instability, the support system was adjusted by the analytical method. The performance of the recommended support system was reviewed by comparing the ground response curve and rock support interaction with the surrounding rock mass, using FEST03 software. Moreover, for further assessment of the realistic rock mass behavior and support system, numerical modeling was performed utilizing FEST03 software. Finally, both the analytical and numerical methods were compared, yielding satisfactory results that complement each other.

  5. Comparison of measurement methods for benzene and toluene

    Science.gov (United States)

    Wideqvist, U.; Vesely, V.; Johansson, C.; Potter, A.; Brorström-Lundén, E.; Sjöberg, K.; Jonsson, T.

    Diffusive sampling and active (pumped) sampling (tubes filled with Tenax TA or Carbopack B) were compared with an automatic BTX instrument (Chrompack, GC/FID) for measurements of benzene and toluene. The measurements were made during differing pollution levels and different weather conditions at a roof-top site and in a densely trafficked street canyon in Stockholm, Sweden. The BTX instrument was used as the reference method for comparison with the other methods. Considering all data, the Perkin-Elmer diffusive samplers, containing Tenax TA and assuming a constant uptake rate of 0.406 cm³ min⁻¹, showed about 30% higher benzene values compared to the BTX instrument. This discrepancy may be explained by a dose-dependent uptake rate, with higher uptake rates at lower dose, as suggested by laboratory experiments presented in the literature. After correction by applying the relationship between uptake rate and dose as suggested by Roche et al. (Atmos. Environ. 33 (1999) 1905), the two methods agreed almost perfectly. For toluene there was much better agreement between the two methods. No sign of a dose-dependent uptake could be seen. The mean concentrations and 95% confidence intervals of all toluene measurements (67 values) were (10.80±1.6) μg m⁻³ for diffusive sampling and (11.3±1.6) μg m⁻³ for the BTX instrument, respectively. The overall ratio between the concentrations obtained using diffusive sampling and the BTX instrument was 0.91±0.07 (95% confidence interval). Tenax TA was found to be equal to Carbopack B for measuring benzene and toluene in this concentration range, although it has been proposed not to be optimal for benzene. There was also good agreement between the active samplers and the BTX instrument.
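
    The arithmetic that turns a diffusive sampler result into a concentration is a one-liner: the collected mass divided by the product of uptake rate and exposure time. A sketch with an illustrative collected mass, using the constant uptake rate of 0.406 cm³ min⁻¹ cited above:

        def diffusive_concentration(mass_ng, uptake_cm3_min, minutes):
            """Time-weighted average concentration (ug/m3) from the analyte mass
            collected on a diffusive sampler."""
            sampled_volume_m3 = uptake_cm3_min * minutes * 1e-6   # cm3 -> m3
            return mass_ng * 1e-3 / sampled_volume_m3             # ng -> ug

        # Illustrative one-week benzene exposure (the 20 ng is a made-up mass):
        print(diffusive_concentration(mass_ng=20.0, uptake_cm3_min=0.406,
                                      minutes=7 * 24 * 60))       # ~4.9 ug/m3

    A dose-dependent uptake rate, as discussed above, would replace the constant 0.406 cm³ min⁻¹ with a function of the accumulated dose.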

  6. How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.

    Science.gov (United States)

    Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo

    2016-01-01

    To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
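
    The conversion at the heart of this study is mechanical: a ranking of k alternatives implies one paired-comparison response for each of the k(k-1)/2 pairs. A minimal sketch of that expansion (a hypothetical helper, not the authors' code):

        from itertools import combinations

        def ranking_to_pc(ranking):
            """Expand a ranking (best first) into the implied paired-comparison
            wins, returned as (winner, loser) tuples."""
            return [(a, b) for a, b in combinations(ranking, 2)]

        # Ranking five health states implies C(5,2) = 10 paired responses:
        print(ranking_to_pc(["A", "B", "C", "D", "E"]))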

  7. Comparison of the performance of the CMS Hierarchical Condition Category (CMS-HCC) risk adjuster with the Charlson and Elixhauser comorbidity measures in predicting mortality.

    Science.gov (United States)

    Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A

    2010-08-20

    The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had statistically significant higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods, with the highest drops for the CMS-HCC method. However, the CMS-HCC still performed as well or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. The CMS-HCC model is preferred over the Charlson and Elixhauser methods.
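
    The performance metrics used in this comparison are all available from standard Python statistical libraries; the sketch below fits one logistic mortality model to synthetic data and reports the c-statistic, AIC and BIC. The predictors and coefficients are invented stand-ins for the risk adjusters compared in the study.

        import numpy as np
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 5000
        comorbidity_score = rng.normal(0, 1, n)        # stand-in risk-adjuster score
        age = rng.normal(75, 8, n)
        true_logit = -3 + 0.8 * comorbidity_score + 0.02 * (age - 75)
        died = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

        X = sm.add_constant(np.column_stack([comorbidity_score, age]))
        fit = sm.Logit(died, X).fit(disp=0)
        c_statistic = roc_auc_score(died, fit.predict(X))
        print(c_statistic, fit.aic, fit.bic)   # compare these across risk adjusters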

  8. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  9. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  10. Monitoring uterine activity during labor: a comparison of three methods

    Science.gov (United States)

    EULIANO, Tammy Y.; NGUYEN, Minh Tam; DARMANJIAN, Shalom; MCGORRAY, Susan P.; EULIANO, Neil; ONKALA, Allison; GREGG, Anthony R.

    2012-01-01

    Objective Tocodynamometry (Toco—strain gauge technology) provides contraction frequency and approximate duration of labor contractions, but suffers frequent signal dropout necessitating re-positioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information, but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all three methods of contraction detection simultaneously in laboring women. Study Design Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG and all three curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of one or more of the devices (12) or inadequate data collection duration (2). Results In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17, compared to 0.69 ± 0.27 for Toco (a statistically significant difference). Unlike Toco, EHG was not significantly affected by obesity. Conclusion Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable non-invasive alternative regardless of body habitus. PMID:23122926

  11. Monitoring uterine activity during labor: a comparison of 3 methods.

    Science.gov (United States)

    Euliano, Tammy Y; Nguyen, Minh Tam; Darmanjian, Shalom; McGorray, Susan P; Euliano, Neil; Onkala, Allison; Gregg, Anthony R

    2013-01-01

    Tocodynamometry (Toco; strain gauge technology) provides contraction frequency and approximate duration of labor contractions but suffers frequent signal dropout, necessitating repositioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all 3 methods of contraction detection simultaneously in laboring women. Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG, and all 3 curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of 1 or more of the devices (n = 12) or inadequate data collection duration (n = 2). In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17 compared with 0.69 ± 0.27 for Toco (a statistically significant difference). Unlike Toco, EHG was not significantly affected by obesity. Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable noninvasive alternative, regardless of body habitus. Copyright © 2013 Mosby, Inc. All rights reserved.

  12. Differences in case-mix can influence the comparison of standardised mortality ratios even with optimal risk adjustment: an analysis of data from paediatric intensive care.

    Science.gov (United States)

    Manktelow, Bradley N; Evans, T Alun; Draper, Elizabeth S

    2014-09-01

    The publication of clinical outcomes for consultant surgeons in 10 specialties within the NHS has, along with national clinical audits, highlighted the importance of measuring and reporting outcomes with the aim of monitoring quality of care. Such information is vital to be able to identify good and poor practice and to inform patient choice. The need to adequately adjust outcomes for differences in case-mix has long been recognised as being necessary to provide 'like-for-like' comparisons between providers. However, directly comparing values of the standardised mortality ratio (SMR) between different healthcare providers can be misleading even when the risk-adjustment perfectly quantifies the risk of a poor outcome in the reference population. An example is shown from paediatric intensive care. Using observed case-mix differences for 33 paediatric intensive care units (PICUs) in the UK and Ireland for 2009-2011, SMRs were calculated under four different scenarios where, in each scenario, all of the PICUs were performing identically for each patient type. Each scenario represented a clinically plausible difference in outcome from the reference population. Despite the fact that the outcome for any patient was the same no matter which PICU they were to be admitted to, differences between the units were seen when compared using the SMR: scenario 1, 1.07-1.21; scenario 2, 1.00-1.14; scenario 3, 1.04-1.13; scenario 4, 1.00-1.09. Even if two healthcare providers are performing equally for each type of patient, if their patient populations differ in case-mix their SMRs will not necessarily take the same value. Clinical teams and commissioners must always keep in mind this weakness of the SMR when making decisions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
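
    The artefact is easy to reproduce numerically. In the sketch below, two hypothetical units have exactly the same mortality for every patient type (excess risk confined to high-risk admissions) but admit different proportions of each type, and their SMRs against the same reference risks still differ.

        # Reference risks of death by patient type, and one identical performance
        # pattern applied in BOTH units (illustrative numbers only).
        ref_risk = {"low": 0.01, "high": 0.20}
        unit_risk = {"low": 0.01, "high": 0.30}

        def smr(case_mix, n=10_000):
            observed = sum(n * frac * unit_risk[t] for t, frac in case_mix.items())
            expected = sum(n * frac * ref_risk[t] for t, frac in case_mix.items())
            return observed / expected

        print(round(smr({"low": 0.9, "high": 0.1}), 2))  # unit A, few high-risk: 1.34
        print(round(smr({"low": 0.5, "high": 0.5}), 2))  # unit B, many high-risk: 1.48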

  13. COMPARISON OF DETERMINING METHODS REGARDING SELENIUM CONTENT IN WHEAT PLANT

    Directory of Open Access Journals (Sweden)

    Mihaela Monica Stanciu-Burileanu

    2010-01-01

    Full Text Available As a metallic chemical element, selenium has received special attention from biologists because of its dual role as a trace element that is both essential and toxic. It is an important part of enzymes that protect cells against the effects of free radicals produced during normal oxygen metabolism. Selenium is also essential for a normal immune system and thyroid gland function. The concentration of selenium in the soil, which varies by region, determines the concentration of selenium in plants growing in that soil. The purpose of this paper is to compare two methods, dry oxidation at 450°C and wet digestion (digestion with acids in high concentrations in a microwave digestion system), for determining the selenium content of wheat samples collected from the south-eastern part of Romania, namely the Bărăgan Plain and Central-South Dobrogea. Selenium separation and quantification from the obtained extracts were carried out by selective hydride generation atomic absorption spectrophotometry. With the software SURFER, a trend map of selenium distribution was drawn.

  14. A comparison of methods in estimating soil water erosion

    Directory of Open Access Journals (Sweden)

    Marisela Pando Moreno

    2012-02-01

    Full Text Available A comparison between direct field measurements and predictions of soil water erosion using two variants (the FAO and R/2 indices) of the Revised Universal Soil Loss Equation (RUSLE) was carried out in a microcatchment of 22.32 km² in Northeastern Mexico. Direct field measurements were based on a geomorphologic classification of the area, while environmental units were defined for applying the equation. Environmental units were later grouped within geomorphologic units to compare results. For the basin as a whole, erosion rates from the FAO index were statistically equal to those measured in the field, while values obtained from the R/2 index were statistically different from the rest and overestimated erosion. However, when comparing among geomorphologic units, erosion appeared overestimated in steep units and underestimated in flatter areas. The most remarkable differences in erosion rates between the direct and FAO methods were for those units where gullies have developed. In these cases, erosion was underestimated by the FAO index. Hence, it is suggested that a weighted factor for the presence of gullies should be developed and included in the RUSLE equation.
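
    For reference, the soil-loss estimate underlying both index variants is the usual RUSLE product of factors; the sketch below only shows that structure, with purely illustrative factor values (the paper's FAO and R/2 variants differ in how the erosivity term is derived).

        def rusle(R, K, LS, C, P):
            """Annual soil loss A (t/ha/yr) as rainfall erosivity R x soil
            erodibility K x slope length-steepness LS x cover C x practice P."""
            return R * K * LS * C * P

        # Illustrative values for one environmental unit (not the paper's data):
        print(rusle(R=300.0, K=0.30, LS=1.8, C=0.25, P=1.0))   # 40.5 t/ha/yr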

  15. Pyrrolizidine alkaloids in honey: comparison of analytical methods.

    Science.gov (United States)

    Kempf, M; Wittig, M; Reinhard, A; von der Ohe, K; Blacquière, T; Raezke, K-P; Michel, R; Schreier, P; Beuerle, T

    2011-03-01

    ragwort pollen were detected (0-6.3%), and in some cases extremely high PA values were detected (n = 31; ranging from 0 to 13019 µg kg⁻¹, average = 1261 or 76 µg kg⁻¹ for GC-MS and LC-MS, respectively). Here the two methods gave significantly different quantification results. The GC-MS sum parameter showed on average higher values (differing on average by a factor of 17). The main reason for the discrepancy is most likely the incomplete coverage of the J. vulgaris PA pattern. Major J. vulgaris PAs like jacobine-type PAs or erucifoline/acetylerucifoline were not available as reference compounds for the LC-MS target approach. Based on the direct comparison, both methods are considered from various perspectives and the respective individual strengths and weaknesses of each method are presented in detail.

  16. a comparison of methods in a behaviour study of the south african ...

    African Journals Online (AJOL)

    A COMPARISON OF METHODS IN A BEHAVIOUR STUDY OF THE ... Three methods are outlined in this paper and the results obtained from each method were .... There was definitely no aggressive response towards the Sky pointing mate.

  17. A comparison of confirmatory factor analysis methods : Oblique multiple group method versus confirmatory common factor method

    NARCIS (Netherlands)

    Stuive, Ilse

    2007-01-01

    Confirmatory factor analysis (CFA) is a frequently used method when researchers have a particular hypothesis about the assignment of items to one or more subtests and want to investigate whether this assignment is supported by the collected research data. The most commonly used

  18. Comparison of Memory Function and MMPI-2 Profile between Post-traumatic Stress Disorder and Adjustment Disorder after a Traffic Accident

    Science.gov (United States)

    Bae, Sung-Man; Hyun, Myoung-Ho

    2014-01-01

    Objective Differential diagnosis between post-traumatic stress disorder (PTSD) and adjustment disorder (AD) is rather difficult, but very important to the assignment of appropriate treatment and prognosis. This study investigated methods to differentiate PTSD and AD. Methods Twenty-five people with PTSD and 24 people with AD were recruited. Memory tests, the Minnesota Multiphasic Personality Inventory 2 (MMPI-2), and Beck's Depression Inventory were administered. Results There were significant decreases in immediate verbal recall and delayed verbal recognition in the participants with PTSD. The reduced memory functions of participants with PTSD were significantly influenced by depressive symptoms. The hypochondriasis, hysteria, psychopathic deviate, paranoia, schizophrenia, and post-traumatic stress disorder scales of the MMPI-2 significantly classified the PTSD and AD groups. Conclusion Our results suggest that verbal memory assessments and the MMPI-2 could be useful for discriminating between PTSD and AD. PMID:24851120

  19. Corporate Cash Holdings and Adjustment Behaviour in Chinese Firms: An Empirical Analysis Using Generalized Method of Moments

    Directory of Open Access Journals (Sweden)

    Ajid ur Rehman

    2016-05-01

    Full Text Available This study is intended to find out the motives for cash holding in Chinese firms and the theories associated with these motives. The study is unique because it not only estimates the adjustment speed of corporate cash holdings but also discusses several firm-specific factors that affect cash holdings in Chinese firms, with special reference to Chinese SOEs and NSOEs. An extensive set of panel data comprising 1,632 A-listed Chinese firms over the period from 2001 to 2013 is taken for analysis. The study reports a lower adjustment coefficient for Chinese firms compared to firms in developed nations. The study finds that the target level of cash holdings in Chinese firms is better explained by the trade-off and pecking order theories. To cope with issues of endogeneity and serial correlation, the study applies GMM and a random effects model with an added AR (autoregressive) term.
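
    The "adjustment speed" in such studies is the coefficient of a partial-adjustment model, recoverable as one minus the slope on lagged cash. The sketch below simulates a single firm's series and recovers the speed with OLS for clarity; the paper itself uses GMM precisely because the lagged dependent variable is endogenous in a short panel (all numbers are simulated).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        target, lam, T = 0.20, 0.35, 200       # target cash ratio, true speed lam
        cash = np.empty(T)
        cash[0] = 0.10
        for t in range(1, T):                  # partial adjustment toward the target
            cash[t] = cash[t - 1] + lam * (target - cash[t - 1]) + rng.normal(0, 0.01)

        fit = sm.OLS(cash[1:], sm.add_constant(cash[:-1])).fit()
        print(1 - fit.params[1])               # estimated adjustment speed, close to lam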

  20. Comparison of different surface quantitative analysis methods. Application to corium

    International Nuclear Information System (INIS)

    Guilbaud, N.; Blin, D.; Perodeaud, Ph.; Dugne, O.; Gueneau, Ch.

    2000-01-01

    very different morphologies and compositions. The upper oxide zone is fairly homogeneous, with three distinct and well contrasted phases: U6Fe (white), (U,Zr)O2-x (grey) and Zr(O) (black). The lower metallic phase is very heterogeneous and composed of a large number of phases: U6Fe, (U,Zr)O2-x, Zr(O), Fe2(U,Zr) and many phases with different compositions of the (Fe,U,Zr) ternary. The EDS and WDS global analysis methods were compared with the coupled image analysis method and with point spectroscopic analysis, which is considered to be reliable and accurate, being based on accurate rules. Global EDS and WDS could be applied to both zones. However, the coupling of image analysis and point analyses could not be accurately applied to the metallic zone because of its many phases. To obtain a valid comparison, it was necessary for every method to employ similar conditions such as sample preparation, choice of analyzed zone and magnification, and global analysis parameters. A corrective method was also applied to the global results to eliminate the influence of surface oxidation. A 10% atomic oxygen content was in fact observed in the white phase, which turned out not to be a UaFebOc oxide, but an oxidized U6Fe phase. (authors)

  1. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. Comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
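
    As a numerical companion: Gauss-Seidel is the SOR iteration with relaxation factor omega = 1, so the two convergence rates can be compared directly. The sketch below does so on a small diagonally dominant Z-matrix (non-positive off-diagonal entries) chosen purely for illustration.

        import numpy as np

        def sor(A, b, omega, tol=1e-10, max_iter=10_000):
            """Solve Ax = b by SOR; omega = 1 reduces to Gauss-Seidel."""
            n = len(b)
            x = np.zeros(n)
            for k in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    return x, k + 1
            return x, max_iter

        A = np.array([[ 4.0, -1.0, -1.0],
                      [-1.0,  4.0, -1.0],
                      [-1.0, -1.0,  4.0]])
        b = np.array([2.0, 6.0, 2.0])
        for omega in (1.0, 1.2):               # Gauss-Seidel vs over-relaxed SOR
            x, iters = sor(A, b, omega)
            print(omega, iters, x)

    Preconditioning, as studied in the paper, would transform A and b with a preconditioner before iterating; the comparison of iteration counts works the same way.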

  2. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Science.gov (United States)

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
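
    The standard way to scale paired-choice data, Thurstone's Case V, is compact enough to sketch: transform the choice proportions with the inverse normal CDF and average by row. The proportion matrix below is a toy example, not data from the paper.

        import numpy as np
        from scipy.stats import norm

        # p[i, j] = proportion of respondents choosing item i over item j.
        p = np.array([[0.50, 0.70, 0.90],
                      [0.30, 0.50, 0.75],
                      [0.10, 0.25, 0.50]])

        z = norm.ppf(p)               # Case V: unit-normal transform of proportions
        scale = z.mean(axis=1)        # each item's scale value (arbitrary origin)
        print(scale - scale.min())    # shift so the lowest-valued item scores 0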

  3. Identifying the contents of a type 1 diabetes outpatient care program based on the self-adjustment of insulin using the Delphi method.

    Science.gov (United States)

    Kubota, Mutsuko; Shindo, Yukari; Kawaharada, Mariko

    2014-10-01

    The objective of this study is to identify the items necessary for an outpatient care program based on the self-adjustment of insulin for type 1 diabetes patients. Two surveys based on the Delphi method were conducted. The survey participants were 41 certified diabetes nurses in Japan. An outpatient care program based on the self-adjustment of insulin was developed based on pertinent published work and expert opinions. There were a total of 87 survey items in the questionnaire, which was developed based on the care program mentioned earlier, covering matters such as the establishment of prerequisites and a cooperative relationship, the basics of blood glucose pattern management, learning and practice sessions for the self-adjustment of insulin, the implementation of the self-adjustment of insulin, and feedback. The participants' approval on items in the questionnaires was defined at 70%. Participants agreed on all of the items in the first survey. Four new parameters were added to make a total of 91 items for the second survey and participants agreed on the inclusion of 84 of them. Items necessary for a type 1 diabetes outpatient care program based on self-adjustment of insulin were subsequently selected. It is believed that this care program received a fairly strong approval from certified diabetes nurses; however, it will be necessary to have the program further evaluated in conjunction with intervention studies in the future. © 2014 The Authors. Japan Journal of Nursing Science © 2014 Japan Academy of Nursing Science.

  4. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    Science.gov (United States)

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues (Science, 2004, 306, 1776–1780) to assess affect, activities and time use in everyday life. We sought to validate DRM affect ratings by comparison with contemporaneous EMA ratings in a sample of 94 working women monitored over work and leisure days. Six EMA ratings of happiness, tiredness, stress, and anger/frustration were obtained over each 24 h period, and were compared with DRM ratings for the same hour, recorded retrospectively at the end of the day. Similar profiles of affect intensity were recorded with the two techniques. The between-person correlations adjusted for attenuation ranged from 0.58 (stress, working day) to 0.90 (happiness, leisure day). The strength of associations was not related to age, educational attainment, or depressed mood. We conclude that the DRM provides reasonably reliable estimates both of the intensity of affect and of variations in affect over the day, and so is a valuable instrument for the measurement of everyday experience in health and social research. PMID:21113328
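
    The "correlations adjusted for attenuation" reported above use the classical Spearman correction, which divides the observed correlation by the geometric mean of the two measures' reliabilities. A small sketch with made-up numbers:

        def disattenuated_r(r_xy, rel_x, rel_y):
            """Estimate the true-score correlation from the observed correlation
            r_xy and the reliabilities of the two measures."""
            return r_xy / (rel_x * rel_y) ** 0.5

        # Illustrative values only (not the study's estimates):
        print(disattenuated_r(r_xy=0.52, rel_x=0.80, rel_y=0.75))   # ~0.67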

  5. Comparison of two solution ways of district heating control: Using analysis methods, using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Balate, J.; Sysala, T. [Technical Univ., Zlin (Czech Republic). Dept. of Automation and Control Technology

    1997-12-31

    District Heating Systems - DHS (Centralized Heat Supply Systems - CHSS) are being developed in large cities in accordance with their growth. The systems are formed by enlarging the heat distribution networks to consumers while gradually interconnecting the heat sources built over time. The heat is distributed to the consumers through circular networks that are supplied by several cooperating heat sources, that is, by combined heat and power plants and heating plants. The complicated process of heat production and supply requires a systems approach when designing the concept of automated control. The paper compares two ways of solving this problem: using analysis methods and using artificial intelligence methods. (orig.)

  6. Comparison of two uncertainty dressing methods: SAD VS DAD

    Science.gov (United States)

    Chardon, Jérémy; Mathevet, Thibault; Le-Lay, Matthieu; Gailhard, Joël

    2014-05-01

    Hydrological Ensemble Prediction Systems (HEPSs) allow a better representation of meteorological and hydrological forecast uncertainties and improve human expertise in hydrological forecasting. An operational HEPS has been developed at EDF (French producer of electricity) since 2008 and has been used since 2010 on about a hundred watersheds in France. Depending on the hydro-meteorological situation, streamflow forecasts can be issued on a daily basis and are used to help dam management operations during floods or dam works within the river. A part of this HEPS is characterized by streamflow ensemble post-processing, where substantial human expertise is solicited. The aim of post-processing methods is to achieve better overall performance by dressing hydrological ensemble forecasts with hydrological model uncertainties. The present study compares two post-processing methods, which are based on a logarithmic representation of the residuals distribution of the Rainfall-Runoff (RR) model, based on "perfect" forcing forecasts - i.e. forecasts with observed meteorological variables as inputs. The only difference between the two post-processing methods lies in the sampling of the perfect forcing forecasts for the estimation of the residuals statistics: (i) a first method, referred to here as the Statistical Analogy Dressing (SAD) model and used for the operational HEPS, estimates beforehand the statistics of the residuals from streamflow sub-samples of quantile class and lead time, since RR model residuals are not homoscedastic; (ii) an alternative method, referred to as the Dynamical Analogy Dressing (DAD) model, estimates the statistics of the residuals using the N most similar perfect forcing forecasts. The selection of these N forecasts is based on streamflow range and variation. On a set of 20 watersheds used for operational forecasts, both models were evaluated with perfect forcing forecasts and with ensemble forecasts. Results show that both approaches ensure a good post-processing of
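
    Schematically, the dressing step shared by SAD and DAD turns one deterministic forecast into an ensemble by sampling residuals from a pool of past cases (multiplicative residuals here, consistent with the logarithmic representation described above); the two methods differ only in how that pool is selected. The sketch below uses synthetic residuals, not EDF data.

        import numpy as np

        rng = np.random.default_rng(3)
        past_log_residuals = rng.normal(0.0, 0.15, 500)   # stand-in ln(obs/forecast)

        def dress(forecast, residual_pool, n_members=50):
            """Build an ensemble around one deterministic streamflow forecast by
            sampling multiplicative (log-space) residuals from the selected pool."""
            draws = rng.choice(residual_pool, size=n_members, replace=False)
            return forecast * np.exp(draws)

        ensemble = dress(120.0, past_log_residuals)       # 120 m3/s raw forecast
        print(np.percentile(ensemble, [10, 50, 90]))      # predictive interval

    In SAD the residual pool would be the sub-sample for the forecast's quantile class and lead time; in DAD it would be the residuals of the N most similar past forecasts.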

  7. The comparison of placental removal methods on operative blood loss

    International Nuclear Information System (INIS)

    Waqar, F.; Fawad, A.

    2008-01-01

    On average, 1 litre of blood is lost during Caesarean section. Many variable techniques have been tried to reduce this blood loss. Several trials have shown the spontaneous delivery of placenta method to be superior to the manual method because of reduced intra-operative blood loss and a reduced incidence of post-operative endometritis. The main objective of our study was to compare the risk of blood loss associated with spontaneous and manual removal of the placenta during Caesarean section. This study was conducted at the Department of Obstetrics and Gynaecology, Islamic International Medical Complex, Islamabad from September 2004 to September 2005. All women undergoing elective or emergency Caesarean section were included in the study. Exclusion criteria were pregnancy below 37 weeks, severe maternal anaemia, prolonged rupture of the membranes with fever, placenta praevia, placenta accreta and clotting disorders. Patients were allocated to the two groups randomly. Group A comprised women in whom the obstetrician waited a maximum of 5 minutes for the placenta to deliver spontaneously. In group B the obstetrician manually cleaved out the placenta as soon as the infant was delivered. The primary outcome measures noted were a difference in haemoglobin of >2 g/dl (preoperatively and postoperatively), the time interval between delivery of baby and placenta, significant blood loss (>1000 cc), additional use of oxytocics, total operating time and blood transfusions. Data were analysed with SPSS. Statistical tests used for specific comparisons were the chi-square test and Student's t-test. One hundred and forty-five patients were allocated to the two groups randomly: 78 patients to group A and 67 patients to group B. Mean maternal age, birth weight, and total operating time were the same in the two groups, but blood loss as measured by a difference in haemoglobin of greater than 2 g/dl was statistically significant. Significant blood loss (>1000 cc

  8. Salary adjustments

    CERN Multimedia

    HR Department

    2008-01-01

    In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3). b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be implemented, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and the rounding effects. Human Resources Department Tel. 73566

  9. Salary adjustments

    CERN Multimedia

    HR Department

    2008-01-01

    In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, the following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3); b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be applied, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and rounding effects. Human Resources Department Tel. 73566

  10. Comparison of gas spring designs with adjustable spring characteristic for a free-piston engine; Vergleich von Gasfedervarianten mit variabler Kennlinie fuer einen Freikolbenmotor

    Energy Technology Data Exchange (ETDEWEB)

    Pohl, S.E.; Ferrari, C. [Deutsches Zentrum fuer Luft- und Raumfahrt (DLR), Stuttgart (Germany). Inst. fuer Fahrzeugkonzepte

    2007-12-15

    In this paper two different gas spring designs for a free-piston application are introduced. On the basis of thermodynamic calculations, the spring characteristics of a mass-variable and a volume-variable gas spring are analyzed for different operating points. A comparison of the spring performance indicates that the spring characteristics of the two designs match at only one operating point. Therefore, a calculation method minimizing the difference between the two spring characteristics over the entire operating range of a free-piston engine is introduced. The theoretical examination is confirmed by measurements on a gas spring test stand. (orig.)

  11. A Simplified Version of the Fuzzy Decision Method and its Comparison with the Paraconsistent Decision Method

    Science.gov (United States)

    de Carvalho, Fábio Romeu; Abe, Jair Minoro

    2010-11-01

    Two recent non-classical logics have been used for decision making: fuzzy logic and paraconsistent annotated evidential logic Et. In this paper we present a simplified version of the fuzzy decision method and its comparison with the paraconsistent one. Paraconsistent annotated evidential logic Et, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications such as information technology, robotics, artificial intelligence, production engineering and decision making. Intuitively, a formula of the logic Et has the form p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian Unitary Square (CUS). This set, equipped with an order relation similar to that of the real numbers, forms a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It attempts to systematize the study of knowledge, in particular fuzzy knowledge (whose meaning is not known) as distinguished from imprecise knowledge (whose meaning is known, but not its exact value). This logic is similar to the paraconsistent annotated one in that it attributes a numeric value (only one, not two) to each proposition; in that sense it is a one-valued logic. This number expresses the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X characterized by a function f. For each element x∈X, one has y = f(x)∈[0, 1]; the number y is called the degree of membership of x in A. Decision-making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study for a simplified
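
    The core of the paraconsistent analysis can be sketched in a few lines: from an annotation (a, b) one derives a degree of certainty and a degree of contradiction and applies a decision rule. The threshold and rule labels below are illustrative assumptions, not the paper's calibrated values.

    def para_analyze(a, b, level=0.5):
        # a = degree of favorable evidence, b = degree of contrary evidence, in [0, 1]
        h = a - b          # degree of certainty
        g = a + b - 1.0    # degree of contradiction (inconsistency when > 0)
        if h >= level:
            return h, g, "viable"
        if h <= -level:
            return h, g, "non-viable"
        return h, g, "inconclusive"

    print(para_analyze(0.8, 0.1))    # clear favorable evidence -> viable
    print(para_analyze(0.9, 0.85))   # contradictory evidence -> inconclusive, g > 0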

  12. Adjustable collimator

    International Nuclear Information System (INIS)

    Carlson, R.W.; Covic, J.; Leininger, G.

    1981-01-01

    In a rotating fan beam tomographic scanner there is included an adjustable collimator and shutter assembly. The assembly includes a fan angle collimation cylinder having a plurality of different length slots through which the beam may pass for adjusting the fan angle of the beam. It also includes a beam thickness cylinder having a plurality of slots of different widths for adjusting the thickness of the beam. Further, some of the slots have filter materials mounted therein so that the operator may select from a plurality of filters. Also disclosed is a servo motor system which allows the operator to select the desired fan angle, beam thickness and filter from a remote location. An additional feature is a failsafe shutter assembly which includes a spring biased shutter cylinder mounted in the collimation cylinders. The servo motor control circuit checks several system conditions before the shutter is rendered openable. Further, the circuit cuts off the radiation if the shutter fails to open or close properly. A still further feature is a reference radiation intensity monitor which includes a tuning-fork shaped light conducting element having a scintillation crystal mounted on each tine. The monitor is placed adjacent the collimator between it and the source with the pair of crystals to either side of the fan beam

  13. Labor productivity adjustment factors. A method for estimating labor construction costs associated with physical modifications to nuclear power plants

    International Nuclear Information System (INIS)

    Riordan, B.J.

    1986-03-01

    This report develops quantitative labor productivity adjustment factors for the performance of regulatory impact analyses (RIAs). These factors will allow analysts to modify "new construction" labor costs to account for changes in labor productivity due to differing work environments at operating reactors and at reactors with construction in progress. The technique developed in this paper relies on the Energy Economic Data Base (EEDB) for baseline estimates of the direct labor hours and/or labor costs required to perform specific tasks in a new construction environment. The labor productivity cost factors adjust for constraining conditions such as working in a radiation environment, poor access, congestion and interference, etc., which typically occur on construction tasks at operating reactors and can occur under certain circumstances at reactors under construction. While the results do not portray all aspects of labor productivity, they encompass the major work place conditions generally discernible by the NRC analysts and assign values that appear to be reasonable within the context of industry experience. 18 refs
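
    Mechanically, the technique reduces to multiplying a baseline labor-hour estimate by the applicable productivity factors. A minimal sketch follows; the factor values are invented for illustration and are not the report's.

    # Hypothetical application of multiplicative productivity adjustment factors
    # to a baseline EEDB "new construction" labor-hour estimate.
    base_labor_hours = 1000.0

    factors = {
        "radiation environment": 1.8,      # illustrative values only
        "poor access": 1.3,
        "congestion and interference": 1.2,
    }

    adjusted = base_labor_hours
    for condition, factor in factors.items():
        adjusted *= factor

    print(f"adjusted labor hours: {adjusted:.0f}")  # 1000 * 1.8 * 1.3 * 1.2 = 2808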

  14. Critical comparison between equation of motion-Green's function methods and configuration interaction methods: analysis of methods and applications

    International Nuclear Information System (INIS)

    Freed, K.F.; Herman, M.F.; Yeager, D.L.

    1980-01-01

    A description is provided of the common conceptual origins of many-body equations of motion and Green's function methods in Liouville operator formulations of the quantum mechanics of atomic and molecular electronic structure. Numerical evidence is provided to show the inadequacies of the traditional strictly perturbative approaches to these methods. Nonperturbative methods are introduced by analogy with techniques developed for handling large configuration interaction calculations and by evaluating individual matrix elements to higher accuracy. The important role of higher excitations is exhibited by the numerical calculations, and explicit comparisons are made between converged equations of motion and configuration interaction calculations for systems where a fundamental theorem requires the equality of the energy differences produced by these different approaches. (Auth.)

  15. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Directory of Open Access Journals (Sweden)

    Sandvik Leiv

    2011-04-01

    Abstract Background: The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Methods: Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results: The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions: The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
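
    The recommended Welch test is available in standard libraries; a minimal sketch with synthetic count data (the outcome probabilities are assumptions, not the paper's simulation settings) might look as follows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # two groups of event counts with outcomes in {0, 1, 2, 3}
    group_a = rng.choice([0, 1, 2, 3], size=50, p=[0.4, 0.3, 0.2, 0.1])
    group_b = rng.choice([0, 1, 2, 3], size=50, p=[0.2, 0.3, 0.3, 0.2])

    # Welch's unequal-variances t test (the paper's "Welch U test")
    t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
    print(f"difference of means = {group_a.mean() - group_b.mean():.2f}, p = {p:.4f}")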

  16. Using Multilevel Modeling to Assess Case-Mix Adjusters in Consumer Experience Surveys in Health Care

    NARCIS (Netherlands)

    Damman, Olga C.; Stubbe, Janine H.; Hendriks, Michelle; Arah, Onyebuchi A.; Spreeuwenberg, Peter; Delnoij, Diana M. J.; Groenewegen, Peter P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  17. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bä ck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2010-01-01

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods

  18. An experimental detrending approach to attributing change of pan evaporation in comparison with the traditional partial differential method

    Science.gov (United States)

    Wang, Tingting; Sun, Fubao; Xia, Jun; Liu, Wenbin; Sang, Yanfang

    2017-04-01

    In predicting how droughts and hydrological cycles will change in a warming climate, the change in atmospheric evaporative demand as measured by pan evaporation (Epan) is one crucial element to be understood. Over the last decade, the derived partial differential (PD) form of the PenPan equation has been the prevailing approach for attributing changes in Epan worldwide. However, the independence among climatic variables required by the PD approach cannot be met using long-term observations. Here we designed a series of numerical experiments to attribute changes in Epan over China by detrending each climatic variable, i.e., an experimental detrending approach, to address the inter-correlation among climate variables, and compared it with the traditional PD method. The results show that the detrending approach is superior not only for a complicated system with multiple variables and a mixing algorithm, such as the aerodynamic component (Ep,A) and Epan, but also for a simple case such as the radiative component (Ep,R), when compared with the traditional PD method. The major reason for this is the strong and significant inter-correlation of the input meteorological forcing. Very similar and accurate attribution results were achieved with the detrending approach and the PD method after eliminating the inter-correlation of the input through a randomization approach. The contributions of Rh and Ta to net radiation, and thus to Ep,R, which are overlooked by the PD method but successfully detected by the detrending approach, provide some explanation for the comparison results. We adopted the control run from the detrending approach and applied it to adjust the PD method. Much improvement was obtained, proving this adjustment to be an effective way of attributing changes in Epan. Hence, the detrending approach and the adjusted PD method are recommended for attributing changes in hydrological models to better understand and predict the water and energy cycles.
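
    The detrending logic, remove the trend in one forcing variable, rerun the model, and read that variable's contribution off the difference from the full run, can be sketched with a toy model. The linear epan_model below is a made-up stand-in for the PenPan equation, and all trends are synthetic.

    import numpy as np
    from scipy.signal import detrend

    def epan_model(ta, rh, u2):
        # arbitrary toy model, NOT the PenPan equation
        return 0.1 * ta - 0.05 * rh + 0.3 * u2

    years = np.arange(1960, 2011)
    rng = np.random.default_rng(0)
    ta = 15 + 0.03 * (years - 1960) + rng.normal(0, 0.3, years.size)   # warming
    rh = 70 - 0.10 * (years - 1960) + rng.normal(0, 1.0, years.size)   # drying
    u2 = 2.5 - 0.01 * (years - 1960) + rng.normal(0, 0.1, years.size)  # stilling

    full = epan_model(ta, rh, u2)
    no_ta_trend = epan_model(detrend(ta) + ta.mean(), rh, u2)  # Ta trend removed

    # contribution of the Ta trend to the Epan trend (slope of the difference)
    contrib_ta = np.polyfit(years, full - no_ta_trend, 1)[0]
    print(f"Ta contribution to the Epan trend: {contrib_ta:.4f} per year")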

  19. Ares I-X Launch Abort System, Crew Module, and Upper Stage Simulator Vibroacoustic Flight Data Evaluation, Comparison to Predictions, and Recommendations for Adjustments to Prediction Methodology and Assumptions

    Science.gov (United States)

    Smith, Andrew; Harrison, Phil

    2010-01-01

    The National Aeronautics and Space Administration (NASA) Constellation Program (CxP) has identified a series of tests to provide insight into the design and development of the Crew Launch Vehicle (CLV) and Crew Exploration Vehicle (CEV). Ares I-X was selected as the first suborbital development flight test to help meet CxP objectives. The Ares I-X flight test vehicle (FTV) is an early operational model of CLV, with specific emphasis on CLV and ground operation characteristics necessary to meet Ares I-X flight test objectives. The in-flight part of the test includes a trajectory to simulate maximum dynamic pressure during flight and perform a stage separation of the Upper Stage Simulator (USS) from the First Stage (FS). The in-flight test also includes recovery of the FS. The random vibration response from the Ares I-X flight will be reconstructed for a few specific locations that were instrumented with accelerometers. These recorded data will be helpful in validating and refining vibration prediction tools and methodology. Measured vibroacoustic environments associated with the lift-off and ascent phases of the Ares I-X mission will be compared with pre-flight vibration predictions. The measured flight data were given as time histories, which will be converted into power spectral density plots for comparison with the maximum predicted environments. The maximum predicted environments are documented in the Vibroacoustics and Shock Environment Data Book, AI1-SYS-ACOv4.10. Vibration predictions made using the statistical energy analysis (SEA) program VAOne will also be incorporated in the comparisons. Ascent and lift-off measured acoustics will also be compared to predictions to assess whether any discrepancies between the predicted vibration levels and measured vibration levels are attributable to inaccurate acoustic predictions. These comparisons will also be helpful in assessing whether adjustments to prediction methodologies are needed to improve agreement between the
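
    Converting an accelerometer time history to a power spectral density is typically done with Welch averaging; a generic sketch follows, where the sample rate and signal are invented, not Ares I-X flight data.

    import numpy as np
    from scipy.signal import welch

    fs = 2000.0                                  # assumed sample rate, Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    accel = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 120 * t)  # synthetic g's

    # Welch-averaged power spectral density in g^2/Hz
    f, psd = welch(accel, fs=fs, nperseg=2048)
    print(f"peak PSD at {f[np.argmax(psd)]:.0f} Hz")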

  20. Quantifying the indirect impacts of climate on agriculture: an inter-method comparison

    Science.gov (United States)

    Calvin, Kate; Fisher-Vanden, Karen

    2017-11-01

    Climate change and increases in CO2 concentration affect the productivity of land, with implications for land use, land cover, and agricultural production. Much of the literature on the effect of climate on agriculture has focused on linking projections of changes in climate to process-based or statistical crop models. However, the changes in productivity have broader economic implications that cannot be quantified in crop models alone. How important are these socio-economic feedbacks to a comprehensive assessment of the impacts of climate change on agriculture? In this paper, we attempt to measure the importance of these interaction effects through an inter-method comparison between process models, statistical models, and integrated assessment models (IAMs). We find that the impacts on crop yields vary widely between these three modeling approaches. Yield impacts generated by the IAMs are 20%-40% higher than the yield impacts generated by process-based or statistical crop models, with indirect climate effects adjusting yields by between -12% and +15% (e.g. input substitution and crop switching). The remaining effects are due to technological change.

  1. Automation of the method gamma of comparison dosimetry images

    International Nuclear Information System (INIS)

    Moreno Reyes, J. C.; Macias Jaen, J.; Arrans Lara, R.

    2013-01-01

    The objective of this work was the development of the JJGAMMA analysis software, which enables this task to be performed systematically, minimizing specialist intervention and therefore the variability due to the observer. Both benefits allow image comparison to be carried out in practice with the required frequency and objectivity. (Author)
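
    The underlying gamma comparison (the combined dose-difference/distance-to-agreement metric of Low et al.) can be sketched in one dimension; this is a naive illustration with synthetic profiles, not the JJGAMMA implementation.

    import numpy as np

    def gamma_1d(ref, ev, dx, dose_tol=0.03, dist_tol=3.0):
        # For each reference point, minimize the combined metric over the
        # evaluated profile; dx in mm, dose_tol relative to the maximum dose.
        x = np.arange(ref.size) * dx
        norm = ref.max()
        g = np.empty(ref.size)
        for i in range(ref.size):
            dose_term = ((ev - ref[i]) / (dose_tol * norm)) ** 2
            dist_term = ((x - x[i]) / dist_tol) ** 2
            g[i] = np.sqrt((dose_term + dist_term).min())
        return g

    ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)  # reference profile
    ev = np.roll(ref, 1) * 1.01                               # shifted, rescaled copy
    g = gamma_1d(ref, ev, dx=1.0)
    print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")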

  2. Simulation of a method for determining one-dimensional {sup 137}Cs distribution using multiple gamma spectroscopic measurements with an adjustable cylindrical collimator and center shield

    Energy Technology Data Exchange (ETDEWEB)

    Whetstone, Z.D.; Dewey, S.C. [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States); Kearfott, K.J., E-mail: kearfott@umich.ed [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States)

    2011-05-15

    With multiple in situ gamma spectroscopic measurements obtained with an adjustable cylindrical collimator and a circular shield, the arbitrary one-dimensional distribution of radioactive material can be determined. The detector responses are theoretically calculated, field measurements obtained, and a system of equations relating detector response to measurement geometry and activity distribution solved to estimate the distribution. This paper demonstrates the method by simulating multiple scenarios and providing analysis of the system conditioning.
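
    The unfolding step, solving a system that relates detector responses to the binned activity distribution, can be illustrated with a least-squares sketch; the response matrix below is invented, not the paper's theoretically calculated responses.

    import numpy as np

    rng = np.random.default_rng(3)
    n_geom, n_depth = 8, 5
    # A[i, j]: response of collimator geometry i to unit activity in depth bin j
    A = np.exp(-rng.uniform(0.1, 0.5, (n_geom, 1)) * np.arange(n_depth))

    s_true = np.array([1.0, 0.8, 0.5, 0.2, 0.1])    # true 137Cs depth profile
    r = A @ s_true + rng.normal(0, 0.01, n_geom)    # noisy measurements

    s_hat, *_ = np.linalg.lstsq(A, r, rcond=None)   # unfolded distribution
    print("estimated profile:", np.round(s_hat, 2))
    print(f"condition number of the system: {np.linalg.cond(A):.1f}")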

  3. Effects of Statin Treatment on Inflammation and Cardiac Function in Heart Failure: An Adjusted Indirect Comparison Meta-Analysis of Randomized Trials.

    Science.gov (United States)

    Bonsu, Kwadwo Osei; Reidpath, Daniel Diamond; Kadirvelu, Amudha

    2015-12-01

    Statins are known to prevent heart failure (HF). However, it is unclear whether statins as a class or by type (lipophilic or hydrophilic) improve outcomes of established HF. The current meta-analysis was performed to compare the treatment effects of lipophilic and hydrophilic statins on inflammation and cardiac function in HF. Outcomes were indicators of cardiac function [changes in left ventricular ejection fraction (LVEF) and B-type natriuretic peptide (BNP)] and inflammation [changes in highly sensitive C-reactive protein (hsCRP) and interleukin-6 (IL-6)]. We conducted a search of PubMed, EMBASE, and the Cochrane databases up to December 31, 2014 for randomized controlled trials (RCTs) of statin versus placebo in patients with HF. RCTs with their respective extracted information were dichotomized by the statin type evaluated and analyzed separately. Outcomes were pooled with a random-effects approach, producing standardized mean differences (SMD) for each statin type. Using these pooled estimates, we performed adjusted indirect comparisons for each outcome. Data from 6214 patients from 19 trials were analyzed. Lipophilic statin treatment was superior to hydrophilic statin treatment with regard to follow-up LVEF (SMD, 4.54; 95% CI, 4.16-4.91). Overall, lipophilic statins produce greater treatment effects on cardiac function and inflammation compared with hydrophilic statins in patients with HF. Until data from an adequately powered head-to-head trial of the statin types are available, our meta-analysis brings clinicians and researchers a step closer to answering the question of which statin type, lipophilic or hydrophilic, is associated with better outcomes in HF. © 2015 John Wiley & Sons Ltd.

  4. Method comparison of ultrasound and kilovoltage x-ray fiducial marker imaging for prostate radiotherapy targeting

    International Nuclear Information System (INIS)

    Fuller, Clifton David; Jr, Charles R Thomas; Schwartz, Scott; Golden, Nanalei; Ting, Joe; Wong, Adrian; Erdogmus, Deniz; Scarbrough, Todd J

    2006-01-01

    Several measurement techniques have been developed to address the capability for target volume reduction via target localization in image-guided radiotherapy; among these have been ultrasound (US) and fiducial marker (FM) software-assisted localization. In order to assess interchangeability between the methods, US and FM localization were compared using established techniques for determining agreement between measurement methods when a 'gold-standard' comparator does not exist, after performing both techniques daily on a sequential series of patients. At least 3 days prior to CT simulation, four gold seeds were placed within the prostate. FM software-assisted localization utilized the ExacTrac X-Ray 6D (BrainLab AG, Germany) kVp x-ray image acquisition system to determine prostate position; US prostate targeting was performed on each patient using the SonArray (Varian, Palo Alto, CA). Patients were aligned daily using laser alignment of skin marks. Directional shifts were then calculated by each respective system in the X, Y and Z dimensions before each daily treatment fraction, prior to any treatment or couch adjustment, as well as a composite vector of displacement. Directional shift agreement in each axis was compared using Altman-Bland limits of agreement, Lin's concordance coefficient with Partik's grading schema, and Deming orthogonal bias-weighted correlation methodology. A total of 1019 software-assisted shifts were suggested by US and FM in 39 patients. The 95% limits of agreement in the X, Y and Z axes were ±9.4 mm, ±11.3 mm and ±13.4 mm, respectively. Three-dimensionally, measurements agreed within 13.4 mm in 95% of all paired measures. In all axes, concordance was graded as 'poor' or 'unacceptable'. Deming regression detected proportional bias in both directional axes and three-dimensional vectors. Our data suggest substantial differences between US and FM image-guided measures and subsequent suggested directional shifts. Analysis reveals that the vast majority of
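
    The Altman-Bland limits of agreement used here are straightforward to compute; a sketch with synthetic paired shifts follows (the numbers are illustrative, not the study's data).

    import numpy as np

    def limits_of_agreement(a, b):
        # 95% Altman-Bland limits for paired measurements a and b (same units)
        diff = np.asarray(a) - np.asarray(b)
        bias = diff.mean()
        half = 1.96 * diff.std(ddof=1)
        return bias, bias - half, bias + half

    rng = np.random.default_rng(7)
    fm = rng.normal(0, 3, 200)             # FM-suggested shifts, mm
    us = fm + rng.normal(0.5, 4, 200)      # US shifts: added bias and spread

    bias, lo, hi = limits_of_agreement(us, fm)
    print(f"bias = {bias:.1f} mm, 95% limits of agreement = [{lo:.1f}, {hi:.1f}] mm")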

  5. Comparison of System Identification Methods using Ambient Bridge Test Data

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune; Peeters, B.

    1999-01-01

    In this paper the performance of four different system identification methods is compared using operational data obtained from an ambient vibration test of the Swiss Z24 highway bridge. The four methods are the frequency domain based peak-picking method, the polyreference LSCE method, the stochastic...

  6. A comparison theorem for the SOR iterative method

    Science.gov (United States)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than that of the SOR method and the Gauss-Seidel method if the relaxation parameter ω ∈ (0, 1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
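
    For reference, a plain SOR sweep (of which Gauss-Seidel is the omega = 1 special case) looks as follows; the test system is a small diagonally dominant example, and the IMGS preconditioning itself is not reproduced here.

    import numpy as np

    def sor(A, b, omega, tol=1e-10, max_iter=10000):
        # Successive over-relaxation; omega = 1 recovers Gauss-Seidel.
        x = np.zeros_like(b, dtype=float)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(A.shape[0]):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x

    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([2.0, 4.0, 10.0])
    for omega in (1.0, 0.9):       # Gauss-Seidel, and an omega in (0, 1]
        print(omega, sor(A, b, omega))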

  7. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    2011-03-01

    Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI: 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI: 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI: 0, 1.3] for Kenya (2007). Incidence trends were similar for all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine the best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR to guide whether incidence assays can produce reliable estimates of national HIV incidence.

  8. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    Science.gov (United States)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  9. A Robust and Fast Method to Compute Shallow States without Adjustable Parameters: Simulations for a Silicon-Based Qubit

    Science.gov (United States)

    Debernardi, Alberto; Fanciulli, Marco

    Within the framework of the envelope function approximation we have computed - without adjustable parameters and with a reduced computational effort due to analytical expression of relevant Hamiltonian terms - the energy levels of the shallow P impurity in silicon and the hyperfine and superhyperfine splitting of the ground state. We have studied the dependence of these quantities on the applied external electric field along the [001] direction. Our results reproduce correctly the experimental splitting of the impurity ground states detected at zero electric field and provide reliable predictions for values of the field where experimental data are lacking. Further, we have studied the effect of confinement of a shallow state of a P atom at the center of a spherical Si-nanocrystal embedded in a SiO2 matrix. In our simulations the valley-orbit interaction of a realistically screened Coulomb potential and of the core potential are included exactly, within the numerical accuracy due to the use of a finite basis set, while band-anisotropy effects are taken into account within the effective-mass approximation.

  10. Paired comparisons analysis: an axiomatic approach to ranking methods

    NARCIS (Netherlands)

    Gonzalez-Diaz, J.; Hendrickx, Ruud; Lohmann, E.R.M.A.

    2014-01-01

    In this paper we present an axiomatic analysis of several ranking methods for general tournaments. We find that the ranking method obtained by applying maximum likelihood to the (Zermelo-)Bradley-Terry model, the most common method in statistics and psychology, is one of the ranking methods that

  11. Adjustment Criterion and Algorithm in Adjustment Model with Uncertain

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of the least-squares adjustment, the uncertainty adjustment and the total least-squares adjustment. Existing error theory is extended with a new observational data processing method for uncertainty.

  12. Adjustment of a direct method for the determination of the Pu-239 body burden in man by X-ray detection of U-235

    International Nuclear Information System (INIS)

    Boulay, P.

    1968-04-01

    The use of Pu-239 on a larger scale poses a problem for the measurement of contamination by aerosols at the lung level. A method of direct measurement of the Pu-239 lung burden is possible thanks to the use of a large-area-window proportional counter. A counter of this type has been specially built for this purpose. The adjustment of the apparatus provides adequate sensitivity to detect a contamination at the maximum permissible body burden level. In addition, a method for individual 'internal calibration' with a plutonium mock-up, protactinium-233, is reported. (author) [fr

  13. On performing of interference technique based on self-adjusting Zernike filters (SA-AVT method) to investigate flows and validate 3D flow numerical simulations

    Science.gov (United States)

    Pavlov, Al. A.; Shevchenko, A. M.; Khotyanovsky, D. V.; Pavlov, A. A.; Shmakov, A. S.; Golubev, M. P.

    2017-10-01

    We present a method for, and results of, the determination of the field of integral density in the flow structure corresponding to the Mach interaction of shock waves at Mach number M = 3. The optical diagnostics of the flow were performed using an interference technique based on self-adjusting Zernike filters (the SA-AVT method). Numerical simulations were carried out using the CFS3D program package for solving the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density along the path of the probing radiation in one direction of 3D flow transillumination in the region of Mach interaction of shock waves were obtained for the first time.

  14. Comparison of different dose calculation methods for irregular photon fields

    International Nuclear Information System (INIS)

    Zakaria, G.A.; Schuette, W.

    2000-01-01

    In this work, four calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast and the pencil-beam method of Ahnesjoe) are introduced to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted Y-field and different blocked fields for 4 and 10 MV photon energies. The results are compared to those of measurements in a water phantom. The Clarkson and pencil-beam methods proved to be of comparable accuracy. Both methods are distinguished by minimal deviations and are applied in our clinical routine. The Wrede and beam-zone methods deliver useful results on the central beam axis but show larger deviations when calculating points off the central axis. (orig.) [de
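
    Clarkson sector integration reduces an irregular field to an average of a circular-field scatter function over the sector radii from the calculation point to the field edge. A schematic sketch of that averaging step follows; the scatter function and radii are made up, not clinical beam data, and the primary dose component is omitted.

    import numpy as np

    def clarkson_scatter(radii_mm, sar):
        # Average the circular-field scatter function over all sector radii.
        return np.mean([sar(r) for r in radii_mm])

    # toy scatter-air ratio for a circular field of radius r (saturating curve)
    sar = lambda r: 0.25 * (1.0 - np.exp(-r / 60.0))

    # radii to the edge of an irregular (e.g. mantle-like) field, one per
    # 10-degree sector -> 36 sectors
    theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    radii = 80.0 + 30.0 * np.sin(theta)

    print(f"mean scatter contribution: {clarkson_scatter(radii, sar):.4f}")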

  15. Comparison of Electronic Data Capture (EDC) with the Standard Data Capture Method for Clinical Trial Data

    Science.gov (United States)

    Walther, Brigitte; Hossin, Safayet; Townend, John; Abernethy, Neil; Parker, David; Jeffries, David

    2011-01-01

    Background Traditionally, clinical research studies rely on collecting data with case report forms, which are subsequently entered into a database to create electronic records. Although well established, this method is time-consuming and error-prone. This study compares four electronic data capture (EDC) methods with the conventional approach with respect to duration of data capture and accuracy. It was performed in a West African setting, where clinical trials involve data collection from urban, rural and often remote locations. Methodology/Principal Findings Three types of commonly available EDC tools were assessed in face-to-face interviews: netbook, PDA, and tablet PC. EDC performance during telephone interviews via mobile phone was evaluated as a fourth method. The Graeco Latin square study design allowed comparison of all four methods to standard paper-based recording followed by data double entry while controlling simultaneously for possible confounding factors such as interview order, interviewer and interviewee. Over a study period of three weeks the error rates decreased considerably for all EDC methods. In the last week of the study the data accuracy for the netbook (5.1%, CI95%: 3.5–7.2%) and the tablet PC (5.2%, CI95%: 3.7–7.4%) was not significantly different from the accuracy of the conventional paper-based method (3.6%, CI95%: 2.2–5.5%), but error rates for the PDA (7.9%, CI95%: 6.0–10.5%) and telephone (6.3%, CI95%: 4.6–8.6%) remained significantly higher. While EDC-interviews take slightly longer, data become readily available after download, making EDC more time effective. Free text and date fields were associated with higher error rates than numerical, single select and skip fields. Conclusions EDC solutions have the potential to produce similar data accuracy compared to paper-based methods. Given the considerable reduction in the time from data collection to database lock, EDC holds the promise to reduce research-associated costs

  16. Comparison of electronic data capture (EDC) with the standard data capture method for clinical trial data.

    Directory of Open Access Journals (Sweden)

    Brigitte Walther

    BACKGROUND: Traditionally, clinical research studies rely on collecting data with case report forms, which are subsequently entered into a database to create electronic records. Although well established, this method is time-consuming and error-prone. This study compares four electronic data capture (EDC) methods with the conventional approach with respect to duration of data capture and accuracy. It was performed in a West African setting, where clinical trials involve data collection from urban, rural and often remote locations. METHODOLOGY/PRINCIPAL FINDINGS: Three types of commonly available EDC tools were assessed in face-to-face interviews: netbook, PDA, and tablet PC. EDC performance during telephone interviews via mobile phone was evaluated as a fourth method. The Graeco Latin square study design allowed comparison of all four methods to standard paper-based recording followed by data double entry while controlling simultaneously for possible confounding factors such as interview order, interviewer and interviewee. Over a study period of three weeks the error rates decreased considerably for all EDC methods. In the last week of the study the data accuracy for the netbook (5.1%, CI95%: 3.5-7.2%) and the tablet PC (5.2%, CI95%: 3.7-7.4%) was not significantly different from the accuracy of the conventional paper-based method (3.6%, CI95%: 2.2-5.5%), but error rates for the PDA (7.9%, CI95%: 6.0-10.5%) and telephone (6.3%, CI95%: 4.6-8.6%) remained significantly higher. While EDC-interviews take slightly longer, data become readily available after download, making EDC more time effective. Free text and date fields were associated with higher error rates than numerical, single select and skip fields. CONCLUSIONS: EDC solutions have the potential to produce similar data accuracy compared to paper-based methods. Given the considerable reduction in the time from data collection to database lock, EDC holds the promise to reduce research

  17. Response Adjusted for Days of Antibiotic Risk (RADAR): evaluation of a novel method to compare strategies to optimize antibiotic use.

    Science.gov (United States)

    Schweitzer, V A; van Smeden, M; Postma, D F; Oosterheert, J J; Bonten, M J M; van Werkhoven, C H

    2017-12-01

    The Response Adjusted for Days of Antibiotic Risk (RADAR) statistic was proposed to improve the efficiency of trials comparing antibiotic stewardship strategies to optimize antibiotic use. We studied the behaviour of RADAR in a non-inferiority trial in which a β-lactam monotherapy strategy (n = 656) was non-inferior to fluoroquinolone monotherapy (n = 888) for patients with moderately severe community-acquired pneumonia. Patients were ranked according to clinical outcome, using five or eight categories, and antibiotic use. RADAR was calculated as the probability that the β-lactam group had a more favourable ranking than the fluoroquinolone group. To investigate the sensitivity of RADAR to detrimental clinical outcome we simulated increasing rates of 90-day mortality in the β-lactam group and performed the RADAR and non-inferiority analysis. The RADAR of the β-lactam group compared with the fluoroquinolone group was 60.3% (95% CI 57.9%-62.7%) using five and 58.4% (95% CI 56.0%-60.9%) using eight clinical outcome categories, all in favour of β-lactam. Sample sizes for RADAR were 38% (250/653) and 89% (580/653) of the non-inferiority sample size calculation, using five or eight clinical outcome categories, respectively. With simulated mortality rates, loss of non-inferiority of the β-lactam group occurred at a relative risk of 1.125 in the conventional analysis, whereas using RADAR the β-lactam group lost superiority at a relative risk of mortality of 1.25 and 1.5, with eight and five clinical outcome categories, respectively. RADAR favoured β-lactam over fluoroquinolone therapy for community-acquired pneumonia. Although RADAR required fewer patients than conventional non-inferiority analysis, the statistic was less sensitive to detrimental outcomes. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
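
    The core of a RADAR-style statistic, the probability that a randomly chosen patient from one arm ranks more favorably than one from the other, can be sketched as a pairwise win fraction; the toy ranks below are invented, not trial data, and the real statistic's ranking rules are richer.

    import numpy as np

    def radar(rank_a, rank_b):
        # P(random patient in A ranks better than one in B); ties count half.
        a = np.asarray(rank_a)[:, None]
        b = np.asarray(rank_b)[None, :]
        wins = (a < b).sum() + 0.5 * (a == b).sum()   # lower rank = better here
        return wins / (a.size * b.size)

    rng = np.random.default_rng(5)
    # composite ranks combining outcome category and antibiotic use (smaller = better)
    beta_lactam = rng.integers(1, 6, 656) + rng.random(656)
    fluoroquinolone = rng.integers(1, 6, 888) + 0.2 + rng.random(888)

    print(f"RADAR = {radar(beta_lactam, fluoroquinolone):.1%}")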

  18. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, May 2016

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Cabellos De Francisco, Oscar; Beck, Bret; Ignatyuk, Anatoly V.; Palmiotti, Giuseppe; Grudzevich, Oleg T.; Salvatores, Massimo; Chadwick, Mark; Pelloni, Sandro; Diez De La Obra, Carlos Javier; Wu, Haicheng; Sobes, Vladimir; Rearden, Bradley T.; Yokoyama, Kenji; Hursin, Mathieu; Penttila, Heikki; Kodeli, Ivan-Alexander; Plevnik, Lucijan; Plompen, Arjan; Gabrielli, Fabrizio; Leal, Luiz Carlos; Aufiero, Manuele; Fiorito, Luca; Hummel, Andrew; Siefman, Daniel; Leconte, Pierre

    2016-05-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. WPEC subgroup 40-CIELO (Collaborative International Evaluated Library Organization) provides a new working paradigm to facilitate evaluated nuclear reaction data advances. It brings together experts from across the international nuclear reaction data community to identify and document discrepancies among existing evaluated data libraries, measured data, and model calculation interpretations, and aims to make progress in reconciling these discrepancies to create more accurate ENDF-formatted files. SG40-CIELO focusses on 6 important isotopes: 1H, 16O, 56Fe, 235,238U, 239Pu. This document is the proceedings of the seventh formal Subgroup 39 meeting and of the Joint SG39+SG40 Session held at the NEA, OECD Conference Center, Paris, France on 10-11 May 2016. It comprises a Summary Record of the meeting, and all the available presentations (slides) given by the participants: A - Welcome and actions review (Oscar CABELLOS); B - Methods: - XGPT: uncertainty propagation and data assimilation from continuous energy covariance matrix and resonance parameters covariances (Manuele AUFIERO); - Optimal experiment utilization (REWINDing PIA), (G. Palmiotti); C - Experiment analysis, sensitivity calculations and benchmarks: - Tripoli-4 analysis of SEG experiments (Andrew HUMMEL); - Tripoli-4 analysis of BERENICE experiments (P. DUFAY, Cyrille DE SAINT JEAN); - Preparation of sensitivities of k-eff, beta-eff and shielding benchmarks for adjustment exercise (Ivo KODELI); - SA and

  19. Comparison of five actigraphy scoring methods with bipolar disorder.

    Science.gov (United States)

    Boudebesse, Carole; Leboyer, Marion; Begley, Amy; Wood, Annette; Miewald, Jean; Hall, Martica; Frank, Ellen; Kupfer, David; Germain, Anne

    2013-01-01

    The goal of this study was to compare five actigraphy scoring methods in a sample of 18 remitted patients with bipolar disorder. Actigraphy records were processed using five different scoring methods: relying on the sleep diary; on the event-marker; on the software-provided automatic algorithm; on the automatic algorithm supplemented by the event-marker; and on visual inspection (VI) only. The algorithm and VI methods differed from the other methods on many actigraphy parameters of interest. In particular, the algorithm method yielded longer sleep duration, and the VI method yielded shorter sleep latency, compared with the other methods. The present findings provide guidance for the selection of a signal processing method based on the sleep parameters of interest, time-cue sources and availability, and the related scoring time costs of the study.

  20. Comparison of methods for estimating carbon in harvested wood products

    International Nuclear Information System (INIS)

    Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel

    2009-01-01

    There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method) and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from the estimates obtained with the other methods, because this method tends to overestimate carbon accumulation with the stock-change and production approaches and tends to underestimate carbon accumulation with the atmospheric-flow approach. The estimates of carbon accumulation provided by the GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in this order. A sensitivity analysis showed that using the 'best' available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)
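
    GPG-style estimates of this kind typically rest on a first-order decay stock model; a sketch of that recursion follows, where the half-life and inflows are illustrative assumptions, not the study's data or the GPG worksheets.

    import math

    def fod_carbon_stock(inflows, half_life_years, c0=0.0):
        # First-order decay: C(i+1) = exp(-k)*C(i) + (1 - exp(-k))/k * inflow(i),
        # with k = ln 2 / half-life; returns the stock after each year.
        k = math.log(2) / half_life_years
        stock, stocks = c0, []
        for inflow in inflows:
            stock = math.exp(-k) * stock + (1 - math.exp(-k)) / k * inflow
            stocks.append(stock)
        return stocks

    # constant inflow of 100 kt C/yr into a pool with an assumed 35-year half-life
    stocks = fod_carbon_stock([100.0] * 50, half_life_years=35.0)
    print(f"stock change in year 50: {stocks[-1] - stocks[-2]:.1f} kt C/yr")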

  1. Preliminary comparison of different reduction methods of graphene ...

    Indian Academy of Sciences (India)

    The exceptional properties of graphene have driven the search for a simple, green, and efficient method for its mass production for diverse applications. Chemical reduction of GO sheets is one such route, and this work offers a preliminary comparison of different reduction methods.

  2. Qualitative Comparison of Contraction-Based Curve Skeletonization Methods

    NARCIS (Netherlands)

    Sobiecki, André; Yasan, Haluk C.; Jalba, Andrei C.; Telea, Alexandru C.

    2013-01-01

    In recent years, many new methods have been proposed for extracting curve skeletons of 3D shapes, using a mesh-contraction principle. However, it is still unclear how these methods perform with respect to each other, and with respect to earlier voxel-based skeletonization methods, from the viewpoint

  3. Comparison of three methods to assess individual skeletal maturity.

    Science.gov (United States)

    Pasciuti, Enzo; Franchi, Lorenzo; Baccetti, Tiziano; Milani, Silvano; Farronato, Giampietro

    2013-09-01

    The knowledge of facial growth and development is fundamental to determining the optimal timing of different treatment procedures in the growing patient. The aims were to analyze the reproducibility of three methods of assessing individual skeletal maturity and to evaluate the degree of concordance among them. In all, 100 growing subjects were enrolled to test three methods: the hand-wrist method (WRI), the cervical vertebral maturation method (CVM), and the medial phalanges of the third finger method (MP3). Four operators determined the skeletal maturity of the subjects to evaluate the reproducibility of each method. After 30 days the operators repeated the analysis to assess the repeatability of each method. Finally, one operator examined all subjects' radiographs to detect any concordance among the three methods. The weighted kappa values for inter-operator variability were 0.94, 0.91, and 0.90 for the WRI, CVM, and MP3 methods, respectively. The weighted kappa values for intra-operator variability were 0.92, 0.91, and 0.92 for the WRI, CVM, and MP3 methods, respectively. The three methods revealed a high degree of repeatability and reproducibility. Complete agreement among the three methods was observed in 70% of the analyzed sample. The CVM method has the advantage of not requiring an additional radiograph. The MP3 method is a simple and practical alternative as it requires only a standard dental x-ray device.
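
    Weighted kappa of the kind reported here is a one-liner with standard tooling; the simulated stage assignments below are illustrative only.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(11)
    op1 = rng.integers(1, 7, 100)                                 # stages 1-6
    op2 = np.clip(op1 + rng.choice([-1, 0, 0, 0, 1], 100), 1, 6)  # mostly agrees

    # linear weights penalize larger stage disagreements more heavily
    print(f"weighted kappa = {cohen_kappa_score(op1, op2, weights='linear'):.2f}")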

  4. Comparison study on cell calculation method of fast reactor

    International Nuclear Information System (INIS)

    Chiba, Gou

    2002-10-01

    Effective cross sections obtained by cell calculations are used in core calculations in current deterministic methods. It is therefore important to calculate the effective cross sections accurately, and several methods have been proposed. In this study, some of the methods are compared with each other using a continuous-energy Monte Carlo method as a reference. The results show that the table look-up method used at the Japan Nuclear Cycle Development Institute (JNC) sometimes shows differences of over 10% in effective microscopic cross sections and is inferior to the sub-group method. The problem was overcome by introducing a new nuclear constant system developed at JNC, in which an ultra-fine energy group library is used. The system can also deal with resonance interaction effects between nuclides, which cannot be considered by the other methods. In addition, a new method was proposed to calculate effective cross sections accurately for power reactor fuel subassemblies, where the new nuclear constant system cannot be applied. This method uses the sub-group method and the ultra-fine energy group collision probability method. The microscopic effective cross sections obtained by this method agree with the reference values within a 5% difference. (author)

  5. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Science.gov (United States)

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  6. Comparison of test methods for mould growth in buildings

    DEFF Research Database (Denmark)

    Bonderup, Sirid; Gunnarsen, Lars Bo; Knudsen, Sofie Marie

    2016-01-01

    The purpose of this work is to compare a range of test methods and kits for assessing whether a building structure is infested by mould fungi. A further purpose is to evaluate whether air-based methods for sampling fungal emissions provide information qualifying decisions concerning renovation needs. This is of importance when hidden-surface testing would require destructive measures and subsequent renovation. After identifying the methods available on the Danish market for assessing mould growth in dwellings, a case study was conducted to test the usefulness of the methods in four... The methods measure different aspects relating to mould growth and vary in selectivity and precision. The two types of air samples indicated low levels of mould growth, even where the results of the other methods indicated high to moderate growth. With methods based on culture and DNA testing some differences

  7. Comparison of electric field exposure measurement methods under power lines

    International Nuclear Information System (INIS)

    Korpinen, L.; Kuisti, H.; Tarao, H.; Paeaekkoenen, R.; Elovaara, J.

    2014-01-01

    The objective of the study was to investigate extremely low frequency (ELF) electric field exposure measurement methods under power lines. The authors compared two different methods under power lines: in Method A, the sensor was placed on a tripod; in Method B, the measurer held the meter horizontally so that its distance from him/her was at least 1.5 m. The study includes 20 measurements at three locations under 400 kV power lines. The authors used two commercial three-axis meters, EFA-3 and EFA-300. In the statistical analyses, they did not find significant differences between Methods A and B. However, in the future it is important to take into account that measurement methods can, in some cases, influence ELF electric field measurement results, and it is important to report the methods used so that it is possible to repeat the measurements. (authors)

  8. Instrumental and statistical methods for the comparison of class evidence

    Science.gov (United States)

    Liszewski, Elisa Anne

    Trace evidence is a major field within forensic science. Association of trace evidence samples can be problematic due to sample heterogeneity and a lack of quantitative criteria for comparing spectra or chromatograms. The aim of this study is to evaluate different types of instrumentation for their ability to discriminate among samples of various types of trace evidence. Chemometric analysis, including techniques such as Agglomerative Hierarchical Clustering, Principal Components Analysis, and Discriminant Analysis, was employed to evaluate instrumental data. First, automotive clear coats were analyzed by using microspectrophotometry to collect UV absorption data. In total, 71 samples were analyzed with classification accuracy of 91.61%. An external validation was performed, resulting in a prediction accuracy of 81.11%. Next, fiber dyes were analyzed using UV-Visible microspectrophotometry. While several physical characteristics of cotton fiber can be identified and compared, fiber color is considered to be an excellent source of variation, and thus was examined in this study. Twelve dyes were employed, some being visually indistinguishable. Several different analyses and comparisons were done, including an inter-laboratory comparison and external validations. Lastly, common plastic samples and other polymers were analyzed using pyrolysis-gas chromatography/mass spectrometry, and their pyrolysis products were then analyzed using multivariate statistics. The classification accuracy varied dependent upon the number of classes chosen, but the plastics were grouped based on composition. The polymers were used as an external validation and misclassifications occurred with chlorinated samples all being placed into the category containing PVC.

  9. Comparisons of ratchetting analysis methods using RCC-M, RCC-MR and ASME codes

    International Nuclear Information System (INIS)

    Yang Yu; Cabrillat, M.T.

    2005-01-01

    The present paper compares the simplified ratcheting analysis methods used in RCC-M, RCC-MR and ASME with some examples. First, comparisons of the methods in RCC-M and the efficiency diagram in RCC-MR are investigated. A special method is used to describe these two approaches with curves in one coordinate system, and their different degrees of conservatism are demonstrated. The RCC-M method is also interpreted in terms of SR (second ratio) and v (efficiency index), which are used in RCC-MR. Hence, the two methods can easily be compared by taking SR as the abscissa and v as the ordinate and plotting the two curves. Second, comparisons of the efficiency curve in RCC-MR and the methods in ASME-NH Appendix T are investigated, with significant creep. Finally, two practical evaluations are performed to illustrate the comparisons of the aforementioned methods. (authors)

  10. Application of adjustment calculus in the nodeless Trefftz method for a problem of two-dimensional temperature field of the boiling liquid flowing in a minichannel

    Directory of Open Access Journals (Sweden)

    Hożejowska Sylwia

    2014-03-01

    The paper presents an application of the nodeless Trefftz method to calculate the temperature of the heating foil and the insulating glass pane during continuous flow of a refrigerant along a vertical minichannel. The numerical computations refer to an experiment in which the refrigerant (FC-72) enters a rectangular minichannel under controlled pressure and temperature. Initially its temperature is below the boiling point. During the flow it is heated by a heating foil. Thermosensitive liquid crystals allow a two-dimensional temperature field in the foil to be obtained. Since the nodeless Trefftz method performs very well on such problems, it was chosen as the numerical method to approximate the two-dimensional temperature distributions in the protecting glass and the heating foil. Because the temperature of the refrigerant was known, it was also possible to evaluate the heat transfer coefficient at the foil-refrigerant interface. To improve the numerical results, the nodeless Trefftz method was combined with adjustment calculus. Adjustment calculus allowed the measurements to be smoothed and the measurement errors to be decreased. As with the measurement errors, the error of the heat transfer coefficient decreased.

  11. A Method and a Model for Describing Competence and Adjustment: A Preschool Version of the Classroom Behavior Inventory.

    Science.gov (United States)

    Schaefer, Earl S.; Edgerton, Marianna D.

    A preschool version of the Classroom Behavior Inventory which provides a method for collecting valid data on a child's classroom behavior from day care and preschool teachers, was developed to complement the earlier form which was developed and validated for elementary school populations. The new version was tested with a pilot group of twenty-two…

  12. Using Case-Mix Adjustment Methods To Measure the Effectiveness of Substance Abuse Treatment: Three Examples Using Client Employment Outcomes.

    Science.gov (United States)

    Koenig, Lane; Fields, Errol L.; Dall, Timothy M.; Ameen, Ansari Z.; Harwood, Henrick J.

    This report demonstrates three applications of case-mix methods using regression analysis. The results are used to assess the relative effectiveness of substance abuse treatment providers. The report also examines the ability of providers to improve client employment outcomes, an outcome domain relatively unexamined in the assessment of provider…

  13. Improved Conjugate Gradient Bundle Adjustment of Dunhuang Wall Painting Images

    Science.gov (United States)

    Hu, K.; Huang, X.; You, H.

    2017-09-01

    Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the structure of the coefficient matrix of the reduced normal equation is banded-bordered, making the solution of the bundle adjustment complex. In this paper, the Conjugate Gradient Bundle Adjustment (CGBA) method is deduced by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix, as well as to accelerate the iteration rate of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method can effectively overcome the ill-conditioning of the normal equation and considerably improve the computational efficiency of bundle adjustment with additional parameters, while maintaining the actual accuracy.
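
    A minimal sketch of preconditioned conjugate gradients on a stand-in for the reduced normal equations. SciPy ships incomplete LU rather than the paper's improved incomplete Cholesky, so ILU plays the preconditioner's role here, and the test matrix is a placeholder:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # SPD stand-in
        b = np.ones(n)

        ilu = spla.spilu(A, drop_tol=1e-4)          # incomplete factorization of A
        M = spla.LinearOperator((n, n), ilu.solve)  # wrap it as a preconditioner

        x, info = spla.cg(A, b, M=M)                # preconditioned conjugate gradients
        print("converged" if info == 0 else f"cg returned {info}")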

  14. IMPROVED CONJUGATE GRADIENT BUNDLE ADJUSTMENT OF DUNHUANG WALL PAINTING IMAGES

    Directory of Open Access Journals (Sweden)

    K. Hu

    2017-09-01

    Full Text Available Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the structure of the coefficient matrix of the reduced normal equation is banded-bordered, making the solution of the bundle adjustment complex. In this paper, the Conjugate Gradient Bundle Adjustment (CGBA) method is deduced by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix, as well as to accelerate the iteration rate of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method can effectively overcome the ill-conditioning of the normal equation and considerably improve the computational efficiency of bundle adjustment with additional parameters, while maintaining the actual accuracy.

  15. Comparison of Pectin Hydrogel Collection Methods in Microfluidic Device

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chaeyeon; Park, Ki-Su; Kang, Sung-Min; Kim, Jongmin; Song, YoungShin; Lee, Chang-Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-12-15

    This study investigated the effect of different collection methods on the physical properties of pectin hydrogels in a microfluidic synthesis approach. The pectin hydrogels were simply produced by the incorporation of calcium ions dissolved in a continuous mineral-oil phase. Three different collection methods for harvesting the pectin hydrogels (pipetting, tubing, and settling) were then applied. The settling method yielded the most uniform and monodisperse hydrogels: its coefficient of variation was 3.46, lower than that of the pipetting method (18.60) and the tubing method (14.76). With the settling method, we could control the size of the hydrogels from 30 μm to 180 μm by simple manipulation of the pectin viscosity and the volumetric flow rates of the dispersed and continuous phases. Finally, given their capacity for simple encapsulation of biological materials, we envision that the pectin hydrogels can be applied to drug delivery, food, and biocompatible materials.

  16. A Comparison between the Effect of Cooperative Learning Teaching Method and Lecture Teaching Method on Students' Learning and Satisfaction Level

    Science.gov (United States)

    Mohammadjani, Farzad; Tonkaboni, Forouzan

    2015-01-01

    The aim of the present research is to investigate a comparison between the effect of cooperative learning teaching method and lecture teaching method on students' learning and satisfaction level. The research population consisted of all the fourth grade elementary school students of educational district 4 in Shiraz. The statistical population…

  17. A statistical comparison of accelerated concrete testing methods

    OpenAIRE

    Denny Meyer

    1997-01-01

    Accelerated curing results, obtained after only 24 hours, are used to predict the 28 day strength of concrete. Various accelerated curing methods are available. Two of these methods are compared in relation to the accuracy of their predictions and the stability of the relationship between their 24 hour and 28 day concrete strength. The results suggest that Warm Water accelerated curing is preferable to Hot Water accelerated curing of concrete. In addition, some other methods for improving the...

  18. Comparison of microstickies measurement methods. Part II, Results and discussion

    Science.gov (United States)

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    In part I of this article we discussed the sample preparation procedure and described the various methods used for the measurement of microstickies. Some important features of the different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases to 45 °C-65 °C in others. Sample size ranges from as low as...

  19. Comparison of electrical conductivity calculation methods for natural waters

    Science.gov (United States)

    McCleskey, R. Blaine; Nordstrom, D. Kirk; Ryan, Joseph N.

    2012-01-01

    The capability of eleven methods to calculate the electrical conductivity of a wide range of natural waters from their chemical composition was investigated. A brief summary of each method is presented including equations to calculate the conductivities of individual ions, the ions incorporated, and the method's limitations. The ability of each method to reliably predict the conductivity depends on the ions included, effective accounting of ion pairing, and the accuracy of the equation used to estimate the ionic conductivities. The performances of the methods were evaluated by calculating the conductivity of 33 environmentally important electrolyte solutions, 41 U.S. Geological Survey standard reference water samples, and 1593 natural water samples. The natural waters tested include acid mine waters, geothermal waters, seawater, dilute mountain waters, and river water impacted by municipal waste water. The three most recent conductivity methods predict the conductivity of natural waters better than other methods. Two of the recent methods can be used to reliably calculate the conductivity for samples with pH values greater than about 3 and temperatures between 0 and 40°C. One method is applicable to a variety of natural water types with a range of pH from 1 to 10, temperature from 0 to 95°C, and ionic strength up to 1 m.
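
    The simplest class of method surveyed above sums the product of each ion's concentration and its ionic conductivity. A minimal sketch, using textbook infinite-dilution equivalent conductivities at 25 °C, so it ignores the ion pairing and ionic-strength corrections that the better-performing methods add:

        LAMBDA0 = {   # S cm^2 / eq at 25 degC (textbook limiting values, illustrative subset)
            "Na+": 50.1, "K+": 73.5, "Ca2+": 59.5, "H+": 349.8,
            "Cl-": 76.3, "SO4_2-": 80.0, "HCO3-": 44.5,
        }

        def conductivity_uS_cm(eq_per_L):
            """eq_per_L: {ion: equivalents per litre}; returns conductivity in uS/cm."""
            return sum(LAMBDA0[ion] * c for ion, c in eq_per_L.items()) * 1000.0

        # ~0.01 eq/L NaCl: about 1264 uS/cm; infinite-dilution values overestimate
        # the measured conductivity (~1180 uS/cm) of a real solution.
        print(conductivity_uS_cm({"Na+": 0.01, "Cl-": 0.01}))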

  20. Soil structure interaction calculations: a comparison of methods

    International Nuclear Information System (INIS)

    Wight, L.; Zaslawsky, M.

    1976-01-01

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes.

  1. Soil structure interaction calculations: a comparison of methods

    Energy Technology Data Exchange (ETDEWEB)

    Wight, L.; Zaslawsky, M.

    1976-07-22

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes.

  2. Comparison of digestion methods to determine heavy metals in fertilizers

    Directory of Open Access Journals (Sweden)

    Ygor Jacques Agra Bezerra da Silva

    2014-04-01

    Full Text Available The lack of a standard method to regulate heavy metal determination in Brazilian fertilizers and the subsequent use of several digestion methods have produced variations in the results, hampering interpretation. Thus, the aim of this study was to compare the effectiveness of three digestion methods for the determination of metals such as Cd, Ni, Pb, and Cr in fertilizers. Samples of 45 fertilizers marketed in northeastern Brazil were used. A fertilizer sample with heavy metal contents certified by the US National Institute of Standards and Technology (NIST) was used as control. The following fertilizers were tested: rock phosphate; organo-mineral fertilizer with rock phosphate; single superphosphate; triple superphosphate; mixed N-P-K fertilizer; and fertilizer with micronutrients. The samples were digested according to the method recommended by the Ministry of Agriculture, Livestock and Supply of Brazil (MAPA) and by methods 3051A and 3052 of the United States Environmental Protection Agency (USEPA). USEPA method 3052 recovered higher portions of the less soluble metals such as Ni and Pb, indicating that the conventional digestion methods for fertilizers underestimate the total amount of these elements. The results of USEPA method 3051A were very similar to those of the method currently used in Brazil (Brasil, 2006); the latter is preferable in view of its lower acid costs, shorter digestion period and greater reproducibility.

  3. Evaluation and comparison of mammalian subcellular localization prediction methods

    Directory of Open Access Journals (Sweden)

    Fink J Lynn

    2006-12-01

    Full Text Available Abstract Background Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER, peroxisome, and lysosome. The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localization underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE

  4. Comparison of different methods for estimation of potential evapotranspiration

    International Nuclear Information System (INIS)

    Nazeer, M.

    2010-01-01

    Evapotranspiration can be estimated by different available methods. The aim of this research study was to compare and evaluate the potential evapotranspiration measured with a Class A pan against the Hargreaves equation, the Penman equation, the Penman-Monteith equation, and the FAO56 Penman-Monteith equation. The evaporation rate recorded from the pan was greater than that of the stated methods. For each method, results were compared against the mean monthly potential evapotranspiration (PET) derived from pan data according to FAO (ETo = Kpan × Epan), using daily recorded data from twenty-five years (1984-2008). On the basis of statistical analysis, the differences between the pan data and the FAO56 Penman-Monteith method are not considered very significant (R² = 0.98) at the 95% confidence and prediction intervals. All methods require accurate weather data for precise results; for the purpose of this study, the past twenty-five years of data were analyzed and used, including maximum and minimum air temperature, relative humidity, wind speed, sunshine duration and rainfall. Based on linear regression analysis, the FAO56 PMM ranked first (R² = 0.98), followed by the Hargreaves method (R² = 0.96), the Penman-Monteith method (R² = 0.94) and the Penman method (R² = 0.93). Obviously, using the FAO56 Penman-Monteith method with precise climatic variables for ETo estimation is more reliable than the alternative methods; Hargreaves is simpler, relies only on air temperature data, and can be used as an alternative to the FAO56 Penman-Monteith method if other climatic data are missing or unreliable. (author)
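
    A minimal sketch of two of the estimators compared above: the FAO pan relation ETo = Kpan × Epan and the Hargreaves equation. The inputs and the pan coefficient below are illustrative values, not the study's data:

        import math

        def eto_pan(e_pan_mm, k_pan=0.7):
            """FAO pan method: ETo = Kpan * Epan (Kpan = 0.7 is an illustrative coefficient)."""
            return k_pan * e_pan_mm

        def eto_hargreaves(t_min, t_max, ra_mm_day):
            """Hargreaves (1985): ETo = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
            with extraterrestrial radiation Ra expressed in equivalent mm/day."""
            t_mean = (t_min + t_max) / 2.0
            return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

        print(eto_pan(8.0))                      # mm/day from a pan reading of 8 mm/day
        print(eto_hargreaves(18.0, 34.0, 14.0))  # about 5.6 mm/day for these illustrative inputs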

  5. Comparison contemporary methods of regeneration sodium-cationic filters

    Science.gov (United States)

    Burakov, I. A.; Burakov, A. Y.; Nikitina, I. S.; Verkhovsky, A. E.; Ilyushin, A. S.; Aladushkin, S. V.

    2017-11-01

    Regeneration plays a crucial role in the efficient application of sodium-cationic filters for water softening. Traditionally, NaCl brine is used as the regenerant. However, owing to the modern development of the energy industry and its close relationship with other industrial and academic sectors, the opportunity has arisen to use other solutions for regeneration. The report presents estimated data on the possibilities of using high-mineral-content well brines as regenerant solutions for sodium-cationic filters, both in primary application and after balneotherapeutic use, as well as reverse osmosis concentrates and repeatedly recycled spent regenerant water. The effectiveness of these solutions is compared with the traditional use of NaCl. A system for processing highly mineralized well brines after balneological use was developed and tested. Recommendations are given for using the considered solutions as regenerants for sodium-cationic units, and brine consumption rates for regeneration are defined.

  6. Advances in supercell calculation methods and comparison with measurements

    Energy Technology Data Exchange (ETDEWEB)

    Arsenault, B [Atomic Energy of Canada Limited, Mississauga, Ontario (Canada); Baril, R; Hotte, G [Hydro-Quebec, Central Nucleaire Gentilly, Montreal, Quebec (Canada)

    1996-07-01

    In the last few years, modelling techniques have been developed in new supercell computer codes. These techniques have been used to model the CANDU reactivity devices. One technique is based on one- and two-dimensional transport calculations with the WIMS-AECL lattice code, followed by super-homogenization and three-dimensional flux calculations in a modified version of the MULTICELL code. The second technique is based on two- and three-dimensional transport calculations in DRAGON: the code calculates the lattice properties by solving the transport equation in a two-dimensional geometry, followed by supercell calculations in three dimensions. These two calculation schemes have been used to calculate the incremental macroscopic properties of CANDU reactivity devices. The supercell size has also been modified to define incremental properties over a larger region. The results show improved agreement for the reactivity worth of zone controllers and adjusters; however, at the same time the agreement between measured and simulated flux distributions deteriorated somewhat. (author)

  7. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, May 2015

    International Nuclear Information System (INIS)

    Wang, Wenming; Yokoyama, Kenji; Kim, Do Heon; Kodeli, Ivan-Alexander; Hursin, Mathieu; Pelloni, Sandro; Palmiotti, Giuseppe; Salvatores, Massimo; Touran, Nicholas; Cabellos De Francisco, Oscar; )

    2015-05-01

    The aim of WPEC subgroup 39, 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files', is to provide criteria and practical approaches to effectively use the results of sensitivity analyses and cross-section adjustments for feedback to evaluators and differential measurement experimentalists, in order to improve the knowledge of neutron cross-sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the fourth subgroup meeting, held at the NEA, Issy-les-Moulineaux, France, on 19-20 May 2015. It comprises a summary record of the meeting, two papers on deliverables and all the available presentations (slides) given by the participants: 1 - Status of Deliverables: '1. Methodology' (K. Yokoyama); 2 - Status of Deliverables: '2. Comments on covariance data' (K. Yokoyama); 3 - PROTEUS HCLWR Experiments (M. Hursin); 4 - Preliminary UQ Efforts for TWR Design (N. Touran); 5 - Potential use of beta-eff and other benchmarks for adjustment (I. Kodeli); 6 - k-eff uncertainties for a simple case of Am-241 using different codes and evaluated files (I. Kodeli); 7 - k-eff uncertainties for a simple case of Am-241 using TSUNAMI (O. Cabellos); 8 - REWIND: Ranking Experiments by Weighting to Improve Nuclear Data (G. Palmiotti); 9 - Recent analysis on NUDUNA/MOCABA applications to reactor physics parameters (E. Castro); 10 - INL exploratory study for SEG (A. Hummel); 11 - The Development of Nuclear Data Adjustment Code at CNDC (H. Wu); 12 - SG39 Perspectives (M. Salvatores). A list of issues and actions concludes the document.

  8. Comparison of two disc diffusion methods with minimum inhibitory ...

    African Journals Online (AJOL)

    Susceptibility to penicillin, ciprofloxacin, tetracycline, ceftriaxone, spectinomycin and cefixime was determined by the CLSI and AGSP methods, and Kappa statistics were used to analyse the data with SPSS software. Results: All isolates were susceptible to ceftriaxone and spectinomycin by all three methods. Ninety-nine (99%) ...

  9. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
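
    A toy sketch of a concurrent interval-based contingency estimate in the spirit of the methods compared above (not the authors' exact algorithms): bin the session, then contrast the probability of reinforcement in intervals with and without a response. The event times are simulated.

        import numpy as np

        def interval_contingency(responses, reinforcers, bin_s=10, session_s=600):
            edges = np.arange(0, session_s + bin_s, bin_s)
            r = np.histogram(responses, edges)[0] > 0     # intervals containing a response
            sr = np.histogram(reinforcers, edges)[0] > 0  # intervals containing a reinforcer
            p_given_r = sr[r].mean() if r.any() else 0.0
            p_given_not = sr[~r].mean() if (~r).any() else 0.0
            return p_given_r - p_given_not

        rng = np.random.default_rng(1)
        resp = np.sort(rng.uniform(0, 600, 80))
        sr = resp[rng.random(80) < 0.3] + 1.0   # response-dependent reinforcers, 1 s later
        print(interval_contingency(resp, sr))   # positive value indicates a dependency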

  10. A comparison of five extraction methods for extracellular polymeric ...

    African Journals Online (AJOL)

    Two physical methods (centrifugation and ultrasonication) and 3 chemical methods (extraction with EDTA, extraction with formaldehyde, and extraction with formaldehyde plus NaOH) for extraction of EPS from alga-bacteria biofilm were assessed. Pretreatment with ultrasound at low intensity doubled the EPS yield without ...

  11. Comparison of Methods of Teaching Children Proper Lifting ...

    African Journals Online (AJOL)

    Objective: This study was designed to determine the effects of three teaching methods on children's ability to demonstrate and recall their mastery of proper lifting techniques. Method: Ninety-three primary five and six public school children who had no knowledge of proper lifting technique were assigned into three equal ...

  12. Soybean allergen detection methods--a comparison study

    DEFF Research Database (Denmark)

    Pedersen, M. Højgaard; Holzhauser, T.; Bisson, C.

    2008-01-01

    Soybean containing products are widely consumed, thus reliable methods for detection of soy in foods are needed in order to make appropriate risk assessment studies to adequately protect soy allergic patients. Six methods were compared using eight food products with a declared content of soy...

  13. Validation, verification and comparison: Adopting new methods in ...

    African Journals Online (AJOL)

    2005-07-03

    Jul 3, 2005 ... chemical analyses can be assumed to be homogeneously distrib- uted. When introduced ... For water microbiology this has been resolved with the publication of .... tion exercise can result in a laboratory adopting the method. If, however, the new ... For methods used for environmental sam- ples, a range of ...

  14. Comparison between different synchronization methods of identical chaotic systems

    International Nuclear Information System (INIS)

    Haeri, Mohammad; Khademian, Behzad

    2006-01-01

    This paper studies and compares three nonadaptive (bidirectional, unidirectional, and sliding mode) and two adaptive (active control and backstepping) synchronization methods on the synchronizing of four pairs of identical chaotic systems (Chua's circuit, Roessler system, Lorenz system, and Lue system). Results from computer simulations are presented in order to illustrate the effectiveness of the methods and to compare them based on different criteria

  15. Task exposures in an office environment: a comparison of methods.

    Science.gov (United States)

    Van Eerd, Dwayne; Hogg-Johnson, Sheilah; Mazumder, Anjali; Cole, Donald; Wells, Richard; Moore, Anne

    2009-10-01

    Task-related factors such as frequency and duration are associated with musculoskeletal disorders in office settings. The primary objective was to compare various task recording methods as measures of exposure in an office workplace. A total of 41 workers from different jobs were recruited from a large urban newspaper (71% female, mean age 41 years, SD 9.6). Questionnaires, task diaries, direct observation and video methods were used to record tasks, with a common set of task codes used across methods. Different estimates of task duration, number of tasks and task transitions arose from the different methods; self-report methods did not consistently result in longer task duration estimates. Methodological issues could explain some of the differences in estimates between methods. It was concluded that different task recording methods result in different estimates of exposure, likely because they capture different exposure constructs. This work addresses issues of exposure measurement in office environments and is of relevance to ergonomists and researchers interested in how best to assess the risk of injury among office workers. The paper discusses the trade-offs between precision, accuracy and burden in the collection of computer task-based exposure measures and the different underlying constructs captured by each method.

  16. Comparison of direct and precipitation methods for the estimation of ...

    African Journals Online (AJOL)

    Background: There is increase in use of direct assays for analysis of high and low density lipoprotein cholesterol by clinical laboratories despite differences in performance characteristics with conventional precipitation methods. Calculation of low density lipoprotein cholesterol in precipitation methods is based on total ...

  17. Computation Method Comparison for Th Based Seed-Blanket Cores

    International Nuclear Information System (INIS)

    Kolesnikov, S.; Galperin, A.; Shwageraus, E.

    2004-01-01

    This work compares two methods for calculating a given nuclear fuel cycle in the WASB configuration; both methods use the ELCOS code system (the 2-D transport code BOXER and the 3-D nodal code SILWER) [4]. In the first method, the cross-sections of the seed and blanket needed for the 3-D nodal code are generated separately for each region by the 2-D transport code. In the second method, they are generated from seed-blanket colorsets (Fig. 1) calculated by the 2-D transport code. The evaluation of the error introduced by the first method is the main objective of the present study.

  18. Comparison of RNA extraction methods in Thai aromatic coconut water

    Directory of Open Access Journals (Sweden)

    Nopporn Jaroonchon

    2015-10-01

    Full Text Available Many studies have reported that nucleic acid in coconut water is in free form and at very low yields, which makes it difficult to use in molecular studies. Our research compared two extraction methods to obtain a higher yield of total RNA from aromatic coconut water and monitored its change at various fruit stages. The first method used ethanol and sodium acetate as reagents; the second used lithium chloride. We found that extraction using only lithium chloride gave a higher total RNA yield than the method using ethanol to precipitate the nucleic acid. In addition, the total RNA from both methods could be used in amplification of betaine aldehyde dehydrogenase 2 (Badh2) genes, which are involved in coconut aroma biosynthesis, and could be used in further studies as we expected. From the molecular study, the nucleic acid found in coconut water increased with fruit age.

  19. The Comparison between Teacher Centered and Student Centered Educational Methods

    Directory of Open Access Journals (Sweden)

    M Anvar

    2009-02-01

    Full Text Available Background and Purpose: Various approaches to learning are suggested and practiced. Traditional medical education was more teacher-centered; in this method the students' involvement in the process of learning is not remarkable, whereas the new approach to medical education supports student involvement. This study evaluated the two methods of lecturing with respect to student involvement. Methods: One hundred and two first-year medical and nursing students took part in this study, and their opinions about the two methods of learning were obtained through a questionnaire. The subject of the lectures was "general psychology", which was delivered 50% by the students and 50% by the teacher. The statistical analysis was carried out with SPSS. Results: According to the students' opinions, in the student-centered method various aspects of learning such as mutual understanding and the use of textbooks and references increased significantly, whereas other aspects such as self-esteem, study time, innovation, and study attitude improved, but not significantly, compared with the teacher-centered method. In the teacher-centered method the understanding of the subjects increased significantly; other aspects of learning such as motivation and concentration improved, but not significantly, compared with the student-centered method. Conclusion: As the results showed, the student-centered method was favored in several aspects of learning, while in the teacher-centered method only understanding of the subject was better. Careful choice of teaching method to provide a comprehensive learning experience should take these differences into account. Key words: TEACHER CENTERED, STUDENT CENTERED, LEARNING

  20. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d`autres methodes d`optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: functions of n variables have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models: there the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient variable-discretization strategy and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
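
    A minimal continuous-variable simulated annealing sketch in the spirit of the adaptation described above (Gaussian moves scaled by temperature, geometric cooling, box constraints), demonstrated on the Rosenbrock test function; it is not the author's algorithm, whose discretization strategy and stopping criteria are more elaborate:

        import math, random

        def anneal(f, lo, hi, n_iter=20000, t0=1.0, alpha=0.9995):
            x = [random.uniform(l, h) for l, h in zip(lo, hi)]
            fx, t = f(x), t0
            best, fbest = x[:], fx
            for _ in range(n_iter):
                # propose a box-constrained Gaussian move, scaled by temperature
                y = [min(max(xi + random.gauss(0, t * (h - l) * 0.1), l), h)
                     for xi, l, h in zip(x, lo, hi)]
                fy = f(y)
                # accept improvements always, worsenings with Metropolis probability
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x[:], fx
                t *= alpha      # geometric cooling schedule
            return best, fbest

        rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
        print(anneal(rosen, [-2, -2], [2, 2]))   # minimum is at (1, 1)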

  1. Comparison of Different Drying Methods for Recovery of Mushroom DNA.

    Science.gov (United States)

    Wang, Shouxian; Liu, Yu; Xu, Jianping

    2017-06-07

    Several methods have been reported for drying mushroom specimens for population genetic, taxonomic, and phylogenetic studies. However, most methods have not been directly compared for their effectiveness in preserving mushroom DNA. In this study, we compared silica gel drying at ambient temperature and oven drying at seven different temperatures. Two mushroom species representing two types of fruiting bodies were examined: the fleshy button mushroom Agaricus bisporus and the leathery shelf fungus Trametes versicolor. For each species dried with the eight methods, we assessed the mushroom water loss rate, the quality and quantity of extracted DNA, and the effectiveness of using the extracted DNA as a template for PCR amplification of two DNA fragments (ITS and a single copy gene). Dried specimens from all tested methods yielded sufficient DNA for PCR amplification of the two genes in both species. However, differences among the methods for the two species were found in: (i) the time required by different drying methods for the fresh mushroom tissue to reach a stable weight; and (ii) the relative quality and quantity of the extracted genomic DNA. Among these methods, oven drying at 70 °C for 3-4 h seemed the most efficient for preserving field mushroom samples for subsequent molecular work.

  2. COMPARISON OF IMAGE ENHANCEMENT METHODS FOR CHROMOSOME KARYOTYPE IMAGE ENHANCEMENT

    Directory of Open Access Journals (Sweden)

    Dewa Made Sri Arsa

    2017-02-01

    Full Text Available The chromosome is a DNA structure that carries information about our life. This information can be obtained through karyotyping, a process that requires a clear image so the chromosomes can be evaluated well. Preprocessing, namely image enhancement, therefore has to be performed on chromosome images. The process starts with removal of the image background, after which the image is enhanced. This paper compares several methods for image enhancement: Histogram Equalization (HE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), Histogram Equalization with 3D Block Matching (HE+BM3D), and a basic enhancement method, unsharp masking. We examine and discuss the best method for enhancing chromosome images. To evaluate the methods, the original image was degraded by the addition of noise and blur. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) are used to examine method performance. The output of each enhancement method is compared with the result of the professional karyotyping analysis software Ikaros (MetaSystems). Based on the experimental results, the HE+BM3D method gives stable results in both the noised and blurred image scenarios.
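
    A minimal sketch of one of the compared methods (CLAHE) together with the two quality metrics used above, PSNR and SSIM; the images are synthetic stand-ins for a degraded chromosome image and its reference:

        import numpy as np
        import cv2
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        # Synthetic stand-ins for a reference image and a degraded (blurred) copy.
        rng = np.random.default_rng(0)
        reference = rng.integers(0, 256, (256, 256), dtype=np.uint8)
        degraded = cv2.GaussianBlur(reference, (5, 5), 0)

        # CLAHE: contrast-limited adaptive histogram equalization.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(degraded)

        print("PSNR:", peak_signal_noise_ratio(reference, enhanced))
        print("SSIM:", structural_similarity(reference, enhanced))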

  3. Comparison of methods used for estimating pharmacist counseling behaviors.

    Science.gov (United States)

    Schommer, J C; Sullivan, D L; Wiederholt, J B

    1994-01-01

    To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.

  4. Comparison of potential method in analytic hierarchy process for multi-attribute of catering service companies

    Science.gov (United States)

    Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah

    2017-08-01

    Analytic Hierarchy Process (AHP) is a method for structuring, measuring and synthesizing criteria, in particular for ranking multiple criteria in decision-making problems. The Potential Method, on the other hand, is a ranking procedure that utilizes a preference graph ς = (V, A): two nodes are adjacent if they are compared in a pairwise comparison, and the assigned arc is oriented towards the more preferred node. In this paper the Potential Method is used to solve a catering service selection problem, and its result is compared with that of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
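
    A minimal AHP sketch for context: priority weights derived from a pairwise comparison matrix via its principal eigenvector, plus Saaty's consistency ratio. The 3x3 matrix is illustrative, not data from the catering study:

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],     # reciprocal pairwise comparison matrix
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                        # priority weights (principal eigenvector)

        n = A.shape[0]
        ci = (vals.real[k] - n) / (n - 1)   # consistency index
        cr = ci / 0.58                      # random index RI = 0.58 for n = 3 (Saaty)
        print(w, cr)                        # cr < 0.1 is conventionally acceptable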

  5. Comparison of different methods for the solution of sets of linear equations

    International Nuclear Information System (INIS)

    Bilfinger, T.; Schmidt, F.

    1978-06-01

    The application of conjugate-gradient methods as novel general iterative methods for the solution of sets of linear equations with symmetric system matrices led to this paper, in which these methods are compared with the conventional, variously accelerated Gauss-Seidel iteration. In addition, the direct Cholesky method was included in the comparison. The studies referred mainly to memory requirements, computing time, speed of convergence, and accuracy under different conditions of the system matrices, from which the sensitivity of the methods to the influence of truncation errors may also be recognized. (orig.)
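
    A minimal sketch contrasting the two families compared above, a direct Cholesky solve versus iterative conjugate gradients, on a random symmetric positive definite system:

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve
        from scipy.sparse.linalg import cg

        # Build a random symmetric positive definite system.
        rng = np.random.default_rng(0)
        B = rng.normal(size=(300, 300))
        A = B @ B.T + 300 * np.eye(300)
        b = rng.normal(size=300)

        x_direct = cho_solve(cho_factor(A), b)   # direct Cholesky factorization and solve
        x_cg, info = cg(A, b)                    # iterative conjugate-gradient solve
        print(np.linalg.norm(x_direct - x_cg))   # small: both reach the same solution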

  6. Comparison of association mapping methods in a complex pedigreed population

    DEFF Research Database (Denmark)

    Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc

    2010-01-01

    to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse with a high false-positive rate, probably due to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure

  7. Comparison between calculation methods of dose rates in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, E.A.; Biaggio, M.F.; D R, M.F.; Almeida, C.E. de

    1998-01-01

    In radiation treatments of gynecologic tumors it is necessary to evaluate the quality of the results obtained by different methods of calculating the dose rates at the points of clinical interest (point A, rectum, bladder). The present work compares the results obtained by two methods: the three-dimensional Manual Calibration Method (MCM) (Vianello E. et al., 1998), which uses orthogonal radiographs for each patient under treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited, 1990), the latter verified experimentally (Vianello et al., 1996). The results show that MCM can be used in physical-clinical practice with percentage differences comparable to those of the computerized programs. (Author)

  8. A statistical comparison of accelerated concrete testing methods

    Directory of Open Access Journals (Sweden)

    Denny Meyer

    1997-01-01

    Full Text Available Accelerated curing results, obtained after only 24 hours, are used to predict the 28 day strength of concrete. Various accelerated curing methods are available. Two of these methods are compared in relation to the accuracy of their predictions and the stability of the relationship between their 24 hour and 28 day concrete strength. The results suggest that Warm Water accelerated curing is preferable to Hot Water accelerated curing of concrete. In addition, some other methods for improving the accuracy of predictions of 28 day strengths are suggested. In particular the frequency at which it is necessary to recalibrate the prediction equation is considered.

  9. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    Science.gov (United States)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging system's calibration parameters. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
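
    A minimal sketch of the stability metric reported above, the RMSE between 3D coordinates recovered from two temporal calibration datasets; the coordinate arrays are hypothetical placeholders:

        import numpy as np

        def coordinate_rmse(xyz_a, xyz_b):
            """RMSE between matched 3D points from two calibrations, shape (n, 3), in mm."""
            d = xyz_a - xyz_b
            return np.sqrt((d ** 2).sum(axis=1).mean())

        rng = np.random.default_rng(2)
        p_day1 = rng.uniform(0, 100, size=(50, 3))                # points from calibration 1
        p_day2 = p_day1 + rng.normal(0, 0.05, size=(50, 3))       # same points, calibration 2
        print(coordinate_rmse(p_day1, p_day2))                    # mm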

  10. Pyrrolizidine alkaloids in honey: comparison of analytical methods

    NARCIS (Netherlands)

    Kempf, M.; Wittig, M.; Reinhard, A.; Ohe, von der K.; Blacquière, T.; Raezke, K.P.; Michel, R.; Schreier, P.; Beuerle, T.

    2011-01-01

    Pyrrolizidine alkaloids (PAs) are a structurally diverse group of toxicologically relevant secondary plant metabolites. Currently, two analytical methods are used to determine PA content in honey. To achieve reasonably high sensitivity and selectivity, mass spectrometry detection is demanded. One

  11. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging ... 10 kriging models with different parameters were also obtained. ... shapes using stochastic optimization methods and ...

  12. Comparison of certain microbial counting methods which are ...

    African Journals Online (AJOL)

    2009-12-15

    Dec 15, 2009 ... continuous food analyses. For example, the ... plate method for the enumeration of E. coli in foods. However ... Although its preservation for 48 h is recommended in certain ...

  13. Teaching Idiomatic Expressions: A Comparison of Two Instructional Methods.

    Science.gov (United States)

    Rittenhouse, Robert K.; Kenyon, Patricia L.

    1990-01-01

    Twenty hearing-impaired adolescents were taught idiomatic expressions using captioned videotape presentations followed by classroom discussion, or by extended classroom discussions. Improvement in understanding idioms was significantly greater under the videotape method. (Author/JDD)

  14. Comparison of protein extraction methods suitable for proteomics ...

    African Journals Online (AJOL)

    2011-07-27

    Jul 27, 2011 ... An efficient protein extraction method is a prerequisite for successful implementation of proteomics. ... research, it is noteworthy to discover a proteome ...

  15. Comparison of matrix methods for elastic wave scattering problems

    International Nuclear Information System (INIS)

    Tsao, S.J.; Varadan, V.K.; Varadan, V.V.

    1983-01-01

    This article briefly describes the T-matrix method and the MOOT (method of optimal truncation) for elastic wave scattering as they apply to 2-D, SH-wave problems as well as 3-D elastic wave problems. The two methods are compared for scattering by elliptical cylinders as well as oblate spheroids of various eccentricities as a function of frequency. Convergence and symmetry of the scattering cross section are also compared for ellipses and spheroidal cavities of different aspect ratios. Both the T-matrix approach and the MOOT were programmed on an AMDAHL 470 computer using double-precision arithmetic. Although the T-matrix method and MOOT are not always in agreement, it is in no way implied that any of the published results using MOOT are in error

  16. Comparison of different methods for shielding design in computed tomography

    International Nuclear Information System (INIS)

    Ciraj-Bjelac, O.; Arandjic, D.; Kosutic, D.

    2011-01-01

    The purpose of this work is to compare different methods for shielding calculation in computed tomography (CT). The BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) and NCRP (National Council on Radiation Protection and Measurements) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thicknesses were also compared with those obtained from scatter dose measurements in the vicinity of a dedicated CT unit. The minimal requirement for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating an underestimation of up to 20% and an overestimation of up to 30% when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33% higher (27-42%). The BIR-IPEM-based results were comparable with values based on scattered dose measurements, while results obtained using the NCRP methodology overestimated the minimal required barrier thickness. (authors)
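
    For context, NCRP-style barrier sizing typically inverts the Archer transmission model B(x) = ((1 + b/a) * exp(a*g*x) - b/a)**(-1/g) to obtain the barrier thickness x for a required transmission B. A minimal sketch; the fit parameters below are illustrative placeholders, not published CT scatter values:

        import math

        def lead_thickness_mm(B, a, b, g):
            """Invert the Archer model for the thickness giving transmission B."""
            return (1.0 / (a * g)) * math.log((B ** (-g) + b / a) / (1.0 + b / a))

        # Illustrative parameters; prints roughly 0.63 mm of lead for B = 0.05.
        print(lead_thickness_mm(B=0.05, a=2.246, b=5.73, g=0.547))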

  17. Comparison between ASHRAE and ISO thermal transmittance calculation methods

    DEFF Research Database (Denmark)

    Blanusa, Petar; Goss, William P.; Roth, Hartwig

    2007-01-01

    is proportional to the glazing/frame sightline distance, which is also proportional to the total glazing spacer length. An example calculation of the overall heat transfer and thermal transmittance (U-value or U-factor) using the two methods for a thermally broken, aluminum-framed slider window is presented. The fenestration thermal transmittance calculation analyses presented in this paper show that small differences exist between the calculated thermal transmittance values produced by the ISO and ASHRAE methods. The results also show that the overall thermal transmittance difference between the two methodologies decreases as the total window area (glazing plus frame) increases; thus, the resulting difference in thermal transmittance values for the two methods is negligible for larger windows. This paper also shows algebraically that the differences between the ISO and ASHRAE methods turn out to be due to the way

  18. Comparison of the methods for determination of scintillation light yield

    CERN Document Server

    Sysoeva, E; Zelenskaya, O

    2002-01-01

    One of the most important characteristics of scintillators is the light yield. It depends not only on the properties of the scintillator, but also on the conditions of measurement. Even for widely used crystals, such as the alkali halide scintillators NaI(Tl) and CsI(Tl), the light yield data obtained by various authors differ. It is therefore very important to choose a convenient method for light yield measurement. In the present work, methods for the determination of the physical light yield, based on measurements of pulse amplitude, single-electron pulses and intrinsic photomultiplier resolution, are discussed. These methods have been used for measurements of the light yield of alkali halide crystals and oxide scintillators. Repeatability and reproducibility of the results were determined. All these methods are rather complicated in use, not for the measurements themselves, but for the subsequent data processing. Besides that, they demand a precise determination of the photoreceiver's parameters, as well as determination of light ...

  19. Comparison of different preconditioners for nonsymmetric finite volume element methods

    Energy Technology Data Exchange (ETDEWEB)

    Mishev, I.D.

    1996-12-31

    We consider a few different preconditioners for the linear systems arising from the discretization of 3-D convection-diffusion problems with the finite volume element method. Their theoretical and computational convergence rates are compared and discussed.

  20. COMPARISON OF FOUR METHODS TO DETECT ADVERSE EVENTS IN HOSPITAL

    Directory of Open Access Journals (Sweden)

    Inge Dhamanti

    2015-09-01

    Full Text Available Detecting adverse events has become one of the challenges in patient safety, and methods to detect adverse events are therefore critical for improving patient safety. The purpose of this paper is to compare the strengths and weaknesses of several methods of identifying adverse events in hospital, including medical record reviews, self-reported incidents, information technology, and patient self-reports. This study is a literature review comparing and analyzing these methods to determine the best one for hospitals to implement. All four methods have proven able to detect adverse events in hospitals, but each has strengths and limitations to be overcome. There is no single 'best' method that will give the best results for adverse event detection in hospital. Thus, to detect more adverse events that could have been prevented, or that have already occurred, hospitals should combine more than one detection method, since each method has a different sensitivity.

  1. Comparison of mentha extracts obtained by different extraction methods

    Directory of Open Access Journals (Sweden)

    Milić Slavica

    2006-01-01

    Full Text Available The different methods of mentha extraction, such as steam distillation, extraction by methylene chloride (Soxhlet extraction) and supercritical fluid extraction (SFE) by carbon dioxide (CO2), were investigated. SFE by CO2 was performed at a pressure of 100 bar and a temperature of 40°C. The extraction yield, as well as the qualitative and quantitative composition of the obtained extracts, determined by the GC-MS method, were compared.

  2. Comparison between different synchronization methods of identical chaotic systems

    Energy Technology Data Exchange (ETDEWEB)

    Haeri, Mohammad [Advanced Control System Laboratory, Electrical Engineering Department, Sharif University of Technology, Azadi Avenue, P.O. Box 11365-9363 Tehran (Iran, Islamic Republic of)]. E-mail: haeri@sina.sharif.edu; Khademian, Behzad [Advanced Control System Laboratory, Electrical Engineering Department, Sharif University of Technology, Azadi Avenue, P.O. Box 11365-9363 Tehran (Iran, Islamic Republic of)

    2006-08-15

    This paper studies and compares three nonadaptive (bidirectional, unidirectional, and sliding mode) and two adaptive (active control and backstepping) synchronization methods on the synchronizing of four pairs of identical chaotic systems (Chua's circuit, Roessler system, Lorenz system, and Lue system). Results from computer simulations are presented in order to illustrate the effectiveness of the methods and to compare them based on different criteria.

  3. Comparison of bone densitometry methods in healthy and osteoporotic women

    International Nuclear Information System (INIS)

    Reinbold, W.D.; Dinkel, E.; Genant, H.K.

    1988-01-01

    To compare methods of noninvasive measurement of bone mineral content, 40 healthy early postmenopausal women and 68 postmenopausal women with osteoporosis were studied. The methods included mono- and dual-energy quantitative computed tomography (QCT) and dual-photon absorptiometry (DPA) of the lumbar spine, single-photon absorptiometry (SPA) of the distal third of the radius, and combined cortical thickness (CCT) of the second metacarpal shaft. Lateral thoracolumbar radiographic studies were performed and the spinal fracture index calculated. There was good correlation between QCT and DPA methods in early postmenopausal women and moderate correlation in postmenopausal osteoporotic women. Correlations between spinal measurements (QCT or DPA) and appendicular cortical measurements (SPA or CCT) were moderate in healthy women and poor in osteoporotic women. Measurements resulting from one method were not predictive of measurements obtained by another method for individual patients. The strongest correlation with severity of vertebral fracture was provided by QCT and the weakest by SPA. There was good correlation between single- and dual-energy QCT results. Osteoporotic women and younger healthy women can be distinguished by the measurement of spinal trabecular bone density using QCT, and this method is more sensitive than the measurement of spinal integral bone by DPA or of appendicular cortical bone by SPA or CCT. (orig.)

  4. Comparison of the Methods for the Diagnosis of Onychomycosis

    Directory of Open Access Journals (Sweden)

    Banu Bayraktar

    2008-10-01

    Full Text Available Background and Design: A potassium hydroxide (KOH direct microscopic examination, histopathological investigation with periodic acid-Schiff (PAS stain and fungal culture are common diagnostic methods in the diagnosis of onychomycosis. The purpose of this study was to compare these three methods for the diagnosis of onychomycosis.Material and Method: A total of 100 patients who were suspected clinically of having onychomycosis on their toenails were included in this study. Three diagnostic tests were performed for each patients. In addition to clinical suspicion when one of the diagnostic tests was positive the diagnosis of onychomycosis was made. Accordingly, the sensitivity and the negative predictive value were calculated for each diagnostic test.Results: In 92 (92% of the patients, at least one of the three diagnostic methods was positive. The sensitivites of these methods were as follows: KOH direct microscopic examination, 92%; histopathological investigation with PAS stain, 80%; and culture, 20%. Their negative predictive values were also 53%, 42%, and 10% respectively. Conclusion: KOH direct microscopic examination is the most sensitive method for the diagnosis of onychomycosis. Its high negative predictive value supports this finding. (Turkderm 2008; 42: 91-3

  5. Comparison of Localization Methods for a Robot Soccer Team

    Directory of Open Access Journals (Sweden)

    H. Levent Akın

    2008-11-01

    Full Text Available In this work, several localization algorithms designed and implemented for the Cerberus'05 robot soccer team are analyzed and compared. These algorithms are used for global localization of autonomous mobile agents in the robotic soccer domain, to overcome uncertainty in the sensors, the environment and the motion model. The algorithms are Reverse Monte Carlo Localization (R-MCL), Simple Localization (S-Loc) and Sensor Resetting Localization (SRL). R-MCL is a hybrid method based on both Markov Localization (ML) and Monte Carlo Localization (MCL), where the ML module finds the region where the robot should be and MCL predicts the geometrical location with high precision by selecting samples in this region. S-Loc is another localization method in which just one sample per percept is drawn for global localization. Within this method a further novel method, My Environment (ME), is designed to hold the history and overcome the lack of information due to the drastic decrease in the number of samples in S-Loc. ME together with S-Loc was used in the Technical Challenges at RoboCup 2005 and played an important role in achieving first place in the Challenges. In this work, these methods, together with SRL, which is a widely used and successful localization algorithm, are tested both offline and in real time. First they are tested on a challenging data set used by many researchers and compared in terms of error rate against different levels of noise and sparsity. In addition, the time required to recover from kidnapping and the speed of the methods are tested and compared. Then their performances are tested in real-time tests with scenarios like the ones in the Technical Challenges at RoboCup. The main aim is to find the best method: one that is robust and fast, requires less computational power and memory than similar approaches, and is accurate enough for the high-level decision making that is vital for robot soccer.
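
    For readers unfamiliar with the MCL family, the predict-weight-resample cycle that R-MCL, S-Loc and SRL all build on can be sketched compactly. The following toy 1-D example assumes hypothetical Gaussian motion and sensor models; it is a generic illustration, not the Cerberus code:

      import numpy as np

      rng = np.random.default_rng(0)

      def mcl_step(particles, control, measurement, beacon=0.0):
          # 1. Predict: apply a noisy motion model to every particle.
          particles = particles + control + rng.normal(0.0, 0.1, particles.size)
          # 2. Weight: likelihood of a range measurement to a known beacon.
          expected = np.abs(particles - beacon)
          w = np.exp(-0.5 * ((measurement - expected) / 0.2) ** 2)
          w /= w.sum()
          # 3. Resample: draw the next particle set in proportion to the weights.
          return rng.choice(particles, size=particles.size, p=w)

      particles = rng.uniform(-5.0, 5.0, 500)        # global uncertainty
      particles = mcl_step(particles, control=1.0, measurement=2.0)
      print(particles.mean(), particles.std())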

  6. Comparison of Localization Methods for a Robot Soccer Team

    Directory of Open Access Journals (Sweden)

    Hatice Kose

    2006-12-01

    Full Text Available In this work, several localization algorithms designed and implemented for the Cerberus'05 robot soccer team are analyzed and compared. These algorithms are used for global localization of autonomous mobile agents in the robotic soccer domain, to overcome uncertainty in the sensors, the environment and the motion model. The algorithms are Reverse Monte Carlo Localization (R-MCL), Simple Localization (S-Loc) and Sensor Resetting Localization (SRL). R-MCL is a hybrid method based on both Markov Localization (ML) and Monte Carlo Localization (MCL), where the ML module finds the region where the robot should be and MCL predicts the geometrical location with high precision by selecting samples in this region. S-Loc is another localization method in which just one sample per percept is drawn for global localization. Within this method a further novel method, My Environment (ME), is designed to hold the history and overcome the lack of information due to the drastic decrease in the number of samples in S-Loc. ME together with S-Loc was used in the Technical Challenges at RoboCup 2005 and played an important role in achieving first place in the Challenges. In this work, these methods, together with SRL, which is a widely used and successful localization algorithm, are tested both offline and in real time. First they are tested on a challenging data set used by many researchers and compared in terms of error rate against different levels of noise and sparsity. In addition, the time required to recover from kidnapping and the speed of the methods are tested and compared. Then their performances are tested in real-time tests with scenarios like the ones in the Technical Challenges at RoboCup. The main aim is to find the best method: one that is robust and fast, requires less computational power and memory than similar approaches, and is accurate enough for the high-level decision making that is vital for robot soccer.

  7. A comparison of cosegregation analysis methods for the clinical setting.

    Science.gov (United States)

    Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H

    2018-04-01

    Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform counting meioses, which is unable to generate evidence in favour of a benign classification. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important, as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website (http://www.analyze.myvariant.org) which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user-supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
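
    The counting-meioses method mentioned above reduces to a simple likelihood ratio: under the benign hypothesis each informative meiosis transmits the variant with probability 1/2, so m fully cosegregating meioses give a ratio of 2^m in favour of pathogenicity, and no configuration of the data can produce evidence for benignity. A minimal sketch, assuming complete cosegregation (the FLB and CSLR calculations are far more involved and are not reproduced here):

      def counting_meioses_lr(m):
          """Likelihood ratio from m informative, fully cosegregating meioses.

          Under the benign hypothesis each meiosis transmits the variant with
          probability 1/2, so the observed data have probability (1/2)**m;
          under perfect cosegregation with pathogenicity it is ~1.
          """
          return 2 ** m

      for m in (3, 5, 7):
          print(m, "meioses -> LR =", counting_meioses_lr(m))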

  8. Determination of chloride in water. A comparison of three methods

    International Nuclear Information System (INIS)

    Steele, P.J.

    1978-09-01

    The presence of chloride in the water circuits of nuclear reactors, power stations and experimental rigs is undesirable because of the possibility of corrosion. Three methods are considered for the determination of chloride in water in the 0 to 10 μg ml⁻¹ range. The potentiometric method, using a silver-silver chloride electrode, is capable of determining chloride above the 0.1 μg ml⁻¹ level, with a standard deviation of 0.03 to 0.12 μg ml⁻¹ in the range 0.1 to 6.0 μg ml⁻¹ chloride. Bromide, iodide and strong reducing agents interfere, but none of the cations likely to be present has an effect. The method is very susceptible to variations in temperature. The turbidimetric method involves the production of suspended silver chloride by the addition of silver nitrate solution to the sample. The method is somewhat unreliable and is more useful as a rapid, routine limit-testing technique. In the third method, chloride in the sample is pre-concentrated by co-precipitation on lead phosphate, redissolved in acidified ferric nitrate solution and determined colorimetrically by the addition of mercuric thiocyanate solution. It is suitable for determining chloride in the range 0 to 50 μg, using a sample volume of 100 to 500 ml. None of the chemical species likely to be present interferes. In all three methods, chloride contamination can occur at any point in the determination. Analyses should be carried out in conditions where airborne contamination is minimised, and a high degree of cleanliness must be maintained. (author)

  9. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup

    2017-11-15

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)
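
    The core generation step of a differential evolution sampler such as Diver can be illustrated generically. The sketch below is a plain rand/1/bin mutation-crossover-selection step on a toy Gaussian log-likelihood; it is not the Diver or ScannerBit implementation:

      import numpy as np

      rng = np.random.default_rng(1)

      def de_step(pop, loglike, F=0.8, CR=0.9):
          """One rand/1/bin differential evolution generation (toy sketch)."""
          n, d = pop.shape
          new = pop.copy()
          for i in range(n):
              # Mutation: combine three distinct randomly chosen members.
              a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
              mutant = a + F * (b - c)
              # Binomial crossover with at least one mutant component.
              cross = rng.random(d) < CR
              cross[rng.integers(d)] = True
              trial = np.where(cross, mutant, pop[i])
              # Greedy selection on the log-likelihood.
              if loglike(trial) > loglike(pop[i]):
                  new[i] = trial
          return new

      loglike = lambda x: -0.5 * np.sum(x ** 2)      # toy Gaussian target
      pop = rng.uniform(-5, 5, (20, 2))
      for _ in range(50):
          pop = de_step(pop, loglike)
      print(pop.mean(axis=0))                        # converges toward the peak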

  10. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  11. Interface and thin film analysis: Comparison of methods, trends

    International Nuclear Information System (INIS)

    Werner, H.W.; Torrisi, A.

    1990-01-01

    Thin film properties are governed by a number of parameters such as: Surface and interface chemical composition, microstructure and the distribution of defects, dopants and impurities. For the determination of most of these aspects sophisticated analytical methods are needed. An overview of these analytical methods is given including: - Features and modes of analytical methods; - Main characteristics, advantages and disadvantages of the established methods [e.g. ESCA (Electron Spectroscopy for Chemical Analysis), AES (Auger Electron Spectroscopy), SIMS (Secondary Ion Mass Spectrometry), RBS (Rutherford Backscattering Spectrometry), SEM (Scanning Electron Microscopy), TEM (Transmission Electron Microscopy), illustrated with typical examples]; - Presentation of relatively new methods such as XRM (X-ray Microscopy) and SCAM (Scanning Acoustic Microscopy). Some features of ESCA (chemical information, insulator analysis, non-destructive depth profiling) have been selected for a more detailed presentation, viz. to illustrate the application of ESCA to practical problems. Trends in instrumental development and analytical applications of the techniques are discussed; the need for a multi-technique approach to solve complex analytical problems is emphasized. (orig.)

  12. Comparison of four surgical methods for eyebrow reconstruction

    Directory of Open Access Journals (Sweden)

    Omranifard Mahmood

    2007-01-01

    Full Text Available Background: The eyebrow plays an important role in facial harmony and eye protection. Eyebrows can be injured by burns, trauma, tumour, tattooing and alopecia. Eyebrow reconstruction has been done via several techniques. Here, our experience with a fairly new method for eyebrow reconstruction is presented. Materials and Methods: This is a descriptive-analytical study of 76 patients performed at the Al-Zahra and Imam Mousa Kazem hospitals of Isfahan University of Medical Sciences, Isfahan, Iran, from 1994 to 2004. In total, 86 eyebrows were reconstructed. All patients were examined before and after the operation. Methods commonly applied in eyebrow reconstruction are as follows: 1. Superficial Temporal Artery Flap (Island), 2. Interpolation Scalp Flap, 3. Graft. Our method, named the Forehead Facial Island Flap with inferior pedicle, provides an easier approach for the surgeon and a hair growth direction closer to the ideal for the patient. Results: Significantly lower rates of complication, along with greater patient satisfaction, were obtained with the Forehead Facial Island Flap. Conclusions: According to the results, this method appears to be more technically practical and aesthetically favourable when compared with the others.

  13. Comparison of matrix exponential methods for fuel burnup calculations

    International Nuclear Information System (INIS)

    Oh, Hyung Suk; Yang, Won Sik

    1999-01-01

    Series expansion methods for computing the exponential of a matrix have been compared by applying them to fuel depletion calculations. Specifically, Taylor, Padé, Chebyshev, and rational Chebyshev approximations have been investigated by approximating the exponentials of burnup matrices by truncated series of each method with the scaling and squaring algorithm. The accuracy and efficiency of these methods have been tested by performing various numerical tests using one thermal reactor and two fast reactor depletion problems. The results indicate that all four series methods are accurate enough to be used for fuel depletion calculations, although the rational Chebyshev approximation is relatively less accurate. They also show that the rational approximations are more efficient than the polynomial approximations. Considering computational accuracy and efficiency, the Padé approximation appears to be better than the other methods. Its accuracy is better than that of the rational Chebyshev approximation, while being comparable to the polynomial approximations. On the other hand, its efficiency is better than the polynomial approximations and is similar to the rational Chebyshev approximation. In particular, for fast reactor depletion calculations, it is faster than the polynomial approximations by a factor of ∼1.7. (author). 11 refs., 4 figs., 2 tabs
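
    General-purpose routines such as scipy.linalg.expm implement exactly this Padé-plus-scaling-and-squaring scheme, so a depletion step reduces to one matrix exponential per burn interval. A toy two-nuclide decay chain with illustrative (non-reactor) rate constants:

      import numpy as np
      from scipy.linalg import expm   # Pade approximation with scaling and squaring

      # Toy burnup matrix for a chain A -> B -> removal, rates in 1/s (illustrative)
      lam_a, lam_b = 1.0e-5, 3.0e-6
      A = np.array([[-lam_a,    0.0],
                    [ lam_a, -lam_b]])

      n0 = np.array([1.0e20, 0.0])    # initial nuclide densities
      t = 30 * 24 * 3600.0            # one 30-day burn step
      n_t = expm(A * t) @ n0          # densities at the end of the step
      print(n_t)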

  14. Comparison of DNA preservation methods for environmental bacterial community samples.

    Science.gov (United States)

    Gray, Michael A; Pratte, Zoe A; Kellogg, Christina A

    2013-02-01

    Field collections of environmental samples, for example corals, for molecular microbial analyses present distinct challenges. The lack of laboratory facilities in remote locations is common, and preservation of microbial community DNA for later study is critical. A particular challenge is keeping samples frozen in transit. Five nucleic acid preservation methods that do not require cold storage were compared for effectiveness over time and ease of use. Mixed microbial communities of known composition were created and preserved by DNAgard(™), RNAlater(®), DMSO-EDTA-salt (DESS), FTA(®) cards, and FTA Elute(®) cards. Automated ribosomal intergenic spacer analysis and clone libraries were used to detect specific changes in the faux communities over weeks and months of storage. A previously known bias in FTA(®) cards that results in lower recovery of pure cultures of Gram-positive bacteria was also detected in mixed community samples. There appears to be a uniform bias across all five preservation methods against microorganisms with high G + C DNA. Overall, the liquid-based preservatives (DNAgard(™), RNAlater(®), and DESS) outperformed the card-based methods. No single liquid method clearly outperformed the others, leaving method choice to be based on experimental design, field facilities, shipping constraints, and allowable cost. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  15. Adjoint Methods for Adjusting Three-Dimensional Atmosphere and Surface Properties to Fit Multi-Angle Multi-Pixel Polarimetric Measurements

    Science.gov (United States)

    Martin, William G.; Cairns, Brian; Bal, Guillaume

    2014-01-01

    This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
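
    The scaling property, a full gradient from one forward and one adjoint solve regardless of the number of parameters, is easiest to see in a linear toy problem. The sketch below is schematic and has nothing to do with the 3D VRTE solver itself: the model is A x = b + P p, the data operator is C, and the misfit is F(p) = 0.5*||C x - d||^2, so the gradient is P^T lam with A^T lam = C^T (C x - d):

      import numpy as np

      rng = np.random.default_rng(2)

      n, m, k = 50, 8, 200                 # state, parameters, measurements
      A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy "transport" operator
      P = rng.standard_normal((n, m))      # how parameters enter the source term
      C = rng.standard_normal((k, n))      # measurement operator
      b = rng.standard_normal(n)
      d = rng.standard_normal(k)           # data

      def misfit_and_gradient(p):
          x = np.linalg.solve(A, b + P @ p)        # one forward solve
          r = C @ x - d                            # residual
          lam = np.linalg.solve(A.T, C.T @ r)      # one adjoint solve
          return 0.5 * r @ r, P.T @ lam            # misfit and FULL gradient

      p = np.zeros(m)
      f, g = misfit_and_gradient(p)
      eps = 1e-6                                   # finite-difference check
      e0 = np.zeros(m); e0[0] = eps
      f1, _ = misfit_and_gradient(p + e0)
      print(g[0], (f1 - f) / eps)                  # should agree closely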

  16. Consistent comparison of angle Kappa adjustment between Oculyzer and Topolyzer Vario topography guided LASIK for myopia by EX500 excimer laser.

    Science.gov (United States)

    Sun, Ming-Shen; Zhang, Li; Guo, Ning; Song, Yan-Zheng; Zhang, Feng-Ju

    2018-01-01

    To evaluate and compare the uniformity of angle Kappa adjustment between Oculyzer- and Topolyzer Vario topography-guided ablation in laser in situ keratomileusis (LASIK) for myopia with the EX500 excimer laser. A total of 145 cases (290 consecutive eyes) with myopia received LASIK with a target of emmetropia. The ablation for 86 cases (172 eyes) was guided manually based on Oculyzer topography (study group), while the ablation for 59 cases (118 eyes) was guided automatically by Topolyzer Vario topography (control group). Adjustment values were measured separately in the horizontal and vertical directions of the cornea. Horizontally, synclastic adjustment between the actual manually applied values (dx_manu) and the Oculyzer topography-guided data (dx_ocu) accounted for 35.5% in the study group, with a mean dx_manu/dx_ocu of 0.78±0.48; in the control group, synclastic adjustment between the actual automatically applied values (dx_auto) and the Oculyzer topography data (dx_ocu) accounted for 54.2%, with a mean dx_auto/dx_ocu of 0.79±0.66. Vertically, synclastic adjustment between dy_manu and dy_ocu accounted for 55.2% in the study group, with a mean dy_manu/dy_ocu of 0.61±0.42; in the control group, synclastic adjustment between dy_auto and dy_ocu accounted for 66.1%, with a mean dy_auto/dy_ocu of 0.66±0.65. There was no statistically significant difference between the two groups in the ratio of actual values to Oculyzer topography-guided data in either the horizontal or the vertical direction (P=0.951, 0.621). There is high consistency in angle Kappa adjustment guided manually by Oculyzer and guided automatically by Topolyzer Vario topography during corneal refractive surgery with the WaveLight EX500 excimer laser.

  17. A comparison of non-integrating reprogramming methods

    Science.gov (United States)

    Schlaeger, Thorsten M; Daheron, Laurence; Brickler, Thomas R; Entwisle, Samuel; Chan, Karrie; Cianci, Amelia; DeVine, Alexander; Ettenger, Andrew; Fitzgerald, Kelly; Godfrey, Michelle; Gupta, Dipti; McPherson, Jade; Malwadkar, Prerana; Gupta, Manav; Bell, Blair; Doi, Akiko; Jung, Namyoung; Li, Xin; Lynes, Maureen S; Brookes, Emily; Cherry, Anne B C; Demirbas, Didem; Tsankov, Alexander M; Zon, Leonard I; Rubin, Lee L; Feinberg, Andrew P; Meissner, Alexander; Cowan, Chad A; Daley, George Q

    2015-01-01

    Human induced pluripotent stem cells (hiPSCs) are useful in disease modeling and drug discovery, and they promise to provide a new generation of cell-based therapeutics. To date there has been no systematic evaluation of the most widely used techniques for generating integration-free hiPSCs. Here we compare Sendai-viral (SeV), episomal (Epi) and mRNA transfection methods using a number of criteria. All methods generated high-quality hiPSCs, but significant differences existed in aneuploidy rates, reprogramming efficiency, reliability and workload. We discuss the advantages and shortcomings of each approach, and present and review the results of a survey of a large number of human reprogramming laboratories on their independent experiences and preferences. Our analysis provides a valuable resource to inform the use of specific reprogramming methods for different laboratories and different applications, including clinical translation. PMID:25437882

  18. A Comparison of Various Forecasting Methods for Autocorrelated Time Series

    Directory of Open Access Journals (Sweden)

    Karin Kandananond

    2012-07-01

    Full Text Available The accuracy of forecasts significantly affects the overall performance of a whole supply chain system. Sometimes, the nature of consumer products causes difficulties in forecasting future demand because of its complicated structure. In this study, two machine learning methods, an artificial neural network (ANN) and a support vector machine (SVM), and a traditional approach, the autoregressive integrated moving average (ARIMA) model, were utilized to predict the demand for consumer products. The training data used were the actual demand for six different products from a consumer product company in Thailand. Initially, each set of data was analysed using the Ljung-Box Q statistic to test for autocorrelation. Afterwards, each method was applied to the different sets of data. The results indicated that the SVM method had a better forecast quality (in terms of MAPE) than ANN and ARIMA in every product category.
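
    The two quantitative ingredients of the study, the Ljung-Box Q test for autocorrelation and MAPE as the comparison metric, are straightforward to reproduce. A sketch on synthetic demand data, assuming statsmodels is available:

      import numpy as np
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(3)
      demand = 100 + np.cumsum(rng.normal(0, 5, 120))   # synthetic monthly demand

      # Ljung-Box Q test for autocorrelation up to lag 12
      lb = acorr_ljungbox(demand, lags=[12], return_df=True)
      print(lb)   # a small p-value indicates the series is autocorrelated

      def mape(actual, forecast):
          """Mean absolute percentage error, the comparison metric used above."""
          actual, forecast = np.asarray(actual), np.asarray(forecast)
          return 100.0 * np.mean(np.abs((actual - forecast) / actual))

      naive = demand[:-1]              # naive one-step-ahead forecast as a baseline
      print(f"MAPE = {mape(demand[1:], naive):.2f}%")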

  19. Comparison of three methods for identification of pathogenic Neisseria species

    Energy Technology Data Exchange (ETDEWEB)

    Appelbaum, P.C.; Lawrence, R.B.

    1979-05-01

    A radiometric procedure was compared with the Minitek and Cystine Trypticase Agar sugar degradation methods for identification of 113 Neisseria species (58 Neisseria meningitidis, 51 Neisseria gonorrhoeae, 2 Neisseria lactamica, 2 Neisseria sicca). Identification of meningococci and gonococci was confirmed by agglutination and fluorescent antibody techniques, respectively. The Minitek method identified 97% of meningococci, 92% of gonococci, and 100% of other Neisseria after 4 h of incubation. The radiometric (Bactec) procedure identified 100% of gonococci and 100% of miscellaneous Neisseria after 3 h, but problems were encountered with meningococci: 45% of the latter strains yielded index values for fructose between 20 and 28 (recommended negative cut-off point, less than 20), with strongly positive (greater than 100) glucose and maltose and negative o-nitrophenyl-beta-D-galactopyranoside reactions in all 58 strains. The Cystine Trypticase Agar method identified 91% of meningococci.

  20. A comparison of multivariate genome-wide association methods

    DEFF Research Database (Denmark)

    Galesloot, Tessel E; Van Steen, Kristel; Kiemeney, Lambertus A L M

    2014-01-01

    Joint association analysis of multiple traits in a genome-wide association study (GWAS), i.e. a multivariate GWAS, offers several advantages over analyzing each trait in a separate GWAS. In this study we directly compared a number of multivariate GWAS methods using simulated data. We focused on six methods that are implemented in the software packages PLINK, SNPTEST, MultiPhen, BIMBAM, PCHAT and TATES, and also compared them to standard univariate GWAS, analysis of the first principal component of the traits, and meta-analysis of univariate results. We simulated data (N = 1000) for three... for scenarios with an opposite sign of genetic and residual correlation. All multivariate analyses resulted in a higher power than univariate analyses, even when only one of the traits was associated with the QTL. Hence, use of multivariate GWAS methods can be recommended, even when genetic correlations between

  1. Sensitivity of Spaceborne and Ground Radar Comparison Results to Data Analysis Methods and Constraints

    Science.gov (United States)

    Morris, Kenneth R.; Schwaller, Mathew

    2011-01-01

    With the availability of active weather radar observations from space from the Precipitation Radar (PR) on board the Tropical Rainfall Measuring Mission (TRMM) satellite, numerous studies have been performed comparing PR reflectivity and derived rain rates to similar observations from ground-based weather radars (GR). These studies have used a variety of algorithms to compute matching PR and GR volumes for comparison. Most studies have used a fixed 3-dimensional Cartesian grid centered on the ground radar, onto which the PR and GR data are interpolated using a proprietary approach and/or commonly available GR analysis software (e.g., SPRINT, REORDER). Other studies have focused on the intersection of the PR and GR viewing geometries either explicitly or using a hybrid of the fixed grid and PR/GR common fields of view. For the Dual-Frequency Precipitation Radar (DPR) of the upcoming Global Precipitation Measurement (GPM) mission, a prototype DPR/GR comparison algorithm based on similar TRMM PR data has been developed that defines the common volumes in terms of the geometric intersection of PR and GR rays, where smoothing of the PR and GR data are minimized and no interpolation is performed. The PR and GR volume-averaged reflectivity values of each sample volume are accompanied by descriptive metadata, for attributes including the variability and maximum of the reflectivity within the sample volume, and the fraction of range gates in the sample average having reflectivity values above an adjustable detection threshold (typically taken to be 18 dBZ for the PR). Sample volumes are further characterized by rain type (Stratiform or Convective), proximity to the melting layer, underlying surface (land/water/mixed), and the time difference between the PR and GR observations. The mean reflectivity differences between the PR and GR can differ between data sets produced by the different analysis methods; and for the GPM prototype, by the type of constraints and

  2. A method of adjusting SUV for injection-acquisition time differences in {sup 18}F-FDG PET Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Laffon, Eric [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Centre de Recherche Cardio-Thoracique, Bordeaux (France); Hopital du Haut-Leveque, Service de Medecine Nucleaire, Pessac (France); Clermont, Henri de [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Marthan, Roger [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Centre de Recherche Cardio-Thoracique, Bordeaux (France)

    2011-11-15

    A time normalisation method for tumour SUVs in ¹⁸F-FDG PET imaging is proposed and has been verified in lung cancer patients. A two-compartment model analysis showed that, when SUV is not corrected for ¹⁸F physical decay (SUV_uncorr), its value is within 5% of its peak value (t = 79 min) between 55 and 110 min after injection, in each individual patient. In 10 patients, each with 1 or more malignant lesions (n = 15), two PET acquisitions were performed within this time window, and the maximal SUV of each lesion, both corrected and uncorrected, was assessed. No significant difference was found between the two uncorrected SUVs, whereas there was a significant difference between the two corrected ones: mean differences were 0.04 ± 0.22 and 3.24 ± 0.75 g·ml⁻¹, respectively (95% confidence intervals). Therefore, a simple normalisation of the decay-corrected SUV for time differences after injection is proposed: SUV_N = 1.66 × SUV_uncorr, where the factor 1.66 arises from decay correction at t = 79 min. When ¹⁸F-FDG PET imaging is performed within the range 55-110 min after injection, this simple SUV normalisation for time differences after injection has been verified in patients with lung cancer, with a ±2.5% relative measurement uncertainty. (orig.)
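
    The proposed normalisation is easy to apply in code: the factor 1.66 is simply the ¹⁸F decay correction at 79 min (half-life about 109.8 min), so a decay-corrected SUV measured anywhere in the 55-110 min window is first stripped of its decay correction and then rescaled. A sketch under those assumptions:

      import math

      T_HALF_F18 = 109.77          # 18F half-life in minutes

      def suv_normalised(suv_corrected, t_min):
          """Normalise a decay-corrected SUV measured t_min after injection.

          Valid (per the study) for 55 <= t_min <= 110 min. Removing the decay
          correction gives SUV_uncorr, which is nearly time-independent in this
          window; multiplying by 1.66, the decay correction at t = 79 min,
          re-expresses it as a decay-corrected SUV at the reference time.
          """
          suv_uncorr = suv_corrected * math.exp(-math.log(2) * t_min / T_HALF_F18)
          return 1.66 * suv_uncorr

      print(suv_normalised(7.0, 60.0))    # SUVmax 7.0 measured at 60 min -> ~7.95
      print(suv_normalised(7.0, 100.0))   # the same corrected value later -> ~6.18

    As a sanity check, a measurement taken exactly at 79 min is returned essentially unchanged, since the un-correction and the 1.66 rescaling cancel there.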

  3. Comparison between powder and slices diffraction methods in teeth samples

    Energy Technology Data Exchange (ETDEWEB)

    Colaco, Marcos V.; Barroso, Regina C. [Universidade do Estado do Rio de Janeiro (IF/UERJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Aplicada; Porto, Isabel M. [Universidade Estadual de Campinas (FOP/UNICAMP), Piracicaba, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia; Gerlach, Raquel F. [Universidade de Sao Paulo (FORP/USP), Ribeirao Preto, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia, Estomatologia e Fisiologia; Costa, Fanny N. [Coordenacao dos Programas de Pos-Graduacao de Engenharia (LIN/COPPE/UFRJ), RJ (Brazil). Lab. de Instrumentacao Nuclear

    2011-07-01

    Proposing different methods to obtain crystallographic information about biological materials is important, since the powder method is nondestructive and slices are an approximation of what would be an in vivo analysis. Effects of sample preparation cause differences in scattering profiles compared with the powder method. The main inorganic component of bones and teeth is a calcium phosphate mineral whose structure closely resembles hydroxyapatite (HAp). The hexagonal symmetry, however, seems to work well with the powder diffraction data, and the crystal structure of HAp is usually described in space group P63/m. Ten third molar teeth were analyzed. Five teeth were separated into enamel, dentin and circumpulpal dentin powder, and five into slices. All the scattering profile measurements were carried out at the X-ray diffraction beamline (XRD1) of the National Synchrotron Light Laboratory (LNLS), Campinas, Brazil. The LNLS synchrotron light source is composed of a 1.37 GeV electron storage ring, delivering approximately 4×10¹⁰ photons/s at 8 keV. A double-crystal Si(111) pre-monochromator, upstream of the beamline, was used to select a small energy bandwidth at 11 keV. Scattering signatures were obtained at intervals of 0.04 deg for angles from 24 deg to 52 deg. The human enamel experimental crystallite sizes obtained in this work were 30(3) nm (112 reflection) and 30(3) nm (300 reflection). These values were obtained from measurements of powdered enamel. When comparing the enamel diffraction patterns obtained from slices, 58(8) nm (112 reflection) and 37(7) nm (300 reflection), with those generated by the powder specimens, a few differences emerge. This work shows the differences between the powder and slice methods, separating characteristics of the sample from the influence of the method. (author)
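
    Crystallite sizes of this kind are conventionally extracted from diffraction peak broadening with the Scherrer equation, D = K*lambda/(beta*cos(theta)). The abstract does not state the exact procedure used, so the following is a generic sketch with an assumed shape factor K = 0.9 and the wavelength of the 11 keV beam; the numbers in the example call are illustrative only:

      import math

      def scherrer_size(two_theta_deg, fwhm_deg, wavelength_A, K=0.9):
          """Crystallite size in nm from peak position and FWHM via Scherrer."""
          theta = math.radians(two_theta_deg / 2.0)
          beta = math.radians(fwhm_deg)          # FWHM in radians
          return 0.1 * K * wavelength_A / (beta * math.cos(theta))  # A -> nm

      lam = 12.398 / 11.0                        # wavelength in A for an 11 keV beam
      print(scherrer_size(32.0, 0.25, lam))      # ~24 nm for these sample numbers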

  4. CT hepatic perfusion measurement: Comparison of three analytic methods

    International Nuclear Information System (INIS)

    Kanda, Tomonori; Yoshikawa, Takeshi; Ohno, Yoshiharu; Kanata, Naoki; Koyama, Hisanobu; Takenaka, Daisuke; Sugimura, Kazuro

    2012-01-01

    Objectives: To compare the efficacy of three analytic methods, maximum slope (MS), the dual-input single-compartment model (CM) and deconvolution (DC), for CT measurements of hepatic perfusion, and to assess the effects of extra-hepatic systemic factors. Materials and methods: Eighty-eight patients suspected of having metastatic liver tumors underwent hepatic CT perfusion. The scans were performed at the hepatic hilum 7-77 s after administration of contrast material. Hepatic arterial and portal perfusion (HAP and HPP, ml/min/100 ml) and the arterial perfusion fraction (APF, %) were calculated with the three methods, followed by correlation assessment. Partial correlation analysis was used to assess the effects on hepatic perfusion values of various factors such as age, sex, risk of cardiovascular disease, arrival time of contrast material at the abdominal aorta, transit time from the abdominal aorta to the hepatic parenchyma, and liver dysfunction. Results: Mean HAP of MS was significantly higher than that of DC. HPP of CM was significantly higher than that of MS and DC, and HPP of MS was significantly higher than that of DC. There was no significant difference in APF. HAP and APF showed significant, moderate correlations among the methods. HPP showed a significant, moderate correlation between CM and DC, and poor correlation between MS and CM or DC. All methods showed weak correlations between HAP or APF and age or sex. Finally, MS showed weak correlations between HAP or HPP and arrival time or cardiovascular risk. Conclusions: Hepatic perfusion values arrived at with the three methods are not interchangeable. CM and DC are less susceptible to extra-hepatic systemic factors.
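
    Of the three methods, maximum slope is the most transparent: arterial perfusion is taken as the peak rate of liver enhancement divided by the peak aortic enhancement (the portal component is handled analogously with the portal input curve). A generic sketch on synthetic curves, with assumed units and not the authors' implementation:

      import numpy as np

      def max_slope_perfusion(t, liver_hu, aorta_hu):
          """Hepatic arterial perfusion (ml/min/100 ml) by the maximum slope method.

          Assumes baseline-subtracted HU curves sampled on a common time axis t
          in seconds. Illustrative sketch only.
          """
          slope = np.gradient(liver_hu, t).max()     # peak liver enhancement rate, HU/s
          peak_aorta = aorta_hu.max()                # peak arterial input, HU
          return 60.0 * 100.0 * slope / peak_aorta   # per minute, per 100 ml

      t = np.arange(0, 60, 2.0)
      aorta = 300 * np.exp(-0.5 * ((t - 20) / 5) ** 2)   # synthetic input curve
      liver = 40 * np.exp(-0.5 * ((t - 30) / 8) ** 2)    # synthetic liver curve
      print(max_slope_perfusion(t, liver, aorta))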

  5. Analysis and Comparison of Objective Methods for Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    P. S. Babkin

    2014-01-01

    Full Text Available The purpose of this work is the study and modification of reference objective methods for image quality assessment. The ultimate goal is to obtain a modification of the formal assessments that corresponds more closely to subjective expert estimates (MOS). In considering the formal reference objective methods for image quality assessment, we used the results of other authors, who offer comparative analyses of the most effective algorithms. Based on these investigations we chose the two most successful algorithms, PQS and MSSSIM, for which further analysis was carried out in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms that have great importance in practical implementation but are insufficiently covered in publications by other authors. In the implemented modification of the PQS algorithm, the Kirsch edge detector was replaced by the Canny edge detector. Further experiments were carried out according to the method of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the PQS objective assessment of image quality is applicable only to monochrome images). Images were obtained with a thermal imaging surveillance system. The experimental results proved the effectiveness of this modification. In the specialized literature on formal image quality evaluation methods, this type of modification has not been mentioned. The method described in the publication can be applied to various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article, and this will be used in solving problems of identification and automatic control.

  6. Soybean phospholipase D activity determination. A comparison of two methods

    Directory of Open Access Journals (Sweden)

    Ré, E.

    2007-09-01

    Full Text Available Due to a discrepancy between previously published results, two methods to determine soybean phospholipase D activity were evaluated. One method is based on extraction of the enzyme from whole soybean flour, quantifying the enzyme activity in the extract. The other method quantifies the enzymatic activity in whole soybean flour without enzyme extraction. In the extraction-based method, both the extraction time and the number of extractions were optimized. The highest phospholipase D activity values were obtained with the method without enzyme extraction. This method is less complex, requires less running time, and the conditions of the medium in which phospholipase D acts resemble the conditions found in the oil industry.

  7. Comparison of high efficiency particulate filter testing methods

    International Nuclear Information System (INIS)

    1985-01-01

    High Efficiency Particulate Air (HEPA) filters are used for the removal of submicron-size particulates from air streams. In the nuclear industry they are an important engineering safeguard to prevent the release of airborne radioactive particulates to the environment. HEPA filters used in the nuclear industry should therefore be manufactured and operated under strict quality control. There are three levels of testing HEPA filters: i) testing of the filter media; ii) testing of the assembled filter, including filter media and filter housing; and iii) on-site testing of the complete filter installation before putting it into operation, and later for the purpose of periodic control. A co-ordinated research programme on particulate filter testing methods was taken up by the Agency, and contracts were awarded to the Member Countries Belgium, the German Democratic Republic, India and Hungary. The investigations carried out by the participants of the present co-ordinated research programme cover the most frequently used HEPA filter testing methods for filter medium tests, rig tests and in-situ tests. Most of the experiments were carried out at ambient temperature and humidity, but indications were given to extend the investigations to elevated temperature and humidity in the future, for the purpose of testing the performance of HEPA filters under severe conditions. A major conclusion of the co-ordinated research programme was that it was not possible to recommend one method as a reference method for in-situ testing of high efficiency particulate air filters. Most of the present conventional methods are adequate for current requirements. The reasons why no method could be recommended were multiple, ranging from economic aspects, through incompatibility of materials, to national regulations

  8. Comparison between powder and slices diffraction methods in teeth samples

    International Nuclear Information System (INIS)

    Colaco, Marcos V.; Barroso, Regina C.; Porto, Isabel M.; Gerlach, Raquel F.; Costa, Fanny N.

    2011-01-01

    Proposing different methods to obtain crystallographic information about biological materials is important, since the powder method is nondestructive and slices are an approximation of what would be an in vivo analysis. Effects of sample preparation cause differences in scattering profiles compared with the powder method. The main inorganic component of bones and teeth is a calcium phosphate mineral whose structure closely resembles hydroxyapatite (HAp). The hexagonal symmetry, however, seems to work well with the powder diffraction data, and the crystal structure of HAp is usually described in space group P63/m. Ten third molar teeth were analyzed. Five teeth were separated into enamel, dentin and circumpulpal dentin powder, and five into slices. All the scattering profile measurements were carried out at the X-ray diffraction beamline (XRD1) of the National Synchrotron Light Laboratory (LNLS), Campinas, Brazil. The LNLS synchrotron light source is composed of a 1.37 GeV electron storage ring, delivering approximately 4×10¹⁰ photons/s at 8 keV. A double-crystal Si(111) pre-monochromator, upstream of the beamline, was used to select a small energy bandwidth at 11 keV. Scattering signatures were obtained at intervals of 0.04 deg for angles from 24 deg to 52 deg. The human enamel experimental crystallite sizes obtained in this work were 30(3) nm (112 reflection) and 30(3) nm (300 reflection). These values were obtained from measurements of powdered enamel. When comparing the enamel diffraction patterns obtained from slices, 58(8) nm (112 reflection) and 37(7) nm (300 reflection), with those generated by the powder specimens, a few differences emerge. This work shows the differences between the powder and slice methods, separating characteristics of the sample from the influence of the method. (author)

  9. Comparison of topotactic fluorination methods for complex oxide films

    Science.gov (United States)

    Moon, E. J.; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; Barbash, D.; May, S. J.

    2015-06-01

    We have investigated the synthesis of SrFeO3-αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  10. Comparison of topotactic fluorination methods for complex oxide films

    Energy Technology Data Exchange (ETDEWEB)

    Moon, E. J., E-mail: em582@drexel.edu; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; May, S. J., E-mail: smay@coe.drexel.edu [Department of Materials Science and Engineering, Drexel University, Philadelphia, Pennsylvania 19104 (United States); Barbash, D. [Centralized Research Facilities, Drexel University, Philadelphia, Pennsylvania 19104 (United States)

    2015-06-01

    We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  11. Comparison of optimization methods for electronic-structure calculations

    International Nuclear Information System (INIS)

    Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.

    1989-01-01

    The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed
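
    The flavour of this comparison can be reproduced with the simplest of the three procedures: direct finite-difference integration of the Williams-Soler first-order dynamics, dpsi/dt = -(H - eps)psi with eps the Rayleigh quotient, applied to a discretized Mathieu operator. The following is a schematic reconstruction under those assumptions, not the paper's code:

      import numpy as np

      # Discretized Mathieu operator H = -d^2/dx^2 + 2 q cos(2x) on [0, 2*pi)
      n, q = 200, 1.0
      x = np.linspace(0, 2 * np.pi, n, endpoint=False)
      h = x[1] - x[0]
      lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
      lap[0, -1] = lap[-1, 0] = 1.0            # periodic boundary conditions
      H = -lap / h**2 + np.diag(2 * q * np.cos(2 * x))

      # Simple finite-difference integration of dpsi/dt = -(H - eps) psi
      psi = np.random.default_rng(4).standard_normal(n)
      psi /= np.linalg.norm(psi)
      dt = 0.4 * h**2                          # stability-limited Euler step
      for _ in range(20000):
          eps = psi @ H @ psi                  # Rayleigh quotient
          psi -= dt * (H @ psi - eps * psi)
          psi /= np.linalg.norm(psi)

      print("relaxed eigenvalue:", psi @ H @ psi)
      print("exact lowest eigenvalue:", np.linalg.eigvalsh(H).min())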

  12. Formal Methods for Abstract Specifications – A Comparison of Concepts

    DEFF Research Database (Denmark)

    Instenberg, Martin; Schneider, Axel; Schnetter, Sabine

    2006-01-01

    In industry, formal methods are becoming increasingly important for the verification of hardware and software designs. However, current practice for the specification of system and protocol functionality at a high level of abstraction is textual description, and manual inspections and tests are the usual means of verifying system behavior. To facilitate the introduction of formal methods into the development process of complex systems and protocols, two different tools that evolved from research activities, UPPAAL and SpecEdit, have been investigated and compared regarding their concepts and functionality...

  13. Municipal solid waste processing methods: Technical-economic comparison

    International Nuclear Information System (INIS)

    Bertanza, G.

    1993-01-01

    This paper points out the advantages and disadvantages of municipal solid waste processing methods incorporating different energy and/or materials recovery techniques, i.e., those involving composting or incineration and those with a mix of composting and incineration. The various technologies employed are compared especially with regard to process reliability, flexibility, modularity, pollution control efficiency and cost effectiveness. With regard to composting, biodigesters are examined, while for incineration, the paper analyzes systems using combustion with complete recovery of vapour, combustion with total recovery of available electric energy, and combustion with cogeneration. Each of the processing methods examined includes an iron recovery cycle.

  14. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Y.; Borland, Michael

    2017-06-25

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  15. Comparison of dosimetric methods for virtual wedge analysis

    International Nuclear Information System (INIS)

    Bailey, M.; Nelson, V.; Collins, O.; West, M.; Holloway, L.; Rajapaske, S.; Arts, J.; Varas, J.; Cho, G.; Hill, R.

    2004-01-01

    Full text: The Siemens Virtual Wedge (Concord, USA) creates a wedged beam profile by moving a single collimator jaw across the specified field size whilst varying the dose rate and jaw speed, for use in the delivery of radiotherapy treatments. The measurement of the dosimetric characteristics of the Siemens Virtual Wedge poses significant challenges to medical physicists. This study investigates several different methods for measuring and analysing the Virtual Wedge, for data collection for treatment planning systems and ongoing quality assurance. The beam profiles of the Virtual Wedge (VW) were compared using several different dosimetric methods. Open field profiles were measured with Kodak X-Omat V (Rochester, NY, USA) radiographic film and compared with measurements made using the Sun Nuclear Profiler with a Motorized Drive Assembly (MDA) (Melbourne, FL, USA) and the Scanditronix Wellhofer CC13 ionisation chamber and 24-ion-chamber array (CA24) (Schwarzenbruck, Germany). The resolution of each dosimetric method for open field profiles was determined. The Virtual Wedge profiles were measured with radiographic film, the Profiler and the Scanditronix Wellhofer CA24 at 5 different depths. The ease of setup, time taken, analysis and accuracy of measurement were all evaluated to determine the method that would be both appropriate and practical for routine quality assurance of the Virtual Wedge. The open field profiles agreed within ±2% or 2 mm for all dosimetric methods. The accuracy of the Profiler and the CA24 is limited to half of the step size selected for each of these detectors. For the VW measurements a step size of 2 mm was selected for the Profiler and the CA24. The VW profiles for all dosimetric methods agreed within ±2% or 2 mm for the main wedged section of the profile. The toe and heel ends of the wedges showed significant discrepancies depending upon the dosimetry method used, up to 7% at the toe end with the CA24. The dosimetry of the

  16. Comparison of topotactic fluorination methods for complex oxide films

    Directory of Open Access Journals (Sweden)

    E. J. Moon

    2015-06-01

    Full Text Available We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  17. Statistical comparison of excystation methods in Cryptosporidium parvum oocysts

    Czech Academy of Sciences Publication Activity Database

    Pecková, R.; Stuart, P. D.; Sak, Bohumil; Květoňová, Dana; Kváč, Martin; Foitová, I.

    2016-01-01

    Roč. 230, OCT 30 (2016), s. 1-5 ISSN 0304-4017 R&D Projects: GA ČR(CZ) GAP505/11/1163 Institutional support: RVO:60077344 Keywords: Cryptosporidium parvum * excystation methods * in vitro cultivation * sodium hypochlorite * trypsin Subject RIV: EG - Zoology Impact factor: 2.356, year: 2016

  18. "A Comparison of Several Methods in a Rock Slope Stability ...

    African Journals Online (AJOL)

    This research uses the mentioned methods and principles in the stability analysis of some rock slopes in an open pit mine in Syria, the Khneifees phosphate mine. The importance of this research is that it shows the role of kinematical analysis in minimizing effort when verifying the safety of rock slopes on site, and when ...

  19. Self-compacting concretes (SCC: comparison of methods of dosage

    Directory of Open Access Journals (Sweden)

    B. F. Tutikian

    Full Text Available The composition of a self-compacting concrete (SCC) should be defined to fulfill a number of requirements, such as self-compactibility, strength and durability. This study aims to compare three dosage methods for SCC with local materials, so as to determine which one is the most economical and rational, thus assisting the executor in making a decision and enabling economic and technical feasibility for its application. The methods used in the experimental program were: Nan Su et al., developed in 2001 [1]; Repette-Melo, proposed in 2005 [2]; and Tutikian & Dal Molin, developed in 2007 [3]. From the results obtained in the experimental program, it was observed that the method which presented the lowest cost and highest compressive strength at the ages of 7, 28 and 91 days was Tutikian & Dal Molin, while the one which reached the lowest chloride ion penetration, best compactness and highest elasticity modulus was Repette-Melo. In tests carried out in the fresh state, all tested methods yielded mixtures which comply with the self-compactibility levels required by ABNT NBR 15823:2010 [4].

  20. Comparison of design methods for axially loaded buckets in sand

    DEFF Research Database (Denmark)

    Vaitkunaite, Evelina; Nielsen, Benjaminn Nordahl; Ibsen, Lars Bo

    2015-01-01

    It was found that the bearing capacity from the surcharge approximately doubles if the foundation skirt is twice as long. However, the predicted compressive soil capacity can differ by a factor of 3.6 depending on the chosen bearing capacity parameters. Few methods are available for the estimation

  1. Comparisons and Analyses of Gifted Students' Characteristics and Learning Methods

    Science.gov (United States)

    Lu, Jiamei; Li, Daqi; Stevens, Carla; Ye, Renmin

    2017-01-01

    Using PISA 2009, an international education database, this study compares gifted and talented (GT) students in three groups with normal (non-GT) students by examining student characteristics, reading, schooling, learning methods, and use of strategies for understanding and memorizing. Results indicate that the GT and non-GT gender distributions…

  2. Quantitative assessment of breast density: comparison of different methods

    International Nuclear Information System (INIS)

    Qin Naishan; Guo Li; Dang Yi; Song Luxin; Wang Xiaoying

    2011-01-01

    Objective: To compare different methods of quantitative breast density measurement. Methods: The study included sixty patients who underwent both mammography and breast MRI. Breast density was computed automatically on digital mammograms with the R2 workstation. Two experienced radiologists read the mammograms and assessed breast density with the Wolfe and ACR classifications, respectively. The fuzzy C-means clustering algorithm (FCM) was used to assess breast density on MRI. Each assessment method was repeated after 2 weeks. Spearman and Pearson correlations of inter- and intrareader and intermodality density estimates were computed. Results: Inter- and intrareader correlations of the Wolfe classification were 0.74 and 0.65, and they were 0.74 and 0.82 for the ACR classification, respectively. The correlation between the Wolfe and ACR classifications was 0.77. A high interreader correlation of 0.98 and an intrareader correlation of 0.96 were observed with the MR FCM measurement. The correlation between digital mammograms and MRI was high in the assessment of breast density (r=0.81, P<0.01). Conclusion: The high correlation of breast density estimates on digital mammograms and MRI FCM suggests the former could be used as a simple and accurate method. (authors)
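
    The FCM step used on the MR images can be sketched generically as alternating membership and centroid updates on pixel intensities. The following is textbook fuzzy c-means with fuzzifier m = 2 on synthetic data, not the authors' pipeline:

      import numpy as np

      def fcm_1d(values, c=2, m=2.0, iters=100, seed=0):
          """Textbook fuzzy c-means on a 1-D array of pixel intensities."""
          rng = np.random.default_rng(seed)
          u = rng.random((values.size, c))
          u /= u.sum(axis=1, keepdims=True)            # initial fuzzy memberships
          for _ in range(iters):
              w = u ** m
              centers = (w * values[:, None]).sum(axis=0) / w.sum(axis=0)
              d = np.abs(values[:, None] - centers) + 1e-12
              u = d ** (-2.0 / (m - 1.0))              # membership update
              u /= u.sum(axis=1, keepdims=True)
          return centers, u

      # Synthetic two-class intensity mixture (fatty vs dense tissue, illustrative)
      rng = np.random.default_rng(5)
      pixels = np.concatenate([rng.normal(60, 8, 4000), rng.normal(140, 10, 1000)])
      centers, u = fcm_1d(pixels)
      density = u[:, np.argmax(centers)].mean()        # mean "dense" membership
      print(centers, density)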

  3. Comparison of methods for cryostating superconducting dipole magnets

    International Nuclear Information System (INIS)

    Son Zun Gan; Filippov, Yu.P.; Zinchenko, S.I.

    1985-01-01

    An attempt is made to refine the basic parameters of the UNK cryogenic system, taking account of the real characteristics of horizontal two-phase helium flows, and to project ways of optimizing these parameters. Method 1, in which liquid helium in a state close to saturation is supplied to the chain of magnets, removes the heat released in the coils and coming from the environment at the expense of phase transformation, and leaves the chain as a vapour-liquid mixture, is compared with Method 2, in which magnet cooling is arranged at the expense of heat transfer from a one-phase direct flow to a two-phase helium counterflow. The results of calculations are presented as dependences of the maximum coil temperature on the length of the magnetic path. It is shown that for a chain length of about 300-400 m both methods are practically equivalent by the temperature criterion, but Method 1 is preferable due to the simpler design of the cryostat and the smaller helium quantity in the system

  4. COMPARISON OF LARGE RIVER SAMPLING METHODS ON ALGAL METRICS

    Science.gov (United States)

    We compared the results of four methods used to assess the algal communities at 60 sites distributed among four rivers. Based on Principle Component Analysis of physical habitat data collected concomitantly with the algal data, sites were separated into those with a mean thalweg...

  5. COMPARISON OF LARGE RIVER SAMPLING METHOD USING DIATOM METRICS

    Science.gov (United States)

    We compared the results of four methods used to assess the algal communities at 60 sites distributed among four rivers. Based on Principle Component Analysis of physical habitat data collected concomitantly with the algal data, sites were separated into those with a mean thalweg...

  6. Comparison of two detection methods in thin layer chromatographic ...

    African Journals Online (AJOL)

    o-tolidine plus potassium iodide and photosynthesis inhibition detection methods were investigated for the analysis of three triazine herbicides (atrazine, ametryne, simazine) and two urea herbicides (diuron, metobromuron) in a coastal savanna soil using thin layer chromatography to compare the suitability of the two ...

  7. Predicting proteasomal cleavage sites: a comparison of available methods

    DEFF Research Database (Denmark)

    Saxova, P.; Buus, S.; Brunak, Søren

    2003-01-01

    -terminal, in particular, of CTL epitopes is cleaved precisely by the proteasome, whereas the N-terminal is produced with an extension, and later trimmed by peptidases in the cytoplasm and in the endoplasmic reticulum. Recently, three publicly available methods have been developed for prediction of the specificity...

  8. Comparison of Two Disc Diffusion Methods with Minimum Inhibitory ...

    African Journals Online (AJOL)

    antimicrobial susceptibility pattern of N. gonorrhoeae may change rapidly, especially in areas where ineffective treatment regimens are applied.[3]. There are no universally accepted guidelines for testing the antimicrobial susceptibility of N. gonorrhoeae by a disc diffusion method, but different techniques are in practice, like ...

  9. Comparison of three methods for determination of protein ...

    African Journals Online (AJOL)

    However, a six fold greater amount of protein was obtained when FastPrep was applied to lyse LAB cells. Our results also indicate that, this fast and easy extraction method allows more spot-abundant polyacrylamide gels. More clear and consistent strips were detected by SDS-PAGE when proteins were extracted by ...

  10. A comparison of multidimensional scaling methods for perceptual mapping

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare

  11. Comparison of different methods to investigate postprandial lipaemia

    NARCIS (Netherlands)

    van Oostrom, A. J. H. H. M.; Alipour, A.; Sijmonsma, T. P.; Verseyden, C.; Dallinga-Thie, G. M.; Plokker, H. W. M.; Castro Cabezas, M.

    2009-01-01

    Postprandial hyperlipidaemia has been associated with coronary artery disease (CAD). We investigated which of the generally used methods to test postprandial lipaemia differentiated best between patients with premature CAD (50 +/- 4 years, n=20) and healthy controls. Furthermore, the effects of

  12. Comparison between implicit and hybrid solvation methods for the ...

    Indian Academy of Sciences (India)

    Administrator

    Both implicit solvation method (dielectric polarizable continuum model, DPCM) and hybrid ... the free energy change (ΔGsol) as per the PCM ... Here the gas phase change is written as ΔGg = ΔEelec + ..... bution to the field of electrochemistry.

  13. A comparison of three methods of Nitrogen analysis for feedstuffs

    African Journals Online (AJOL)

    Unknown

    Introduction. The Kjeldahl method for determining crude protein is very widely used for analysis of feed samples. However, it has its drawbacks and hence new techniques which are without some of the disadvantages are considered desirable. One such modification was developed by Hach et al. (1987). This promising ...

  14. Comparison of multiple gene assembly methods for metabolic engineering

    Science.gov (United States)

    Chenfeng Lu; Karen Mansoorabadi; Thomas Jeffries

    2007-01-01

    A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase...

  15. Comparison of Methods for Sparse Representation of Musical Signals

    DEFF Research Database (Denmark)

    Endelt, Line Ørtoft; la Cour-Harbo, Anders

    2005-01-01

    by a number of sparseness measures and results are shown on the ℓ1 norm of the coefficients, using a dictionary containing a Dirac basis, a Discrete Cosine Transform, and a Wavelet Packet. Evaluated only on the sparseness Matching Pursuit is the best method, and it is also relatively fast....

  16. Preliminary comparison of different reduction methods of graphene

    Indian Academy of Sciences (India)

    The reduction of graphene oxide (GO) is a promising route to bulk produce graphene-based sheets. Different reduction processes result in reduced graphene oxide (RGO) with different properties. In this paper three reduction methods, chemical, thermal and electrochemical reduction, were compared on three aspects ...

  17. Comparison of the performances and validation of three methods for ...

    African Journals Online (AJOL)

    SARAH

    2014-02-28

    Feb 28, 2014 ... bacteria in Norwegian slaughter pigs. Int J. Food Microbiol 1, 301–309. [NCFA] Nordic Committee of Food Analysis (1996). Yersinia enterocolitica Detection in foods 117,. 3rd,edn,1-12. Nowak, B., Mueffling, T.V., Caspari, K. and Hartung, J. 2006 Validation of a method for the detection of virulent Yersinia ...

  18. Comparison Of Different Methods For The Swimming Aerobic Capacity Evaluation.

    Science.gov (United States)

    Pelarigo, Jailton Gregório; Fernandes, Ricardo Jorge; Ribeiro, João; Denadai, Benedito Sérgio; Greco, Camila Coelho; Vilas-Boas, João Paulo

    2017-02-23

    This study compared velocity (v) and bioenergetical factors using different methods applied for the swimming aerobic capacity evaluation. Ten elite female swimmers (17.6 ± 1.9 yrs., 1.70 ± 0.05 m and 61.3 ± 5.8 kg) performed an intermittent incremental velocity protocol until voluntary exhaustion to determine the v associated to the individual anaerobic threshold (IAnT), ventilatory threshold (VT), heart rate threshold (HRT), lactate threshold fixed in 3.5 mmol.L (LT3.5) and maximal oxygen uptake (V[Combining Dot Above]O2max). Two-to-three 30 min submaximal constant tests for the v assessment at maximal lactate steady state (MLSS). The v, gas exchange, heart rate and blood lactate concentration variables were monitored in all tests. The values of all parameters at the v corresponding to MLSS, IAnT, VT and HRT were similar (p 0.400), except for carbon dioxide (V[Combining Dot Above]CO2) that was higher for MLSS compared to VT (p higher when compared to other methods for v and bioenergetical factors. It is suggested that IAnT, VT and HRT methods are better predictors of the intensity corresponding to the commonly accepted gold-standard method (i.e. MLSS) for the aerobic capacity evaluation compared to LT3.5.

  19. Comparison of methods for separating vibration sources in rotating machinery

    Science.gov (United States)

    Klein, Renata

    2017-12-01

    Vibro-acoustic signatures are widely used for diagnostics of rotating machinery. Vibration based automatic diagnostics systems need to achieve a good separation between signals generated by different sources. The separation task may be challenging, since the effects of the different vibration sources often overlap. In particular, there is a need to separate between signals related to the natural frequencies of the structure and signals resulting from the rotating components (signal whitening), as well as a need to separate between signals generated by asynchronous components like bearings and signals generated by cyclo-stationary components like gears. Several methods were proposed to achieve the above separation tasks. The present study compares between some of these methods. The paper also presents a new method for whitening, Adaptive Clutter Separation, as well as a new efficient algorithm for dephase, which separates between asynchronous and cyclo-stationary signals. For whitening the study compares between liftering of the high quefrencies and adaptive clutter separation. For separating between the asynchronous and the cyclo-stationary signals the study compares between liftering in the quefrency domain and dephase. The methods are compared using both simulated signals and real data.

  20. Comparison of traditional physico-chemical methods and molecular ...

    African Journals Online (AJOL)

    This study was aim to review the efficiency of molecular markers and traditional physico-chemical methods for the identification of basmati rice. The study involved 44 promising varieties of Indica rices collected from geographically distant places and adapted to irrigated and aerobic agro-ecosystems. Quality data for ...

  1. Comparison of protein extraction methods suitable for proteomics ...

    African Journals Online (AJOL)

    An efficient protein extraction method is a prerequisite for successful implementation of proteomics. In this study, seedling roots of Jerusalem artichoke were treated with the concentration of 250 mM NaCl for 36 h. Subsequently, six different protocols of protein extraction were applied to seedling roots of Jerusalem artichoke ...

  2. A Comparison of Assessment Methods and Raters in Product Creativity

    Science.gov (United States)

    Lu, Chia-Chen; Luh, Ding-Bang

    2012-01-01

    Although previous studies have attempted to use different experiences of raters to rate product creativity by adopting the Consensus Assessment Method (CAT) approach, the validity of replacing CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert ves.…

  3. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of

  4. Comparison of Chemical and Physical-chemical Wastewater Discoloring Methods

    Directory of Open Access Journals (Sweden)

    Durašević, V.

    2007-11-01

    Full Text Available Today's chemical and physical-chemical wastewater discoloration methods do not completely meet demands regarding degree of discoloration. In this paper discoloration was performed using Fenton (FeSO4 . 7 H2O + H2O2 + H2SO4 and Fenton-like (FeCl3 . 6 H2O + H2O2 + HCOOH chemical methods and physical-chemical method of coagulation/flocculation (using poly-electrolyte (POEL combining anion active coagulant (modified poly-acrylamides and cationic flocculant (product of nitrogen compounds in combination with adsorption on activated carbon. Suitability of aforementioned methods was investigated on reactive and acid dyes, regarding their most common use in the textile industry. Also, investigations on dyes of different chromogen (anthraquinone, phthalocyanine, azo and xanthene were carried out in order to determine the importance of molecular spatial structure. Oxidative effect of Fenton and Fenton-like reagents resulted in decomposition of colored chromogen and high degree of discoloration. However, the problem is the inability of adding POEL in stechiometrical ratio (also present in physical-chemical methods, when the phenomenon of overdosing coagulants occurs in order to obtain a higher degree of discoloration, creating a potential danger of burdening water with POEL. Input and output water quality was controlled through spectrophotometric measurements and standard biological parameters. In addition, part of the investigations concerned industrial wastewaters obtained from dyeing cotton materials using reactive dye (C. I. Reactive Blue 19, a process that demands the use of vast amounts of electrolytes. Also, investigations of industrial wastewaters was labeled as a crucial step carried out in order to avoid serious misassumptions and false conclusions, which may arise if dyeing processes are only simulated in the laboratory.

  5. The energetic cost of walking: a comparison of predictive methods.

    Directory of Open Access Journals (Sweden)

    Patricia Ann Kramer

    Full Text Available BACKGROUND: The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1 to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2 to investigate to what degree the prediction methods explain the variation in energy expenditure. METHODOLOGY/PRINCIPAL FINDINGS: We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. CONCLUSION: Our results indicate that the choice of predictive method is dependent on the question(s of interest and the data available for use as inputs. Although we

  6. The energetic cost of walking: a comparison of predictive methods.

    Science.gov (United States)

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended

  7. The significance of amlodipine on autonomic nervous system adjustment (ANSA method: A new approach in the treatment of hypertension

    Directory of Open Access Journals (Sweden)

    Milovanović Branislav

    2009-01-01

    Full Text Available Introduction. Cardiovascular autonomic modulation is altered in patients with essential hypertension. Objective To evaluate acute and long-term effects of amlodipine on cardiovascular autonomic function and haemodynamic status in patients with mild essential hypertension. Methods. Ninety patients (43 male, mean age 52.12 ±10.7 years with mild hypertension were tested before, 30 minutes after the first 5 mg oral dose of amlodipine and three weeks after monotherapy with amlodipine. A comprehensive study protocol was done including finger blood pressure variability (BPV and heart rate variability (HRV beat-to-beat analysis with impedance cardiography, ECG with software short-term HRV and nonlinear analysis, 24-hour Holter ECG monitoring with QT and HRV analysis, 24-hour blood pressure (BP monitoring with systolic and diastolic BPV analysis, cardiovascular autonomic reflex tests, cold pressure test, mental stress test. The patients were also divided into sympathetic and parasympathetic groups, depending on predominance in short time spectral analysis of sympathovagal balance according to low frequency and high frequency values. Results. We confirmed a significant systolic and diastolic BP reduction, and a reduction of pulse pressure during day, night and early morning hours. The reduction of supraventricular and ventricular ectopic beats during the night was also achieved with therapy, but without statistical significance. The increment of sympathetic activity in early phase of amlodipine therapy was without statistical significance and persistence of sympathetic predominance after a few weeks of therapy detected based on the results of short-term spectral HRV analysis. All time domain parameters of long-term HRV analysis were decreased and low frequency amongst spectral parameters. Amlodipne reduced baroreflex sensitivity after three weeks of therapy, but increased it immediately after the administration of the first dose. Conclusion. The results

  8. A comparison of published methods of calculation of defect significance

    International Nuclear Information System (INIS)

    Ingham, T.; Harrison, R.P.

    1982-01-01

    This paper presents some of the results obtained in a round-robin calculational exercise organised by the OECD Committee on the Safety of Nuclear Installations (CSNI). The exercise was initiated to examine practical aspects of using documented elastic-plastic fracture mechanics methods to calculate defect significance. The extent to which the objectives of the exercise were met is illustrated using solutions to 'standard' problems produced by UKAEA and CEGB using the methods given in ASME XI, Appendix A, BSI PD6493, and the CEGB R/H/R6 Document. Differences in critical or tolerable defect size defined using these procedures are examined in terms of their different treatments and reasons for discrepancies are discussed. (author)

  9. Rigid inclusions-Comparison between analytical and numerical methods

    International Nuclear Information System (INIS)

    Gomez Perez, R.; Melentijevic, S.

    2014-01-01

    This paper compares different analytical methods for analysis of rigid inclusions with finite element modeling. First of all, the load transfer in the distribution layer is analyzed for its different thicknesses and different inclusion grids to define the range between results obtained by analytical and numerical methods. The interaction between the soft soil and the inclusion in the estimation of settlements is studied as well. Considering different stiffness of the soft soil, settlements obtained analytical and numerically are compared. The influence of the soft soil modulus of elasticity on the neutral point depth was also performed by finite elements. This depth has a great importance for the definition of the total length of rigid inclusion. (Author)

  10. Comparison of optical methods for surface roughness characterization

    DEFF Research Database (Denmark)

    Feidenhans'l, Nikolaj Agentoft; Hansen, Poul Erik; Pilny, Lukas

    2015-01-01

    We report a study of the correlation between three optical methods for characterizing surface roughness: a laboratory scatterometer measuring the bi-directional reflection distribution function (BRDF instrument), a simple commercial scatterometer (rBRDF instrument), and a confocal optical profiler....... For each instrument, the effective range of spatial surface wavelengths is determined, and the common bandwidth used when comparing the evaluated roughness parameters. The compared roughness parameters are: the root-mean-square (RMS) profile deviation (Rq), the RMS profile slope (Rdq), and the variance...... of the scattering angle distribution (Aq). The twenty-two investigated samples were manufactured with several methods in order to obtain a suitable diversity of roughness patterns.Our study shows a one-to-one correlation of both the Rq and the Rdq roughness values when obtained with the BRDF and the confocal...

  11. Comparison of Force Reconstruction Methods for a Lumped Mass Beam

    Directory of Open Access Journals (Sweden)

    Vesta I. Bateman

    1997-01-01

    Full Text Available Two extensions of the force reconstruction method, the sum of weighted accelerations technique (SWAT, are presented in this article. SWAT requires the use of the structure’s elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a calibrated force input. The second technique uses the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using time eliminated elastic modes. All three methods are used to reconstruct forces for a simple structure.

  12. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics......, possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may pose this ability. In order to evaluate if the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used...... energy technologies. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary...

  13. A Comparison of Card-sorting Analysis Methods

    DEFF Research Database (Denmark)

    Nawaz, Ather

    2012-01-01

    This study investigates how the choice of analysis method for card sorting studies affects the suggested information structure for websites. In the card sorting technique, a variety of methods are used to analyse the resulting data. The analysis of card sorting data helps user experience (UX......) designers to discover the patterns in how users make classifications and thus to develop an optimal, user-centred website structure. During analysis, the recurrence of patterns of classification between users influences the resulting website structure. However, the algorithm used in the analysis influences...... the recurrent patterns found and thus has consequences for the resulting website design. This paper draws an attention to the choice of card sorting analysis and techniques and shows how it impacts the results. The research focuses on how the same data for card sorting can lead to different website structures...

  14. Comparison of methods for prioritizing risk in radiation oncology

    International Nuclear Information System (INIS)

    Biazotto, Bruna; Tokarski, Marcio

    2016-01-01

    Proactive risk management tools, such as Failure Mode and Effect Analysis (FEMA), were imported from engineering and have been widely used in Radiation Oncology. An important step in this process is the risk prioritization and there are many methods to do that. This paper compares the risk prioritization of computerized planning phase in interstitial implants with high dose rate brachytherapy performed with Health Care Failure Mode and Effect Analysis (HFMEA) and FMEA with guidelines given by the Task Group 100 (TG 100) of the American Association of Physicists in Medicine. Out of the 33 possible failure modes of this process, 21 require more attention when evaluated by HFMEA and 22, when evaluated by FMEA TG 100. Despite the high coincidence between the methods, the execution of HFMEA was simpler. (author)

  15. Stochastic interpretation of magnetotelluric data, comparison of methods

    Czech Academy of Sciences Publication Activity Database

    Červ, Václav; Menvielle, M.; Pek, Josef

    2007-01-01

    Roč. 50, č. 1 (2007), s. 7-19 ISSN 1593-5213 R&D Projects: GA ČR GA205/04/0740; GA ČR GA205/04/0746; GA MŠk ME 677 Institutional research plan: CEZ:AV0Z30120515 Keywords : magnetotelluric method * inverse problem * controlled random search Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.298, year: 2007

  16. Cerebral autoregulation after subarachnoid hemorrhage: comparison of three methods

    OpenAIRE

    Budohoski, Karol P; Czosnyka, Marek; Smielewski, Peter; Varsos, Georgios V; Kasprowicz, Magdalena; Brady, Ken M; Pickard, John D; Kirkpatrick, Peter J

    2012-01-01

    In patients after subarachnoid hemorrhage (SAH) failure of cerebral autoregulation is associated with delayed cerebral ischemia (DCI). Various methods of assessing autoregulation are available, but their predictive values remain unknown. We characterize the relationship between different indices of autoregulation. Patients with SAH within 5 days were included in a prospective study. The relationship between three indices of autoregulation was analyzed: two indices calculated using spontaneous...

  17. Comparison of calculational methods for EBT reactor nucleonics

    International Nuclear Information System (INIS)

    Henninger, R.J.; Seed, T.J.; Soran, P.D.; Dudziak, D.J.

    1980-01-01

    Nucleonic calculations for a preliminary conceptual design of the first wall/blanket/shield/coil assembly for an EBT reactor are described. Two-dimensional Monte Carlo, and one- and two-dimensional discrete-ordinates calculations are compared. Good agreement for the calculated values of tritium breeding and nuclear heating is seen. We find that the three methods are all useful and complementary as a design of this type evolves

  18. Comparison of neutronic transport equation resolution nodal methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.; Gho, C.J.

    1990-01-01

    In this work, some transport equation resolution nodal methods are comparatively studied: the constant-constant (CC), linear-nodal (LN) and the constant-quadratic (CQ). A nodal scheme equivalent to finite differences has been used for its programming, permitting its inclusion in existing codes. Some bidimensional problems have been solved, showing that linear-nodal (LN) are, in general, obtained with accuracy in CPU shorter times. (Author) [es

  19. Comparison of identification methods for oral asaccharolytic Eubacterium species.

    Science.gov (United States)

    Wade, W G; Slayne, M A; Aldred, M J

    1990-12-01

    Thirty one strains of oral, asaccharolytic Eubacterium spp. and the type strains of E. brachy, E. nodatum and E. timidum were subjected to three identification techniques--protein-profile analysis, determination of metabolic end-products, and the API ATB32A identification kit. Five clusters were obtained from numerical analysis of protein profiles and excellent correlations were seen with the other two methods. Protein profiles alone allowed unequivocal identification.

  20. Comparison between two methods for resin removing after bracket debonding

    OpenAIRE

    Marchi, Rodrigo De; Marchi, Luciana Manzotti De; Terada, Raquel Sano Suga; Terada, Hélio Hissashi

    2012-01-01

    OBJECTIVE: The aim of this study was to assess - using scanning electron microscopy (SEM) - the effectiveness of two abrasive discs, one made from silicon and one from aluminum oxide, in removing adhesive remnants (AR) after debonding orthodontic brackets. METHODS: Ten randomly selected bovine teeth were used, i.e., 2 in the control group, and the other 8 divided into two groups, which had orthodontic brackets bonded to their surface with Concise Orthodontic Adhesive (3M). The following metho...

  1. Comparison of three methods of feeding colostrum to dairy calves.

    Science.gov (United States)

    Besser, T E; Gay, C C; Pritchett, L

    1991-02-01

    Absorption of colostral immunoglobulins by Holstein calves was studied in 3 herds in which 3 methods of colostrum feeding were used. Failure of passive transfer, as determined by calf serum immunoglobulin G1 (IgG1) concentration less than 10 mg/ml at 48 hours of age, was diagnosed in 61.4% of calves from a dairy in which calves were nursed by their dams, 19.3% of calves from a dairy using nipple-bottle feeding, and 10.8% of calves from a dairy using tube feeding. The management factor determined to have the greatest influence on the probability of failure of passive transfer in the herds using artificial methods of colostrum feeding (bottle feeding or tube feeding) was the volume of colostrum fed as it affected the amount of IgG1 received by the calf. In dairies that used artificial feeding methods, failure of passive transfer was infrequent in calves fed greater than or equal to 100 g IgG1 in the first colostrum feeding. In the dairy that allowed calves to suckle, prevalence of failure of passive transfer was greater than 50% even among calves nursed by cows with above-average colostral IgG1 concentration. Analysis of the effect of other management factors on calf immunoglobulin absorption revealed small negative effects associated with the use of previously frozen colostrum and the use of colostrum from cows with long nonlactating intervals.

  2. Superhydrophobic transparent films from silica powder: Comparison of fabrication methods

    KAUST Repository

    Liu, Li-Der; Lin, Chao-Sung; Tikekar, Mukul; Chen, Ping-Hei

    2011-01-01

    The lotus leaf is known for its self-clean, superhydrophobic surface, which displays a hierarchical structure covered with a thin wax-like material. In this study, three fabrication techniques, using silicon dioxide particles to create surface roughness followed by a surface modification with a film of polydimethylsiloxane, were applied on a transparent glass substrate. The fabrication techniques differed mainly on the deposition of silicon dioxide particles, which included organic, inorganic, and physical methods. Each technique was used to coat three samples of varying particle load. The surface of each sample was evaluated with contact angle goniometer and optical spectrometer. Results confirmed the inverse relationships between contact angle and optical transmissivity independent of fabrication techniques. Microstructural morphologies also suggested the advantage of physical deposition over chemical methods. In summary, the direct sintering method proved outstanding for its contact angle vs transmissivity efficiency, and capable of generating a contact angle as high as 174°. © 2011 Elsevier B.V. All rights reserved.

  3. Comparison of tissue processing methods for microvascular visualization in axolotls.

    Science.gov (United States)

    Montoro, Rodrigo; Dickie, Renee

    2017-01-01

    The vascular system, the pipeline for oxygen and nutrient delivery to tissues, is essential for vertebrate development, growth, injury repair, and regeneration. With their capacity to regenerate entire appendages throughout their lifespan, axolotls are an unparalleled model for vertebrate regeneration, but they lack many of the molecular tools that facilitate vascular imaging in other animal models. The determination of vascular metrics requires high quality image data for the discrimination of vessels from background tissue. Quantification of the vasculature using perfused, cleared specimens is well-established in mammalian systems, but has not been widely employed in amphibians. The objective of this study was to optimize tissue preparation methods for the visualization of the microvascular network in axolotls, providing a basis for the quantification of regenerative angiogenesis. To accomplish this aim, we performed intracardiac perfusion of pigment-based contrast agents and evaluated aqueous and non-aqueous clearing techniques. The methods were verified by comparing the quality of the vascular images and the observable vascular density across treatment groups. Simple and inexpensive, these tissue processing techniques will be of use in studies assessing vascular growth and remodeling within the context of regeneration. Advantages of this method include: •Higher contrast of the vasculature within the 3D context of the surrounding tissue •Enhanced detection of microvasculature facilitating vascular quantification •Compatibility with other labeling techniques.

  4. Superhydrophobic transparent films from silica powder: Comparison of fabrication methods

    KAUST Repository

    Liu, Li-Der

    2011-07-01

    The lotus leaf is known for its self-clean, superhydrophobic surface, which displays a hierarchical structure covered with a thin wax-like material. In this study, three fabrication techniques, using silicon dioxide particles to create surface roughness followed by a surface modification with a film of polydimethylsiloxane, were applied on a transparent glass substrate. The fabrication techniques differed mainly on the deposition of silicon dioxide particles, which included organic, inorganic, and physical methods. Each technique was used to coat three samples of varying particle load. The surface of each sample was evaluated with contact angle goniometer and optical spectrometer. Results confirmed the inverse relationships between contact angle and optical transmissivity independent of fabrication techniques. Microstructural morphologies also suggested the advantage of physical deposition over chemical methods. In summary, the direct sintering method proved outstanding for its contact angle vs transmissivity efficiency, and capable of generating a contact angle as high as 174°. © 2011 Elsevier B.V. All rights reserved.

  5. Comparison of the reference mark azimuth determination methods

    Directory of Open Access Journals (Sweden)

    Danijel Šugar

    2013-03-01

    Full Text Available The knowledge of the azimuth of the reference mark is of crucial importance in the determination of the declination which is defined as the ellipsoidal (geodetic azimuth of the geomagnetic meridian. The accuracy of the azimuth determination has direct impact on the accuracy of the declination. The orientation of the Declination-Inclination Magnetometer is usually carried out by sighting the reference mark in two telescope faces in order to improve the reliability of the observations and eliminate some instrumental errors. In this paper, different coordinate as well as azimuth determination methods using GNSS (Global Navigation Satellite System observation techniques within VPPS (High-Precision Positioning Service and GPPS (Geodetic-Precision Positioning Service services of the CROPOS (CROatian POsitioning System system were explained. The azimuth determination by the observation of the Polaris was exposed and it was subsequently compared with the observation of the Sun using hour-angle and zenith-distance method. The procedure of the calculation of the geodetic azimuth from the astronomic azimuth was explained. The azimuth results obtained by different methods were compared and the recommendations on the minimal distance between repeat station and azimuth mark were given. The results shown in this paper were based on the observations taken on the POKU_SV repeat station.

  6. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, December 2015

    International Nuclear Information System (INIS)

    Cabellos, Oscar; De Saint Jean, Cyrille; Hursin, Mathieu; Pelloni, Sandro; Ivanov, Evgeny; Kodeli, Ivan; Leconte, Pierre; Palmiotti, Giuseppe; Salvatores, Massimo; Sobes, Vladimir; Yokoyama, Kenji

    2015-12-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the fifth formal Subgroup 39 meeting held at the Institute Curie, Paris, France, on 4 December 2015. It comprises a Summary Record of the meeting, and all the available presentations (slides) given by the participants: A - Sensitivity methods: - 1: Short update on deliverables (K. Yokoyama); - 2: Does one shot Bayesian is equivalent to successive update? Bayesian inference: some matrix linear algebra (C. De Saint Jean); - 3: Progress in Methodology (G. Palmiotti); - SG39-3: Use of PIA approach. Possible application to neutron propagation experiments (S. Pelloni); - 4: Update on sensitivity coefficient methods (E. Ivanov); - 5: Stress test for U-235 fission (H. Wu); - 6: Methods and approaches development at ORNL for providing feedback from integral benchmark experiments for improvement of nuclear data files (V. Sobes); B - Integral experiments: - 7a: Update on SEG analysis (G. Palmiotti); - 7b:Status of MANTRA (G. Palmiotti); - 7c: Possible new experiments at NRAD (G. Palmiotti); - 8: B-eff experiments (I. Kodeli); - 9: On going CEA activities related to dedicated integral experiments for nuclear date validation in the Fast energy range (P. Leconte); - 10: PROTEUS Experiments: an update (M. Hursin); - 11: Short updates on neutron propagation experiments, STEK, CIELO status (O. Cabellos)

  7. Cell synchrony techniques. I. A comparison of methods

    Energy Technology Data Exchange (ETDEWEB)

    Grdina, D.J.; Meistrich, M.L.; Meyn, R.E.; Johnson, T.S.; White, R.A.

    1984-01-01

    Selected cell synchrony techniques, as applied to asynchronous populations of Chinese hamster ovary (CHO) cells, have been compared. Aliquots from the same culture of exponentially growing cells were synchronized using mitotic selection, mitotic selection and hydroxyurea block, centrifugal elutriation, or an EPICS V cell sorter. Sorting of cells was achieved after staining cells with Hoechst 33258. After syncronization by the various methods the relative distribution of cells in G/sub 1/, S, or G/sub 2/ + M phases of the cell cycle was determined by flow cytometry. Fractions of synchronized cells obtained from each method were replated and allowed to progress through a second cell cycle. Mitotic selection gave rise to relatively pure and unperturbed early G/sub 1/ phase cells. While cell synchrony rapidly dispersed with time, cells progressed through the cell cycle in 12 hr. Sorting with the EPIC V on the modal G/sub 1/ peak yielded a relatively pure but heterogeneous G/sub 1/ population (i.e. early to late G/sub 1/). Again, synchrony dispersed with time, but cell-cycle progression required 14 hr. With centrifugal elutriation, several different cell populations synchronized throughout the cell cycle could be rapidly obtained with a purity comparable to mitotic selection and cell sorting. It was concluded that, either alone or in combination with blocking agents such as hydroxyurea, elutriation and mitotic selection were both excellent methods for synchronizing CHO cells. Cell sorting exhibited limitations in sample size and time required for synchronizing CHO cells. Its major advantage would be its ability to isolate cell populations unique with respect to selected cellular parameters. 19 references, 9 figures.

  8. Estimating recharge at yucca mountain, nevada, usa: comparison of methods

    International Nuclear Information System (INIS)

    Flint, A. L.; Flint, L. E.; Kwicklis, E. M.; Fabryka-Martin, J. T.; Bodvarsson, G. S.

    2001-01-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for and environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 nun/year near Yucca Crest. Site-scale recharge estimates range from less than I to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface. [References: 57

  9. Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods

    Science.gov (United States)

    Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.

    2002-01-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.

  10. Review and comparison of recent methods in space geodesy

    International Nuclear Information System (INIS)

    Varga, M.

    1983-01-01

    The study of geodynamic processes requires the application of new space-born geodesic measuring methods. A terrestrial reference system (TRS) is required for describing geodynamic processes. For this purpose satisfactory knowledge of polar motions, Earth rotation and tidal forces determined by laser, global positioning system (GPS) and VLBI measurements are needed. In addition, gravity and magnetic field of the Earth have to be known, modelled by using satellite to satellite traching (SST), altimetry, gradiometry and magnetometry results. Motions of the Earth-Moon system, as well as the relation between the terrestrial reference system and the inertial system can be determined by means of VLBI measurements. (author)

  11. A comparison of methods of assessment of scintigraphic colon transit.

    Science.gov (United States)

    Freedman, Patricia Noel; Goldberg, Paul A; Fataar, Abdul Basier; Mann, Michael M

    2006-06-01

    There is no standard method of analysis of scintigraphic colonic transit investigation. This study was designed to compare 4 techniques. Sixteen subjects (median age, 37.5 y; range, 21-61 y), who had sustained a spinal cord injury more than a year before the study, were given a pancake labeled with 10-18 MBq of (111)In bound to resin beads to eat. Anterior and posterior images were acquired with a gamma-camera 3 h after the meal and then 3 times a day for the next 4 d. Seven regions of interest, outlining the ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, rectosigmoid, and total abdominal activity at each time point, were drawn on the anterior and posterior images. The counts were decay corrected and the geometric mean (GM), for each region, at each time point calculated. The GM was used to calculate the percentage of the initial total abdominal activity in each region, at each time point. Colonic transit was assessed in 4 ways: (a) Three independent nuclear medicine physicians visually assessed transit on the analog images and classified subjects into 5 categories of colonic transit (rapid, intermediate, generalized delay, right-sided delay, or left-sided delay). (b) Parametric images were constructed from the percentage activity in each region at each time point. (c) The arrival and clearance times of the activity in the right and left colon were plotted as time-activity curves. (d) The geometric center of the distribution of the activity was calculated and plotted on a graph versus time. The results of these 4 methods were compared using an agreement matrix. Though simple to perform, the visual assessment was unreliable. The best agreement occurred between the parametric images and the arrival and clearance times of the activity in the right and left colon. The different methods of assessment do not produce uniform results. The best option for evaluating colonic transit appears to be a combination of the analog images

  12. The shortest-path problem analysis and comparison of methods

    CERN Document Server

    Ortega-Arranz, Hector; Gonzalez-Escribano, Arturo

    2014-01-01

    Many applications in different domains need to calculate the shortest-path between two points in a graph. In this paper we describe this shortest path problem in detail, starting with the classic Dijkstra's algorithm and moving to more advanced solutions that are currently applied to road network routing, including the use of heuristics and precomputation techniques. Since several of these improvements involve subtle changes to the search space, it may be difficult to appreciate their benefits in terms of time or space requirements. To make methods more comprehensive and to facilitate their co

  13. Calibrations of pocket dosemeters using a comparison method

    International Nuclear Information System (INIS)

    Somarriba V, I.

    1996-01-01

    This monograph is dedicated mainly to the calibration of pocket dosemeters. Various types of radiation sources used in hospitals and different radiation detectors with emphasis on ionization chambers are briefly presented. Calibration methods based on the use of a reference dosemeter were developed to calibrate all pocket dosemeters existing at the Radiation Physics and Metrology Laboratory. Some of these dosemeters were used in personnel dosimetry at hospitals. Moreover, a study was realized about factors that affect the measurements with pocket dosemeters in the long term, such as discharges due to cosmic radiation. A DBASE IV program was developed to store the information included in the hospital's registry

  14. Comparison of three methods to diagnose hip dysplasia in dogs

    International Nuclear Information System (INIS)

    Sharma, Vikas; Mohindroo, J.

    2009-01-01

    The present study was designed to compare the usefulness of goniometry, radiography and distraction index in diagnosis of hip dysplasia in dogs. During the study 25 clinical cases (50 joints) suspected for hip dysplasia were evaluated. Norberg angle was found to have a significant positive correlation with extension, flexion, abduction, and adduction angles and a significant negative correlation with distraction index (DI) measurements. It could be inferred that all the six parameters (NA, DI, extension, flexion, abduction, and adduction) were reliable indicators for early diagnosis of hip dysplasia.Goniometry could be used as a safe and easy method for preliminary suspicion of hip dysplasia

  15. Comparison of Particulate Mercury Measured with Manual and Automated Methods

    Directory of Open Access Journals (Sweden)

    Rachel Russo

    2011-01-01

    Full Text Available A study was conducted to compare measuring particulate mercury (HgP with the manual filter method and the automated Tekran system. Simultaneous measurements were conducted with the Tekran and Teflon filter methodologies in the marine and coastal continental atmospheres. Overall, the filter HgP values were on the average 21% higher than the Tekran HgP, and >85% of the data were outside of ±25% region surrounding the 1:1 line. In some cases the filter values were as much as 3-fold greater, with

  16. Variational configuration interaction methods and comparison with perturbation theory

    International Nuclear Information System (INIS)

    Pople, J.A.; Seeger, R.; Krishnan, R.

    1977-01-01

    A configuration interaction (CI) procedure which includes all single and double substitutions from an unrestricted Hartree-Fock single determinant is described. This has the feature that Moller-Plesset perturbation results to second and third order are obtained in the first CI iterative cycle. The procedure also avoids the necessity of a full two-electron integral transformation. A simple expression for correcting the final CI energy for lack of size consistency is proposed. Finally, calculations on a series of small molecules are presented to compare these CI methods with perturbation theory

  17. Seasonal comparison of two spatially distributed evapotranspiration mapping methods

    Science.gov (United States)

    Kisfaludi, Balázs; Csáki, Péter; Péterfalvi, József; Primusz, Péter

    2017-04-01

    More rainfall is disposed of through evapotranspiration (ET) on a global scale than through runoff and storage combined. In Hungary, about 90% of the precipitation evapotranspirates from the land and only 10% goes to surface runoff and groundwater recharge. Therefore, evapotranspiration is a very important element of the water balance, so it is a suitable parameter for the calibration of hydrological models. Monthly ET values of two MODIS-data based ET products were compared for the area of Hungary and for the vegetation period of the year 2008. The differences were assessed by land cover types and by elevation zones. One ET map was the MOD16, aiming at global coverage and provided by the MODIS Global Evaporation Project. The other method is called CREMAP, it was developed at the Budapest University of Technology and Economics for regional scale ET mapping. CREMAP was validated for the area of Hungary with good results, but ET maps were produced only for the period of 2000-2008. The aim of this research was to evaluate the performance of the MOD16 product compared to the CREMAP method. The average difference between the two products was the highest during summer, CREMAP estimating higher ET values by about 25 mm/month. In the spring and autumn, MOD16 ET values were higher by an average of 6 mm/month. The differences by land cover types showed a similar seasonal pattern to the average differences, and they correlated strongly with each other. Practically the same difference values could be calculated for arable lands and forests that together cover nearly 75% of the area of the country. Therefore, it can be said that the seasonal changes had the same effect on the two method's ET estimations in each land cover type areas. The analysis by elevation zones showed that on elevations lower than 200 m AMSL the trends of the difference values were similar to the average differences. The correlation between the values of these elevation zones was also strong. However weaker

  18. Bearing-only SLAM: comparison between probabilistic and deterministic methods

    OpenAIRE

    Joly , Cyril; Rives , Patrick

    2008-01-01

    This work deals with the problem of simultaneous localization and mapping (SLAM). Classical methods for solving the SLAM problem are based on the Extended Kalman Filter (EKF-SLAM) or the particle filter (FastSLAM). These kinds of algorithms allow on-line solving but can be inconsistent. This report studies global approaches rather than the above-mentioned on-line algorithms. Global approaches need all measurements from the initial step to the final step in order to compute the trajectory of the robot and...
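
    As a point of reference, global (batch) SLAM methods of this kind are commonly posed as a nonlinear least-squares problem over all poses and landmarks at once; a generic sketch of such an objective, not necessarily the report's exact estimator:

```latex
\hat{\mathbf{x}},\,\hat{\mathbf{m}} \;=\; \arg\min_{\mathbf{x},\,\mathbf{m}} \sum_{t,i} \left\| z_{t,i} - h\!\left(\mathbf{x}_{t}, \mathbf{m}_{i}\right) \right\|_{\Sigma^{-1}}^{2}
```

    Here x_t are the robot poses over the whole trajectory, m_i the landmark positions, z_{t,i} the bearing measurements and h the bearing observation model.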

  19. Comparison of Classification Methods for Detecting Emotion from Mandarin Speech

    Science.gov (United States)

    Pao, Tsang-Long; Chen, Yu-Te; Yeh, Jun-Heng

    It is said that technology comes out from humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought or imagined. If computers are made able to perceive and respond to human emotion, human-computer interaction will become more natural. Several classifiers have been adopted for automatically assigning an emotion category, such as anger, happiness or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compared several popular classification methods and evaluated their performance by applying them to a Mandarin speech corpus consisting of five basic emotions: anger, happiness, boredom, sadness and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for the 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compared these classifiers by applying them to another Mandarin expressive speech corpus consisting of two emotions. The experimental results still show that the proposed WD-MKNN outperforms the others.
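
    The proposed WD-MKNN variant is not specified in the abstract; as a baseline point of reference, a minimal distance-weighted k-NN classifier on MFCC-style feature vectors with scikit-learn, where the data are random stand-ins rather than real speech features:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Stand-in data: 500 utterances x 39 MFCC-derived features, 5 emotion classes
X = rng.normal(size=(500, 39))
y = rng.integers(0, 5, size=500)  # 0=anger, 1=happiness, 2=boredom, 3=sadness, 4=neutral

# Distance-weighted k-NN: closer neighbors get larger votes
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
scores = cross_val_score(knn, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```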

  20. Evaluation of the Match External Load in Soccer: Methods Comparison.

    Science.gov (United States)

    Castagna, Carlo; Varley, Matthew; Póvoas, Susana C A; D'Ottavio, Stefano

    2017-04-01

    To test the interchangeability of 2 match-analysis approaches for external-load detection considering arbitrarily selected speed and metabolic-power (MP) thresholds in male top-level soccer. Data analyses were performed considering match physical performance of 60 matches (1200 player cases) of randomly selected Spanish, German, and English first-division championship matches (2013-14 season). Match analysis was performed with a validated semiautomated multicamera system operating at 25 Hz. During a match, players covered 10,673 ± 348 m, of which 1778 ± 208 m and 2759 ± 241 m were performed at high intensity, as measured using speed (≥16 km/h, HI) and metabolic power (≥20 W/kg, MPHI) notations. High-intensity notations were nearly perfectly associated (r = .93). Player high-intensity decelerations (≥ -2 m/s²) were very largely associated with MPHI (r = .73). The two physical match-analysis methods can be independently used to track match external load in elite-level players. However, match-analyst decisions must be based on use of a single method to avoid bias in external-load determination.
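
    A minimal sketch of how high-intensity distance could be accumulated from a 25 Hz speed trace using the abstract's ≥16 km/h threshold; the speed data here are stand-ins, not tracking output:

```python
import numpy as np

FS = 25.0       # sampling rate of the multicamera system, Hz
DT = 1.0 / FS   # time step per sample, s

rng = np.random.default_rng(2)
speed_kmh = rng.uniform(0, 30, size=90 * 60 * int(FS))  # stand-in 90-min speed trace

# Distance covered above the high-intensity speed threshold (>= 16 km/h)
speed_ms = speed_kmh / 3.6
hi_distance_m = np.sum(speed_ms[speed_kmh >= 16.0] * DT)
print(f"high-intensity distance: {hi_distance_m:.0f} m")
```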

  1. Particle fluxes above forests: Observations, methodological considerations and method comparisons

    International Nuclear Information System (INIS)

    Pryor, S.C.; Larsen, S.E.; Sorensen, L.L.; Barthelmie, R.J.

    2008-01-01

    This paper reports a study designed to test, evaluate and compare micro-meteorological methods for determining the particle number flux above forest canopies. Half-hour average particle number fluxes above a representative broad-leaved forest in Denmark derived using eddy covariance range from -7 × 10⁷ m⁻² s⁻¹ (1st percentile) to 5 × 10⁷ m⁻² s⁻¹ (99th percentile), and have a median value of -1.6 × 10⁶ m⁻² s⁻¹. The statistical uncertainties associated with the particle number flux estimates are larger than those for momentum fluxes and imply that in this data set approximately half of the particle number fluxes are not statistically different from zero. Particle number fluxes from relaxed eddy accumulation (REA) and eddy covariance are highly correlated and of almost identical magnitude. Flux estimates from the co-spectral and dissipation methods are also correlated with those from eddy covariance but exhibit a higher absolute magnitude of fluxes. - Number fluxes of ultra-fine particles over a forest computed using four micro-meteorological techniques are highly correlated but vary in magnitude
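
    Eddy covariance estimates the flux as the covariance between fluctuations of vertical wind speed and particle number concentration over the averaging period; a minimal sketch with stand-in data, where the 10 Hz sampling rate is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10 * 60 * 30  # 30-min averaging period at an assumed 10 Hz
w = rng.normal(0.0, 0.3, size=n)        # vertical wind speed, m/s
c = 5e9 + rng.normal(0.0, 1e8, size=n)  # particle number concentration, m^-3

# Flux = mean product of fluctuations about the half-hour means (w'c')
flux = np.mean((w - w.mean()) * (c - c.mean()))  # units: m^-2 s^-1
print(f"particle number flux: {flux:.2e} m^-2 s^-1")
```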

  2. Method comparison of ultrasound and kilovoltage x-ray fiducial marker imaging for prostate radiotherapy targeting

    Science.gov (United States)

    Fuller, Clifton David; Thomas, Charles R., Jr.; Schwartz, Scott; Golden, Nanalei; Ting, Joe; Wong, Adrian; Erdogmus, Deniz; Scarbrough, Todd J.

    2006-10-01

    Several measurement techniques have been developed to address the capability for target volume reduction via target localization in image-guided radiotherapy; among these have been ultrasound (US) and fiducial marker (FM) software-assisted localization. In order to assess interchangeability between methods, US and FM localization were compared using established techniques for determining agreement between measurement methods when a 'gold-standard' comparator does not exist, after performing both techniques daily on a sequential series of patients. At least 3 days prior to CT simulation, four gold seeds were placed within the prostate. FM software-assisted localization utilized the ExacTrac X-Ray 6D (BrainLab AG, Germany) kVp x-ray image acquisition system to determine prostate position; US prostate targeting was performed on each patient using the SonArray (Varian, Palo Alto, CA). Patients were aligned daily using laser alignment of skin marks. Directional shifts were then calculated by each respective system in the X, Y and Z dimensions before each daily treatment fraction, prior to any treatment or couch adjustment, as well as a composite vector of displacement. Directional shift agreement in each axis was compared using Altman-Bland limits of agreement, Lin's concordance coefficient with Partik's grading schema, and Deming orthogonal bias-weighted correlation methodology. 1019 software-assisted shifts were suggested by US and FM in 39 patients. The 95% limits of agreement in the X, Y and Z axes were ±9.4 mm, ±11.3 mm and ±13.4 mm, respectively. Three-dimensionally, measurements agreed within 13.4 mm in 95% of all paired measures. In all axes, concordance was graded as 'poor' or 'unacceptable'. Deming regression detected proportional bias in both directional axes and three-dimensional vectors. Our data suggest substantial differences between US and FM image-guided measures and subsequent suggested directional shifts. Analysis reveals that the vast majority of
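
    A minimal sketch of two of the agreement statistics used above, Bland-Altman 95% limits of agreement and Lin's concordance correlation coefficient, computed on stand-in paired shift data (the simulated bias and spreads are assumptions):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    d = a - b
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

def lins_ccc(a, b):
    """Lin's concordance correlation coefficient."""
    cov = np.cov(a, b, ddof=1)[0, 1]
    return 2 * cov / (a.var(ddof=1) + b.var(ddof=1) + (a.mean() - b.mean()) ** 2)

rng = np.random.default_rng(4)
us_shift = rng.normal(0.0, 4.0, size=1019)             # stand-in US shifts, mm
fm_shift = us_shift + rng.normal(0.5, 5.0, size=1019)  # stand-in FM shifts, mm

lo, hi = limits_of_agreement(us_shift, fm_shift)
print(f"limits of agreement: ({lo:.1f}, {hi:.1f}) mm")
print(f"Lin's CCC: {lins_ccc(us_shift, fm_shift):.3f}")
```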

  3. Comparison of transfer entropy methods for financial time series

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian

    2017-09-01

    There is a certain relationship between the global financial markets, which creates an interactive network of global finance. Transfer entropy, a measure of information transfer, offers a good way to analyse such relationships. In this paper, we analysed the relationships between 9 stock indices from the U.S., Europe and China (from 1995 to 2015) by using transfer entropy (TE), effective transfer entropy (ETE), Rényi transfer entropy (RTE) and effective Rényi transfer entropy (ERTE). We compared the four methods in terms of their effectiveness in identifying the relationships between stock markets. Two kinds of information flows are given. One reveals that the U.S. took the leading position in lagged-current cases, but for same-date cases China is the most influential. ERTE could provide superior results overall.
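
    A minimal histogram-based estimator of first-order, base-2 transfer entropy, the quantity underlying all four variants compared above; the two return series are stand-ins with an artificially injected coupling:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=3):
    """Histogram estimate of TE from y to x: sum p(x1,x0,y0) * log2 of the
    ratio p(x1|x0,y0) / p(x1|x0), with series discretized into quantile bins."""
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(xd[1:], xd[:-1], yd[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xx = Counter(zip(xd[1:], xd[:-1]))           # (x_{t+1}, x_t)
    pairs_xy = Counter(zip(xd[:-1], yd[:-1]))          # (x_t, y_t)
    singles = Counter(xd[:-1])                         # x_t
    n = len(xd) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        te += (c / n) * np.log2((c * singles[x0]) /
                                (pairs_xx[(x1, x0)] * pairs_xy[(x0, y0)]))
    return te

rng = np.random.default_rng(5)
y = rng.normal(size=2000)                         # stand-in "driver" returns
x = 0.6 * np.roll(y, 1) + rng.normal(size=2000)   # stand-in "driven" returns
print(f"TE(y -> x) = {transfer_entropy(x, y):.3f} bits")
```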

  4. Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics

    Directory of Open Access Journals (Sweden)

    Vladimir S. Kublanov

    2017-01-01

    Full Text Available The paper presents an accuracy analysis of machine learning approaches applied to cardiac activity data. The study evaluates the possibility of diagnosing arterial hypertension by means of short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from arterial hypertension of II-III degree. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with radial basis, decision trees, and naive Bayes classifier. Moreover, different methods of feature extraction were analyzed: statistical, spectral, wavelet, and multifractal. All in all, 53 features were investigated. The results show that discriminant analysis achieves the highest classification accuracy, and that the suggested search for a non-correlated feature set achieved better results than a feature set based on principal components.
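
    A minimal sketch of this kind of classifier comparison with scikit-learn and cross-validation; the feature matrix is a random stand-in rather than real HRV data, and the sample size is enlarged beyond the study's 70 subjects to keep the covariance estimates well conditioned:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 53))       # stand-in: 53 HRV-derived features
y = np.array([0] * 150 + [1] * 150)  # 0 = healthy, 1 = hypertensive
X[y == 1] += 0.3                     # inject a weak class difference

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(),
    "SVM (RBF)": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:13s} 5-fold accuracy: {acc:.3f}")
```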

  5. Cost comparison between Subterrene and current tunneling methods. Final report

    International Nuclear Information System (INIS)

    Bledsoe, J.D.; Hill, J.E.; Coon, R.F.

    1975-05-01

    A study was made to compare tunnel construction costs between the Subterrene tunneling system and methods currently in use. Three completed tunnels were selected as study cases to represent finished diameters ranging from 3.05 meters (10 feet) to 6.25 meters (20.5 feet). The study cases were normalized by deleting extraneous work and assigning labor, equipment, and materials costs for the Southern California area in 1974. Detailed cost estimates (shown in Appendix A) were then made for the three tunnels as baselines. A conceptual nuclear-powered Subterrene tunneling machine (NSTM) was designed, and it was assumed that NSTMs were available for each of the three baseline tunnels. Costs were then estimated (shown in Appendix B) for the baseline tunnels driven by the NSTM

  6. Comparison of thermoluminescence detection methods for irradiated spices

    International Nuclear Information System (INIS)

    Kawamura, Y.; Murayama, M.; Uchiyama, S.; Saito, Y.

    1996-01-01

    Thermoluminescence (TL) analysis has been shown to be one of the most applicable methods for the detection of γ-irradiated spices. The technique was first introduced using the whole sample; it was then found that the origin of the TL response was mineral dust adhering to the spices, and TL measurement on separated minerals with normalisation by re-irradiation was subsequently established. This paper details investigations on TL measurements carried out on clean powdered spices stored for one year after being irradiated with doses of 1, 5, 10 and 30 kGy, in order to clarify the applicable dose range and the effects of storage and mineral content. The effect of mineral separation was also studied. (author)

  7. Comparison of different skin preservation methods with gamma irradiation.

    Science.gov (United States)

    Guerrero, Linda; Camacho, Bernardo

    2017-06-01

    Allografts are in constant demand, not only for burn victims but also for all open wounds, as "biological dressings". Tissue quality and safety are two of the major concerns of tissue banks, yet few studies have been published; although the preservation methods for cadaver skin have been discussed extensively, most of the available literature comes from clinical reports. In this research, the authors compared 85% glycerolized non-irradiated skin allografts with three glycerolized irradiated skin allografts (using different glycerol concentrations: 50%, 70% and 85%). Allograft quality was evaluated by measuring physical and biological properties of the prepared human tissue grafts. In the histological evaluation, structural changes were minimal and did not alter the skin structure. The clinical function of the grafts as temporary dressings was also tested: they proved to have similar capabilities for improving granulation tissue and contributing to wound bed closure (Hickerson et al. (1994) [1]). Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.

  8. A Comparison of Methods for Player Clustering via Behavioral Telemetry

    DEFF Research Database (Denmark)

    Drachen, Anders; Thurau, C.; Sifa, R.

    2013-01-01

    The analysis of user behavior in digital games has been aided by the introduction of user telemetry in game development, which provides unprecedented access to quantitative data on user behavior from the installed game clients of the entire population of players. Player behavior telemetry datasets ... patterns in the behavioral data, and developing profiles that are actionable to game developers. There are numerous methods for unsupervised clustering of user behavior, e.g. k-means/c-means, Nonnegative Matrix Factorization, or Principal Component Analysis. Although all yield behavior categorizations, interpretation of the resulting categories in terms of actual play behavior can be difficult if not impossible. In this paper, a range of unsupervised techniques are applied together with Archetypal Analysis to develop behavioral clusters from playtime data of 70,014 World of Warcraft players, covering a five ...
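
    A minimal sketch contrasting a hard clustering (k-means) with a soft, additive decomposition (NMF) on a stand-in playtime matrix; Archetypal Analysis itself is not available in scikit-learn and is not shown, and the matrix dimensions are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
# Stand-in playtime matrix: players x activity categories, nonnegative hours
playtime = rng.gamma(shape=2.0, scale=10.0, size=(1000, 20))

# k-means: each player is assigned to exactly one behavioral cluster
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(playtime)

# NMF: each player is a nonnegative mixture of additive behavioral components
nmf = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
weights = nmf.fit_transform(playtime)  # player loadings on each component

print("cluster sizes:", np.bincount(labels))
print("NMF loading matrix shape:", weights.shape)
```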

  9. Superconducting microstrip antennas: An experimental comparison of two feeding methods

    International Nuclear Information System (INIS)

    Richard, M.A.; Claspy, P.C.; Bhasin, K.B.

    1993-01-01

    The recent discovery of high-temperature superconductors (HTS's) has generated a substantial amount of interest in microstrip antenna applications. However, the high permittivity of substrates compatible with HTS causes difficulty in feeding such antennas because of the high patch edge impedance. In this paper, two methods for feeding HTS microstrip antennas at K and Ka-band are examined. Superconducting microstrip antennas that are directly coupled and gap-coupled to a microstrip transmission line have been designed and fabricated on lanthanum aluminate substrates using Y-Ba-Cu-O superconducting thin films. Measurements from these antennas, including input impedance, bandwidth, efficiency, and patterns, are presented and compared with published models. The measured results demonstrate that usable antennas can be constructed using either of these architectures, although the antennas suffer from narrow bandwidths. In each case, the HTS antenna shows a substantial improvement over an identical antenna made with normal metals

  10. Comparison between two methods for resin removing after bracket debonding

    Directory of Open Access Journals (Sweden)

    Rodrigo De Marchi

    2012-12-01

    Full Text Available OBJECTIVE: The aim of this study was to assess, using scanning electron microscopy (SEM), the effectiveness of two abrasive discs, one made from silicone and one from aluminum oxide, in removing adhesive remnants (AR) after debonding orthodontic brackets. METHODS: Ten randomly selected bovine teeth were used: 2 in the control group, and the other 8 divided into two groups, which had orthodontic brackets bonded to their surface with Concise Orthodontic Adhesive (3M). The following methods were employed - in one single step - to remove AR after debracketing: Group A, Optimize discs (TDV) and Group B, Onegloss discs (Shofu), used at low speed. After removing the AR with the aforementioned methods, the teeth were prepared to undergo SEM analysis, and photographs were taken of the enamel surface at 50x magnification. Six examiners evaluated the photographs applying the Zachrisson and Årtun (1979) enamel surface index (ESI) system. RESULTS: Group A exhibited minor scratches on the enamel surface as well as some AR in some of the photographs, while Group B showed a smoother surface, little or no AR and some abrasion marks in the photographs. No statistically significant differences were found between the two methods and the control group. CONCLUSIONS: The two abrasive discs were effective in removing the AR after bracket debonding in one single step.

  11. Comparison of methods for removing electromagnetic noise from electromyographic signals.

    Science.gov (United States)

    Defreitas, Jason M; Beck, Travis W; Stock, Matt S

    2012-02-01

    The purpose of this investigation was to compare three different methods of removing noise from monopolar electromyographic (EMG) signals: (a) electrical shielding with a Faraday cage, (b) denoising with a digital notch-filter and (c) applying a bipolar differentiation with another monopolar EMG signal. Ten men and ten women (mean age = 24.0 years) performed isometric muscle actions of the leg extensors at 10-100% of their maximal voluntary contraction on two separate occasions. One trial was performed inside a Faraday tent (a flexible Faraday cage made from conductive material), and the other was performed outside the Faraday tent. The EMG signals collected outside the Faraday tent were analyzed three separate ways: as a raw signal, as a bipolar signal, and as a signal digitally notch filtered to remove 60 Hz noise and its harmonics. The signal-to-noise ratios were greatest after notch-filtering (range: 3.0-33.8), and lowest for the bipolar arrangement (1.6-10.2). Linear slope coefficients for the EMG amplitude versus force relationship were also used to compare the methods of noise removal. The results showed that a bipolar arrangement had a significantly lower linear slope coefficient when compared to the three other conditions (raw, notch and tent). These results suggested that an appropriately filtered monopolar EMG signal can be useful in situations that require a large pick-up area. Furthermore, although it is helpful, a Faraday tent (or cage) is not required to achieve an appropriate signal-to-noise ratio, as long as the correct filters are applied.
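
    A minimal sketch of the notch-filtering step, cascading filters at 60 Hz and its harmonics with SciPy; the sampling rate, filter Q and synthetic interference amplitudes are assumptions:

```python
import numpy as np
from scipy import signal

FS = 1000.0  # EMG sampling rate in Hz (an assumption; not stated in the abstract)

rng = np.random.default_rng(8)
t = np.arange(0, 5, 1 / FS)
emg = rng.normal(size=t.size)             # stand-in monopolar EMG
emg += 0.8 * np.sin(2 * np.pi * 60 * t)   # 60 Hz mains interference
emg += 0.3 * np.sin(2 * np.pi * 180 * t)  # third harmonic

# Cascade narrow notch filters at 60 Hz and every harmonic below Nyquist
filtered = emg.copy()
for f0 in np.arange(60.0, FS / 2, 60.0):
    b, a = signal.iirnotch(w0=f0, Q=30.0, fs=FS)
    filtered = signal.filtfilt(b, a, filtered)  # zero-phase filtering

print(f"signal std before: {emg.std():.2f}, after notching: {filtered.std():.2f}")
```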

  13. Study and comparison of different methods control in light water critical facility

    International Nuclear Information System (INIS)

    Michaiel, M.L.; Mahmoud, M.S.

    1980-01-01

    The control of nuclear reactors may be studied using several control methods, such as control by rod absorbers, by inserting or removing fuel rods (moderator cavities), or by changing reflector thickness. Every method has its advantages; the purpose of this work is to compare these different methods and their effects on the reactivity of a reactor. A computer program written by the authors calculates the critical radius and control worth for each of the three aforementioned control methods

  14. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems grows, it is a challenge to determine whether different IGRT methods may be used interchangeably, and there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p-values < 0.01). Intra-method and overall agreement differences were statistically significant for both the X- and Z-axes (all p-values < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed
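
    Roy's LME model with a Kronecker product covariance structure has no off-the-shelf Python implementation; as a greatly simplified stand-in for a single axis, a basic random-intercept mixed model with statsmodels, where all column names and the simulated bias are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 100
subject = np.repeat(np.arange(n // 2), 2)  # one CBCT and one KVX shift per patient
method = np.tile(["CBCT", "KVX"], n // 2)
shift_z = rng.normal(0, 2, size=n) + (method == "KVX") * 0.7  # injected bias

df = pd.DataFrame({"subject": subject, "method": method, "shift_z": shift_z})
# Random intercept per patient; the fixed effect of method captures bias
fit = smf.mixedlm("shift_z ~ method", df, groups=df["subject"]).fit()
print(fit.summary())
```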

  15. The Heating Curve Adjustment Method

    NARCIS (Netherlands)

    Kornaat, W.; Peitsman, H.C.

    1995-01-01

    In apartment buildings with a collective heating system usually a weather compensator is used for controlling the heat delivery to the various apartments. With this weather compensator the supply water temperature to the apartments is regulated depending on the outside air temperature. With
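
    The record is truncated, but for context, a generic weather-compensated heating curve maps the outside air temperature to a supply-water setpoint; a minimal sketch in which the linear form, slope, offset and limits are all assumptions, not the report's actual adjustment method:

```python
def supply_setpoint(t_outside, slope=1.4, offset=35.0,
                    t_min=25.0, t_max=80.0):
    """Generic linear heating curve: the colder it is outside,
    the hotter the supply water, clipped to plant limits."""
    setpoint = offset + slope * (20.0 - t_outside)
    return max(t_min, min(t_max, setpoint))

for t_out in (-10, 0, 10, 20):
    print(f"outside {t_out:+3d} C -> supply {supply_setpoint(t_out):.1f} C")
```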

  16. Description of comparison method for evaluating spatial or temporal homogeneity of environmental monitoring data

    International Nuclear Information System (INIS)

    Mecozzi, M.; Cicero, A.M.

    1995-01-01

    In this paper a comparison method to verify the homogeneity/inhomogeneity of environmental monitoring data is described. The comparison method is based on the simultaneous application of three statistical tests: one-way ANOVA, Kruskal-Wallis and one-way IANOVA. Robust tests such as IANOVA and Kruskal-Wallis can be more efficient than the usual ANOVA methods because they are resistant to the presence of outliers and to divergences from the normal distribution of the data. The study shows that a result concerning the presence or absence of homogeneity in the data set is considered validated when it is confirmed by at least two of the tests
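
    Two of the three tests are available directly in SciPy (one-way IANOVA is not); a minimal sketch of applying them simultaneously to stand-in monitoring data from three sites:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# Stand-in: a monitored variable at three stations (groups to test for homogeneity)
site_a = rng.normal(10.0, 2.0, size=30)
site_b = rng.normal(10.5, 2.0, size=30)
site_c = rng.normal(13.0, 2.0, size=30)

f_stat, p_anova = stats.f_oneway(site_a, site_b, site_c)  # parametric
h_stat, p_kw = stats.kruskal(site_a, site_b, site_c)      # rank-based, robust
print(f"one-way ANOVA:  p = {p_anova:.4f}")
print(f"Kruskal-Wallis: p = {p_kw:.4f}")
# Following the paper's rule, declare inhomogeneity only when at least
# two of the tests agree on rejecting homogeneity.
```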

  17. A comparison of four gravimetric fine particle sampling methods.

    Science.gov (United States)

    Yanosky, J D; MacIntosh, D L

    2001-06-01

    A study was conducted to compare four gravimetric methods of measuring fine particle (PM2.5) concentrations in air: the BGI, Inc. PQ200 Federal Reference Method PM2.5 (FRM) sampler; the Harvard-Marple Impactor (HI); the BGI, Inc. GK2.05 KTL Respirable/Thoracic Cyclone (KTL); and the AirMetrics MiniVol (MiniVol). Pairs of FRM, HI, and KTL samplers and one MiniVol sampler were collocated and 24-hr integrated PM2.5 samples were collected on 21 days from January 6 through April 9, 2000. The mean and standard deviation of PM2.5 levels from the FRM samplers were 13.6 and 6.8 microg/m3, respectively. Significant systematic bias was found between mean concentrations from the FRM and the MiniVol (1.14 microg/m3, p = 0.0007), the HI and the MiniVol (0.85 microg/m3, p = 0.0048), and the KTL and the MiniVol (1.23 microg/m3, p = 0.0078) according to paired t test analyses. Linear regression on all pairwise combinations of the sampler types was used to evaluate measurements made by the samplers. None of the regression intercepts was significantly different from 0, and only two of the regression slopes were significantly different from 1, that for the FRM and the MiniVol [beta1 = 0.91, 95% CI (0.83-0.99)] and that for the KTL and the MiniVol [beta1 = 0.88, 95% CI (0.78-0.98)]. Regression R2 terms were 0.96 or greater between all pairs of samplers, and regression root mean square error terms (RMSE) were 1.65 microg/m3 or less. These results suggest that the MiniVol will underestimate measurements made by the FRM, the HI, and the KTL by an amount proportional to PM2.5 concentration. Nonetheless, these results indicate that all of the sampler types are comparable if approximately 10% variation on the mean levels and on individual measurement levels is considered acceptable and the actual concentration is within the range of this study (5-35 microg/m3).
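
    A minimal sketch of the two analyses reported above, a paired t-test for systematic bias and an ordinary least-squares regression of one sampler on another, on stand-in collocated PM2.5 data (the simulated slope and noise are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Stand-in collocated 24-hr PM2.5 samples (microg/m3) from two sampler types
frm = rng.uniform(5, 35, size=21)
minivol = 0.91 * frm + rng.normal(0, 1.0, size=21)

t_stat, p_paired = stats.ttest_rel(frm, minivol)  # test for systematic bias
res = stats.linregress(frm, minivol)              # slope/intercept comparison
print(f"paired t-test: p = {p_paired:.4f}")
print(f"slope = {res.slope:.2f}, intercept = {res.intercept:.2f}, "
      f"R^2 = {res.rvalue ** 2:.2f}")
```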

  18. WTA estimates using the method of paired comparison: tests of robustness

    Science.gov (United States)

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to the sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  19. Environmental risk comparisons with internal methods of UST leak detection

    International Nuclear Information System (INIS)

    Durgin, P.B.

    1993-01-01

    The past five years have seen a variety of advances in how leaks can be detected from within underground storage tanks. Any leak-detection approach employed within a storage tank must be conducted at specific time intervals and meet certain leak-rate criteria according to federal and state regulations. Nevertheless, the potential environmental consequences of leak-detection approaches differ widely. Internal, volumetric UST monitoring techniques that have developed over time include: (1) inventory control with stick measurements, (2) precision tank testing, (3) automatic tank gauging (ATG), (4) statistical inventory reconciliation (SIR), and (5) statistical techniques with automatic tank gauging. An ATG offers the advantage of precise data, but measured over only a brief period. On the other hand, stick data has less precision, but when combined with SIR over extended periods it too can detect low leak rates. Graphs demonstrate the comparable amounts of fuel that can leak out of a tank before being detected by these techniques. The results indicate that annual tank testing has the greatest potential for large volumes of fuel leaking without detection, while new statistical approaches with an ATG have the least potential. The environmental implications of the volumes of fuel leaked prior to detection are site specific. For example, if a storage tank is surrounded by a high water table and sits in a sole-source aquifer, even small leaks may cause problems. The user must also consider regulatory risks. The level of environmental and regulatory risk should influence selection of the UST leak-detection method

  20. Comparison of RF spectrum prediction methods for dynamic spectrum access

    Science.gov (United States)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
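
    A minimal sketch of the simulation-and-prediction setup described above: channel occupancy generated as an alternating renewal process with exponential holding times, and a small MLP predicting the next channel state from a window of past states. All sizes and parameters are assumptions, and the MLP stands in for the paper's full HMM/MLP/RNN comparison:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(12)

# Alternating renewal process: busy/idle periods with exponential holding times
states, s = [], 0
while len(states) < 20000:
    states.extend([s] * max(1, int(rng.exponential(5.0))))
    s = 1 - s
states = np.array(states[:20000])

# Supervised framing: predict the next channel state from W past states
W = 10
X = np.lib.stride_tricks.sliding_window_view(states[:-1], W)
y = states[W:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"next-state prediction accuracy: {mlp.score(X_te, y_te):.3f}")
```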