WorldWideScience

Sample records for comparison adjustment method

  1. Comparison of different methods for liquid level adjustment in tank prover calibration

    International Nuclear Information System (INIS)

    Garcia, D A; Farias, E C; Gabriel, P C; Aquino, M H; Gomes, R S E; Aibe, V Y

    2015-01-01

    The adjustment of the liquid level during the calibration of fixed-volume tank provers is normally done by overfilling, but it can be done in different ways. In this article four level adjustment techniques are compared: plate, pipette, ruler and overfill adjustment. The plate and pipette methods showed good agreement with the tank's nominal volume and the lowest uncertainty among the tested methods.

  2. An adjusted probability method for the identification of sociometric status in classrooms

    NARCIS (Netherlands)

    García Bacete, F.J.; Cillessen, A.H.N.

    2017-01-01

    Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of each sociometric group, the sources for discrepant classifications between methods, the behavioral profiles of discrepant and consistent cases between methods, and age differences.

  3. A comparison of methods to adjust for continuous covariates in the analysis of randomised trials

    Directory of Open Access Journals (Sweden)

    Brennan C. Kahan

    2016-04-01

    Background Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power, and potentially incorrect conclusions regarding treatment efficacy. Methods We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. Results Methods which kept covariates as continuous typically had higher power than methods which used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. Conclusions For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
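
    As a concrete illustration of the trade-off described above, the sketch below (my own Python, using statsmodels with patsy's natural cubic spline basis cr(), a close relative of the restricted cubic splines assessed in the study; the simulated data are hypothetical, not the trial datasets) contrasts linear adjustment with spline adjustment when the true covariate-outcome association is non-linear:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.uniform(-2, 2, n)                    # continuous baseline covariate
    treat = rng.integers(0, 2, n)                # 1:1 randomisation
    y = 0.5 * treat + np.sin(1.5 * x) + rng.normal(0, 1, n)   # non-linear association

    df = pd.DataFrame({"y": y, "treat": treat, "x": x})

    linear = smf.ols("y ~ treat + x", data=df).fit()            # assumes linearity
    spline = smf.ols("y ~ treat + cr(x, df=4)", data=df).fit()  # natural cubic spline

    for name, fit in [("linear", linear), ("spline", spline)]:
        print(name, round(fit.params["treat"], 3), round(fit.bse["treat"], 3))
    ```

    With a non-linear truth, the spline adjustment typically estimates the treatment effect with a smaller standard error than the linear adjustment, mirroring the power results reported in the abstract.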

  4. The Impact of Adjustment for Socioeconomic Status on Comparisons of Cancer Incidence between Two European Countries

    International Nuclear Information System (INIS)

    Donnelly, D. W.; Gavin, A.; Hegarty, A.

    2013-01-01

    Cancer incidence rates vary considerably between countries and by socioeconomic status (SES). We investigate the impact of SES upon the relative cancer risk in two neighbouring countries. Methods. Data on 229,824 cases for 16 cancers diagnosed in 1995-2007 were extracted from the cancer registries in Northern Ireland (NI) and Republic of Ireland (RoI). Cancers in the two countries were compared using incidence rate ratios (IRRs) adjusted for age and age plus area-based SES. Results. Adjusting for SES in addition to age had a considerable impact on NI/RoI comparisons for cancers strongly related to SES. Before SES adjustment, lung cancer incidence rates were 11% higher for males and 7% higher for females in NI, while after adjustment, the IRR was not statistically significant. Cervical cancer rates were lower in NI than in RoI after adjustment for age (IRR: 0.90 (0.84-0.97)), with this difference increasing after adjustment for SES (IRR: 0.85 (0.79-0.92)). For cancers with a weak or nonexistent relationship to SES, adjustment for SES made little difference to the IRR. Conclusion. Socioeconomic factors explain some international variations but also obscure other crucial differences; thus, adjustment for these factors should not become part of international comparisons.

  5. The Impact of Adjustment for Socioeconomic Status on Comparisons of Cancer Incidence between Two European Countries

    Directory of Open Access Journals (Sweden)

    David W. Donnelly

    2013-01-01

    Background. Cancer incidence rates vary considerably between countries and by socioeconomic status (SES). We investigate the impact of SES upon the relative cancer risk in two neighbouring countries. Methods. Data on 229,824 cases for 16 cancers diagnosed in 1995–2007 were extracted from the cancer registries in Northern Ireland (NI) and Republic of Ireland (RoI). Cancers in the two countries were compared using incidence rate ratios (IRRs) adjusted for age and age plus area-based SES. Results. Adjusting for SES in addition to age had a considerable impact on NI/RoI comparisons for cancers strongly related to SES. Before SES adjustment, lung cancer incidence rates were 11% higher for males and 7% higher for females in NI, while after adjustment, the IRR was not statistically significant. Cervical cancer rates were lower in NI than in RoI after adjustment for age (IRR: 0.90 (0.84–0.97)), with this difference increasing after adjustment for SES (IRR: 0.85 (0.79–0.92)). For cancers with a weak or nonexistent relationship to SES, adjustment for SES made little difference to the IRR. Conclusion. Socioeconomic factors explain some international variations but also obscure other crucial differences; thus, adjustment for these factors should not become part of international comparisons.

  6. Empirical comparison of four baseline covariate adjustment methods in analysis of continuous outcomes in randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhang S

    2014-07-01

    Shiyuan Zhang, James Paul, Manyat Nantha-Aree, Norman Buckley, Uswa Shahzad, Ji Cheng, Justin DeBeer, Mitchell Winemaker, David Wismer, Dinshaw Punthakee, Victoria Avram, Lehana Thabane (Department of Clinical Epidemiology and Biostatistics and Department of Anesthesia, McMaster University, Hamilton, ON, Canada; Biostatistics Unit/Centre for Evaluation of Medicines, St Joseph's Healthcare Hamilton; Population Health Research Institute, Hamilton Health Sciences/McMaster University; Department of Surgery, Division of Orthopaedics, McMaster University). Background: Although seemingly straightforward, the statistical comparison of a continuous variable in a randomized controlled trial that has both a pre- and posttreatment score presents an interesting challenge for trialists. We present here an empirical application of four statistical methods (posttreatment scores with analysis of variance, analysis of covariance, change in scores, and percent change in scores), using data from a randomized controlled trial of postoperative pain in patients following total joint arthroplasty (the Morphine COnsumption in Joint Replacement Patients, With and Without GaBapentin Treatment, a RandomIzed ControlLEd Study [MOBILE] trial). Methods: Analysis of covariance (ANCOVA) was used to adjust for baseline measures and to provide an unbiased estimate of the mean group difference of the 1-year postoperative knee flexion scores in knee arthroplasty patients. Robustness tests were done by comparing ANCOVA with three comparative methods: the posttreatment scores, change in scores, and percentage change from baseline. Results: All four methods showed a similar direction of effect; however, ANCOVA (-3.9; 95% confidence interval [CI]: -9.5, 1.6; P=0.15) and the posttreatment score (-4.3; 95% CI: -9.8, 1.2; P=0.12) method provided the highest precision of estimate compared with the change score (-3.0; 95% CI: -9.9, 3.8; P=0
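
    The four estimators differ only in how the baseline score enters the model; a minimal sketch (simulated data and hypothetical column names, not the MOBILE data) using statsmodels:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 300
    base = rng.normal(100, 15, n)                  # baseline knee flexion score
    treat = rng.integers(0, 2, n)                  # randomised group
    post = 40 + 0.6 * base - 2.0 * treat + rng.normal(0, 10, n)
    df = pd.DataFrame({"base": base, "treat": treat, "post": post})
    df["change"] = df["post"] - df["base"]
    df["pct_change"] = 100 * df["change"] / df["base"]

    models = {
        "posttreatment": smf.ols("post ~ treat", data=df).fit(),          # ANOVA on post scores
        "ANCOVA":        smf.ols("post ~ treat + base", data=df).fit(),   # baseline as covariate
        "change":        smf.ols("change ~ treat", data=df).fit(),
        "pct change":    smf.ols("pct_change ~ treat", data=df).fit(),
    }
    for name, m in models.items():
        print(f"{name:14s} diff={m.params['treat']:6.2f}  SE={m.bse['treat']:5.2f}")
    ```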

  7. Exploring methods for comparing the real-world effectiveness of treatments for osteoporosis: adjusted direct comparisons versus using patients as their own control.

    Science.gov (United States)

    Karlsson, Linda; Mesterton, Johan; Tepie, Maurille Feudjo; Intorcia, Michele; Overbeek, Jetty; Ström, Oskar

    2017-09-21

    Using Swedish and Dutch registry data for women initiating bisphosphonates, we evaluated two methods of comparing the real-world effectiveness of osteoporosis treatments that attempt to adjust for differences in patient baseline characteristics. Each method has advantages and disadvantages; both are potential complements to clinical trial analyses. We evaluated methods of comparing the real-world effectiveness of osteoporosis treatments that attempt to adjust for both observed and unobserved confounding. Swedish and Dutch registry data for women initiating zoledronate or oral bisphosphonates (OBPs; alendronate/risedronate) were used; the primary outcome was fracture. In adjusted direct comparisons (ADCs), regression and matching techniques were used to account for baseline differences in known risk factors for fracture (e.g., age, previous fracture, comorbidities). In an own-control analysis (OCA), for each treatment, fracture incidence in the first 90 days following treatment initiation (the baseline risk period) was compared with fracture incidence in the 1-year period starting 91 days after treatment initiation (the treatment exposure period). In total, 1196 and 149 women initiating zoledronate and 14,764 and 25,058 initiating OBPs were eligible in the Swedish and Dutch registries, respectively. Owing to the small Dutch zoledronate sample, only the Swedish data were used to compare fracture incidences between treatment groups. ADCs showed a numerically higher fracture incidence in the zoledronate than in the OBPs group (hazard ratio 1.09-1.21; not statistically significant, p > 0.05). For both treatment groups, OCA showed a higher fracture incidence in the baseline risk period than in the treatment exposure period, indicating a treatment effect. OCA showed a similar or greater effect in the zoledronate group compared with the OBPs group. ADC and OCA each possesses advantages and disadvantages. Combining both methods may provide an estimate of real-world effectiveness.
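
    The own-control analysis (OCA) described above compares each cohort with itself across two time windows. A simplified sketch of that calculation (my own; it ignores censoring and competing risks, which the study's analysis would handle):

    ```python
    import numpy as np

    def own_control_ratio(days_to_fracture):
        """Own-control analysis: fracture incidence in days 0-90 after initiation
        (baseline risk period) versus days 91-455 (treatment exposure period).
        `days_to_fracture` is each woman's first fracture time in days since
        initiation, or np.inf if none. Simplified: censoring is ignored."""
        d = np.asarray(days_to_fracture, dtype=float)
        base_events = np.sum(d <= 90)
        base_years = np.sum(np.minimum(d, 90)) / 365.25       # person-time, days 0-90
        exp_events = np.sum((d > 90) & (d <= 455))
        exp_years = np.sum(np.clip(d - 90, 0, 365)) / 365.25  # person-time, days 91-455
        # ratio > 1 indicates higher incidence before the treatment takes effect
        return (base_events / base_years) / (exp_events / exp_years)
    ```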

  8. How does social comparison within a self-help group influence adjustment to chronic illness? A longitudinal study.

    Science.gov (United States)

    Dibb, Bridget; Yardley, Lucy

    2006-09-01

    Despite the growing popularity of self-help groups for people with chronic illness, there has been surprisingly little research into how these may support adjustment to illness. This study investigated the role that social comparison, occurring within a self-help group, may play in adjustment to chronic illness. A model of adjustment based on control process theory and response shift theory was tested to determine whether social comparisons predicted adjustment after controlling for the catalyst for adjustment (disease severity) and antecedents (demographic and psychological factors). A sample of 301 people with Ménière's disease who were members of the Ménière's Society UK completed questionnaires at baseline and 10-month follow-up assessing adjustment, defined for this study as functional and goal-oriented quality of life. At baseline, they also completed measures of the predictor variables, i.e. the antecedents (age, sex, living circumstances, duration of self-help group membership, self-esteem, optimism and perceived control over illness), the catalyst (severity of vertigo, tinnitus, hearing loss and fullness in the ear) and mechanisms of social comparison within the self-help group. The social comparison variables included the extent to which self-help group resources were used, and whether reading about other members' experiences induced positive or negative feelings. Cross-sectional results showed that positive social comparison was indeed associated with better adjustment after controlling for all the other baseline variables, while negative social comparison was associated with worse adjustment. However, greater levels of social comparison at baseline were associated with a deteriorating quality of life over the 10-month follow-up period. Alternative explanations for these findings are discussed.

  9. Direct comparison of risk-adjusted and non-risk-adjusted CUSUM analyses of coronary artery bypass surgery outcomes.

    Science.gov (United States)

    Novick, Richard J; Fox, Stephanie A; Stitt, Larry W; Forbes, Thomas L; Steiner, Stefan

    2006-08-01

    We previously applied non-risk-adjusted cumulative sum methods to analyze coronary bypass outcomes. The objective of this study was to assess the incremental advantage of risk-adjusted cumulative sum methods in this setting. Prospective data were collected in 793 consecutive patients who underwent coronary bypass grafting performed by a single surgeon during a period of 5 years. The composite occurrence of an "adverse outcome" included mortality or any of 10 major complications. An institutional logistic regression model for adverse outcome was developed by using 2608 contemporaneous patients undergoing coronary bypass. The predicted risk of adverse outcome in each of the surgeon's 793 patients was then calculated. A risk-adjusted cumulative sum curve was then generated after specifying control limits and odds ratio. This risk-adjusted curve was compared with the non-risk-adjusted cumulative sum curve, and the clinical significance of this difference was assessed. The surgeon's adverse outcome rate was 96 of 793 (12.1%) versus 270 of 1815 (14.9%) for all the other institution's surgeons combined (P = .06). The non-risk-adjusted curve reached below the lower control limit, signifying excellent outcomes between cases 164 and 313, 323 and 407, and 667 and 793, but transgressed the upper limit between cases 461 and 478. The risk-adjusted cumulative sum curve never transgressed the upper control limit, signifying that cases preceding and including 461 to 478 were at an increased predicted risk. Furthermore, if the risk-adjusted cumulative sum curve was reset to zero whenever a control limit was reached, it still signaled a decrease in adverse outcome at 166, 653, and 782 cases. Risk-adjusted cumulative sum techniques provide incremental advantages over non-risk-adjusted methods by not signaling a decrement in performance when preoperative patient risk is high.
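
    The risk-adjusted CUSUM referred to here is commonly computed with the log-likelihood-ratio weights of Steiner-type charts; a sketch under assumed parameters (the odds ratio and control limit h below are illustrative, not the paper's values):

    ```python
    import numpy as np

    def risk_adjusted_cusum(y, p, odds_ratio=2.0, h=4.5):
        """Steiner-type risk-adjusted CUSUM for detecting a shift to the given
        odds ratio. y: 0/1 adverse outcomes in case order; p: each patient's
        predicted risk from the institutional logistic model; h: control limit
        (illustrative). Resets to zero after a signal, as in the study."""
        s, path, signals = 0.0, [], []
        for i, (yi, pi) in enumerate(zip(y, p)):
            denom = 1.0 - pi + odds_ratio * pi
            w = np.log(odds_ratio / denom) if yi else -np.log(denom)
            s = max(0.0, s + w)
            path.append(s)
            if s > h:
                signals.append(i)
                s = 0.0
        return np.array(path), signals
    ```

    Because each patient's weight depends on the predicted risk, a run of high-risk patients with adverse outcomes barely moves the risk-adjusted chart, which is exactly why it did not signal where the non-risk-adjusted curve did.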

  10. Evaluation of trauma care using TRISS method: the role of adjusted misclassification rate and adjusted w-statistic.

    Science.gov (United States)

    Llullaku, Sadik S; Hyseni, Nexhmi Sh; Bytyçi, Cen I; Rexhepi, Sylejman K

    2009-01-15

    Major trauma is a leading cause of death worldwide. Evaluation of trauma care using the Trauma and Injury Severity Score (TRISS) method focuses on trauma outcome (deaths and survivors). The TRISS misclassification rate is used for testing the TRISS method. Calculating the w-statistic, as a difference between observed and TRISS-expected survivors, we compare our trauma care results with the TRISS standard. The aim of this study is to analyze the interaction between the misclassification rate and the w-statistic and to adjust these parameters to be closer to the truth. Analysis of the components of the TRISS misclassification rate and w-statistic and actual trauma outcome. The false negative (FN) component (deaths unexpected by the TRISS method) has two parts: preventable (Pd) and non-preventable (nonPd) trauma deaths. Pd represents inappropriate trauma care by an institution; non-preventable trauma deaths represent errors in the TRISS method. Removing patients with preventable trauma deaths we get an adjusted misclassification rate: (FP + FN - Pd)/N or (b + c - Pd)/N. Subtracting nonPd from the FN value in the w-statistic formula we get an adjusted w-statistic: [FP - (FN - nonPd)]/N, that is (FP - Pd)/N, or (b - Pd)/N. Because the adjusted formulas clean the method from inappropriate trauma care, and clean trauma care from the method's error, the TRISS adjusted misclassification rate and adjusted w-statistic give more realistic results and may be used in research on trauma outcome.
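
    The abstract's adjusted formulas translate directly into code; a small helper (my own, for illustration) computes both quantities from the counts:

    ```python
    def adjusted_triss_stats(fp, fn, n, preventable):
        """Adjusted misclassification rate and w-statistic from the formulas
        above. fn (false negatives, unexpected deaths) splits into preventable
        (Pd) and non-preventable (nonPd = fn - Pd) deaths; fp = b and fn = c
        in the 2x2 table notation."""
        non_pd = fn - preventable
        adj_misclassification = (fp + fn - preventable) / n   # (FP + FN - Pd)/N
        adj_w = (fp - (fn - non_pd)) / n                      # = (FP - Pd)/N
        return adj_misclassification, adj_w
    ```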

  11. Evaluation of trauma care using TRISS method: the role of adjusted misclassification rate and adjusted w-statistic

    Directory of Open Access Journals (Sweden)

    Bytyçi Cen I

    2009-01-01

    Background Major trauma is a leading cause of death worldwide. Evaluation of trauma care using the Trauma and Injury Severity Score (TRISS) method focuses on trauma outcome (deaths and survivors). The TRISS misclassification rate is used for testing the TRISS method. Calculating the w-statistic, as a difference between observed and TRISS-expected survivors, we compare our trauma care results with the TRISS standard. Aim The aim of this study is to analyze the interaction between the misclassification rate and the w-statistic and to adjust these parameters to be closer to the truth. Materials and methods Analysis of the components of the TRISS misclassification rate and w-statistic and actual trauma outcome. Results The false negative (FN) component (deaths unexpected by the TRISS method) has two parts: preventable (Pd) and non-preventable (nonPd) trauma deaths. Pd represents inappropriate trauma care by an institution; non-preventable trauma deaths represent errors in the TRISS method. Removing patients with preventable trauma deaths we get an adjusted misclassification rate: (FP + FN - Pd)/N or (b + c - Pd)/N. Subtracting nonPd from the FN value in the w-statistic formula we get an adjusted w-statistic: [FP - (FN - nonPd)]/N, that is (FP - Pd)/N, or (b - Pd)/N. Conclusion Because the adjusted formulas clean the method from inappropriate trauma care, and clean trauma care from the method's error, the TRISS adjusted misclassification rate and adjusted w-statistic give more realistic results and may be used in research on trauma outcome.

  12. An Adjusted Probability Method for the Identification of Sociometric Status in Classrooms

    Directory of Open Access Journals (Sweden)

    Francisco J. García Bacete

    2017-10-01

    Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of each sociometric group, the sources for discrepant classifications between methods, the behavioral profiles of discrepant and consistent cases between methods, and age differences. Method: We compared the GB adjusted probability method with the standard score model proposed by Coie and Dodge (CD) and the probability score model proposed by Newcomb and Bukowski (NB). The GB method is an adaptation of the NB method: cutoff scores are derived from the distribution of raw liked-most and liked-least scores in each classroom instead of using fixed and absolute scores as the NB method does. The criteria for neglected status are also modified by the GB method. Participants were 569 children (45% girls) from 23 elementary school classrooms (13 Grades 1–2, 10 Grades 5–6). Results: We found agreement as well as differences between the three methods. The CD method yielded discrepancies in the classifications because of its dependence on z-scores and composite dimensions. The NB method was less optimal in the validation of the behavioral characteristics of the sociometric groups, because of its fixed cutoffs for identifying preferred, rejected, and controversial children, and because it does not differentiate between positive and negative nominations for neglected children. The GB method addressed some of the limitations of the other two methods. It improved the classification of neglected students, as well as of discrepant cases of the preferred, rejected, and controversial groups. Agreement between methods was higher with the oldest children. Conclusion: GB is a valid sociometric method as evidenced by the behavior profiles of the sociometric status groups identified with this method.

  13. Inter-provider comparison of patient-reported outcomes: developing an adjustment to account for differences in patient case mix.

    Science.gov (United States)

    Nuttall, David; Parkin, David; Devlin, Nancy

    2015-01-01

    This paper describes the development of a methodology for the case-mix adjustment of patient-reported outcome measures (PROMs) data permitting the comparison of outcomes between providers on a like-for-like basis. Statistical models that take account of provider-specific effects form the basis of the proposed case-mix adjustment methodology. Indirect standardisation provides a transparent means of case mix adjusting the PROMs data, which are updated on a monthly basis. Recently published PROMs data for patients undergoing unilateral knee replacement are used to estimate empirical models and to demonstrate the application of the proposed case-mix adjustment methodology in practice. The results are illustrative and are used to highlight a number of theoretical and empirical issues that warrant further exploration. For example, because of differences between PROMs instruments, case-mix adjustment methodologies may require instrument-specific approaches. A number of key assumptions are made in estimating the empirical models, which could be open to challenge. The covariates of post-operative health status could be expanded, and alternative econometric methods could be employed. © 2013 Crown copyright.

  14. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Methods for geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test-fields with a well-known pattern, which are observed from different directions. The parameters of a distortion model are calculated using bundle block adjustment algorithms. This method works well for short focal lengths, but is essentially more problematic to use with large focal lengths, which would require very large test-fields and surrounding space. To overcome this problem, there is another common calibration method used in remote sensing that employs measurements using a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of a well-known pattern. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis. The resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the gained distortion parameters as fixed input into the calibration algorithms and only adjusting the exterior orientation. The RMS (root mean square) of the remaining image coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods. Nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.

  15. Comparative Efficacy of Daratumumab Monotherapy and Pomalidomide Plus Low-Dose Dexamethasone in the Treatment of Multiple Myeloma: A Matching Adjusted Indirect Comparison.

    Science.gov (United States)

    Van Sanden, Suzy; Ito, Tetsuro; Diels, Joris; Vogel, Martin; Belch, Andrew; Oriol, Albert

    2018-03-01

    Daratumumab (a human CD38-directed monoclonal antibody) and pomalidomide (an immunomodulatory drug) plus dexamethasone are both relatively new treatment options for patients with heavily pretreated multiple myeloma. A matching adjusted indirect comparison (MAIC) was used to compare absolute treatment effects of daratumumab versus pomalidomide + low-dose dexamethasone (LoDex; 40 mg) on overall survival (OS), while adjusting for differences between the trial populations. The MAIC method reduces the risk of bias associated with naïve indirect comparisons. Data from 148 patients receiving daratumumab (16 mg/kg), pooled from the GEN501 and SIRIUS studies, were compared separately with data from patients receiving pomalidomide + LoDex in the MM-003 and STRATUS studies. The MAIC-adjusted hazard ratio (HR) for OS of daratumumab versus pomalidomide + LoDex was 0.56 (95% confidence interval [CI], 0.38-0.83; p = .0041) for MM-003 and 0.51 (95% CI, 0.37-0.69; p < .0001) for STRATUS. The treatment benefit was even more pronounced when the daratumumab population was restricted to pomalidomide-naïve patients (MM-003: HR, 0.33; 95% CI, 0.17-0.66; p = .0017; STRATUS: HR, 0.41; 95% CI, 0.21-0.79; p = .0082). An additional analysis indicated a consistent trend of the OS benefit across subgroups based on M-protein level reduction (≥50%, ≥25%, and <25%). The MAIC results suggest that daratumumab improves OS compared with pomalidomide + LoDex in patients with heavily pretreated multiple myeloma. This matching adjusted indirect comparison of clinical trial data from four studies analyzes the survival outcomes of patients with heavily pretreated, relapsed/refractory multiple myeloma who received either daratumumab monotherapy or pomalidomide plus low-dose dexamethasone. Using this method, daratumumab conferred a significant overall survival benefit compared with pomalidomide plus low-dose dexamethasone. In the absence of head-to-head trials, these
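
    MAIC reweights the individual patient data so that weighted baseline characteristics match the aggregate statistics of the comparator trial; a sketch of the standard Signorovitch-style weight estimation (my own code, not the study's):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def maic_weights(X_ipd, target_means):
        """Signorovitch-style MAIC weights. X_ipd: (n, k) array of individual
        patient covariates from the index trial; target_means: the aggregate
        covariate means reported by the comparator trial. The weighted
        covariate means under the returned weights match target_means."""
        Xc = X_ipd - target_means                      # centre at the target means
        res = minimize(lambda a: np.sum(np.exp(Xc @ a)),
                       np.zeros(Xc.shape[1]), method="BFGS")
        w = np.exp(Xc @ res.x)
        return w / w.mean()                            # rescale to mean 1

    # check: np.average(X_ipd, axis=0, weights=maic_weights(X_ipd, target_means))
    # reproduces target_means; the weights then feed a weighted survival model.
    ```

    The first-order condition of the objective forces the weighted covariate means to equal target_means, which is exactly the matching step of MAIC.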

  16. Adjusting Teacher Salaries for the Cost of Living: The Effect on Salary Comparisons and Policy Conclusions

    Science.gov (United States)

    Stoddard, C.

    2005-01-01

    Teaching salaries are commonly adjusted for the cost of living, but this incorrectly accounts for welfare differences across states. Adjusting for area amenities and opportunities, however, produces more accurate salary comparisons. Amenities and opportunities can be measured by the wage premium other workers in a state face. The two methods…

  17. Method for adjusting warp measurements to a different board dimension

    Science.gov (United States)

    William T. Simpson; John R. Shelly

    2000-01-01

    Warp in lumber is a common problem that occurs while lumber is being dried. In research or other testing programs, it is sometimes necessary to compare warp of different species or warp caused by different process variables. If lumber dimensions are not the same, then direct comparisons are not possible, and adjusting warp to a common dimension would be desirable so...

  18. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, December 2016

    International Nuclear Information System (INIS)

    Cabellos, Oscar; Pelloni, Sandro; Ivanov, Evgeny; Sobes, Vladimir; Fukushima, M.; Yokoyama, Kenji; Palmiotti, Giuseppe; Kodeli, Ivo

    2016-12-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the eighth Subgroup 39 meeting, held at the OECD NEA, Boulogne-Billancourt, France, on 1-2 December 2016. It comprises all the available presentations (slides) given by the participants: A - Presentations: Welcome and actions review (Oscar CABELLOS); B - Methods: - Detailed comparison of Progressive Incremental Adjustment (PIA) sequence results involving adjustments of spectral indices and coolant density effects on the basis of the SG33 benchmark (Sandro PELLONI); - ND assessment alternatives: Validation matrix vs XS adjustment (Evgeny IVANOV); - Implementation of Resonance Parameter Sensitivity Coefficients Calculation in CE TSUNAMI-3D (Vladimir SOBES); C - Experiment analysis, sensitivity calculations and benchmarks: - Benchmark tests of ENDF/B-VIII.0 beta 1 using sodium void reactivity worth of FCA-XXVII-1 assembly (M. FUKUSHIMA, Kenji YOKOYAMA); D - Adjustments: - Cross-section adjustment based on JENDL-4.0 using new experiments on the basis of the SG33 benchmark (Kenji YOKOYAMA); - Comparison of adjustment trends with the CIELO evaluation (Sandro PELLONI); - Expanded adjustment in support of CIELO initiative (Giuseppe PALMIOTTI); - First preliminary results of the adjustment exercise using ASPIS Fe88 and SNEAK-7A/7B k_eff and β_eff benchmarks (Ivo KODELI); E - Future actions, deliverables: - Discussion on future of SG39 and possible new subgroup (Giuseppe PALMIOTTI); - WPEC sub-group proposal: Investigation of Covariance Data in

  19. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematic treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  20. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, May 2014

    International Nuclear Information System (INIS)

    Aliberti, G.; Archier, P.; Dunn, M.; Dupont, E.; Hill, I.; Garcia, A.; Hursin, M.; Pelloni, S.; Ivanova, T.; Kodeli, I.; Palmiotti, G.; Salvatores, M.; Touran, N.; Wenming, Wang; Yokoyama, K.

    2014-05-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the second Subgroup meeting, held at the NEA, Issy-les-Moulineaux, France, on 13 May 2014. It comprises a Summary Record of the meeting and all the available presentations (slides) given by the participants: A - Welcome: Review of actions (M. Salvatores); B - Inter-comparison of sensitivity coefficients: 1 - Sensitivity Computation with Monte Carlo Methods (T. Ivanova); 2 - Sensitivity analysis of FLATTOP-Pu (I. Kodeli); 3 - Sensitivity coefficients by means of SERPENT-2 (S. Pelloni); 4 - Demonstration - Database for ICSBEP (DICE) and Database and Analysis Tool for IRPhE (IDAT) (I. Hill); C - Specific new experiments: 1 - PROTEUS FDWR-II (HCLWR) program summary (M. Hursin); 2 - STEK and SEG Experiments (M. Salvatores); 3 - Experiments related to ²³⁵U, ²³⁸U, ⁵⁶Fe and ²³Na (G. Palmiotti); 4 - Validation of Iron Cross Sections against ASPIS Experiments (JEF/DOC-420) (I. Kodeli); 5 - Benchmark analysis of Iron Cross-sections (EFFDOC-1221) (I. Kodeli); 6 - Integral Beta-effective Measurements (K. Yokoyama on behalf of M. Ishikawa); D - Adjustment results: 1 - Impacts of Covariance Data and Interpretation of Adjustment Trends of ADJ2010 (K. Yokoyama); 2 - Revised Recommendations from ADJ2010 Adjustment (K. Yokoyama); 3 - Comparisons and Discussions on Adjustment trends from JEFF (CEA) (P. Archier); 4 - Feedback on CIELO Isotopes from ENDF/B-VII.0 Adjustment (G. Palmiotti); 5 - Demonstration - Plot comparisons of participants' results (E

  1. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Directory of Open Access Journals (Sweden)

    Hirdes John P

    2005-01-01

    Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did

  2. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Science.gov (United States)

    Dalby, Dawn M; Hirdes, John P; Fries, Brant E

    2005-01-01

    Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did substantially affect the

  3. Public Reporting of Primary Care Clinic Quality: Accounting for Sociodemographic Factors in Risk Adjustment and Performance Comparison.

    Science.gov (United States)

    Wholey, Douglas R; Finch, Michael; Kreiger, Rob; Reeves, David

    2018-01-03

    Performance measurement and public reporting are increasingly being used to compare clinic performance. Intended consequences include quality improvement, value-based payment, and consumer choice. Unintended consequences include reducing access for riskier patients and inappropriately labeling some clinics as poor performers, resulting in tampering with stable care processes. Two analytic steps are used to maximize intended and minimize unintended consequences. First, risk adjustment is used to reduce the impact of factors outside providers' control. Second, performance categorization is used to compare clinic performance using risk-adjusted measures. This paper examines the effects of methodological choices, such as risk adjusting for sociodemographic factors in risk adjustment and accounting for patients clustering by clinics in performance categorization, on clinic performance comparison for diabetes care, vascular care, asthma, and colorectal cancer screening. The population includes all patients with commercial and public insurance served by clinics in Minnesota. Although risk adjusting for sociodemographic factors has a significant effect on quality, it does not explain much of the variation in quality. In contrast, taking into account the nesting of patients within clinics in performance categorization has a substantial effect on performance comparison.

  4. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer

    Directory of Open Access Journals (Sweden)

    Xiangqing Huang

    2017-10-01

    A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Adjusting and matching the acceleration-to-current transfer function of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current can be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function is then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factor ranging from 33% smaller to 100% larger are verified experimentally by adjusting the different external coefficients. The static noise of the accelerometer used is compared under conditions with and without the injected current, and the experimental results find no change at the current noise level, which further confirms the validity of the presented method.
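
    A toy model of why injecting a current proportional to the feedback current rescales the scale factor (my own reading of the abstract; the proportional-injection model is an assumption, though it reproduces the quoted 33%-smaller to 100%-larger range for injection ratios of +/-0.5):

    ```python
    # Force balance: k_t * I_coil = m * a with I_coil = (1 + beta) * I_out gives
    # I_out = S0 / (1 + beta) * a, so the measured acceleration-to-current
    # scale factor is rescaled by 1 / (1 + beta).
    def adjusted_scale_factor(s0, beta):
        """s0: nominal scale factor (A per m/s^2); beta: injected/output ratio."""
        return s0 / (1.0 + beta)

    print(adjusted_scale_factor(1.0, 0.5))   # 0.667 -> about 33% smaller
    print(adjusted_scale_factor(1.0, -0.5))  # 2.0   -> 100% larger
    ```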

  5. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer.

    Science.gov (United States)

    Huang, Xiangqing; Deng, Zhongguang; Xie, Yafei; Li, Zhu; Fan, Ji; Tu, Liangcheng

    2017-10-27

    A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Adjusting and matching the acceleration-to-current transfer function of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current can be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function is then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factor ranging from 33% smaller to 100% larger are verified experimentally by adjusting the different external coefficients. The static noise of the accelerometer used is compared under conditions with and without the injected current, and the experimental results find no change at the current noise level, which further confirms the validity of the presented method.

  6. HIV quality report cards: impact of case-mix adjustment and statistical methods.

    Science.gov (United States)

    Ohl, Michael E; Richardson, Kelly K; Goto, Michihiko; Vaughan-Sarrazin, Mary; Schweizer, Marin L; Perencevich, Eli N

    2014-10-15

    There will be increasing pressure to publicly report and rank the performance of healthcare systems on human immunodeficiency virus (HIV) quality measures. To inform discussion of public reporting, we evaluated the influence of case-mix adjustment when ranking individual care systems on the viral control quality measure. We used data from the Veterans Health Administration (VHA) HIV Clinical Case Registry and administrative databases to estimate case-mix adjusted viral control for 91 local systems caring for 12 368 patients. We compared results using 2 adjustment methods, the observed-to-expected estimator and the risk-standardized ratio. Overall, 10 913 patients (88.2%) achieved viral control (viral load ≤400 copies/mL). Prior to case-mix adjustment, system-level viral control ranged from 51% to 100%. Seventeen (19%) systems were labeled as low outliers (performance significantly below the overall mean) and 11 (12%) as high outliers. Adjustment for case mix (patient demographics, comorbidity, CD4 nadir, time on therapy, and income from VHA administrative databases) reduced the number of low outliers by approximately one-third, but results differed by method. The adjustment model had moderate discrimination (c statistic = 0.66), suggesting potential for unadjusted risk when using administrative data to measure case mix. Case-mix adjustment affects rankings of care systems on the viral control quality measure. Given the sensitivity of rankings to selection of case-mix adjustment methods-and potential for unadjusted risk when using variables limited to current administrative databases-the HIV care community should explore optimal methods for case-mix adjustment before moving forward with public reporting. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
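
    The two estimators compared above can be sketched as follows (simplified; the paper's risk-standardised ratio comes from a hierarchical model, which this sketch does not reproduce):

    ```python
    import numpy as np

    def observed_to_expected(y, p):
        """O/E estimator for one care system: observed events over the number
        expected under a case-mix model fitted to all systems pooled.
        y: 0/1 outcomes for the system's patients; p: model-predicted
        probabilities for the same patients."""
        return np.sum(y) / np.sum(p)

    def risk_standardised_rate(y, p, overall_rate):
        """Rescale O/E by the overall mean rate to put it back on a rate scale."""
        return observed_to_expected(y, p) * overall_rate
    ```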

  7. Improved Conjugate Gradient Bundle Adjustment of Dunhuang Wall Painting Images

    Science.gov (United States)

    Hu, K.; Huang, X.; You, H.

    2017-09-01

    Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the structure of the coefficient matrix of the reduced normal equation is banded-bordered, making the solving process of bundle adjustment complex. In this paper, the Conjugate Gradient Bundle Adjustment (CGBA) method is deduced by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix, as well as to accelerate the iteration rate of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method can effectively conquer the ill-conditioned problem of the normal equation and considerably improve the calculation efficiency of bundle adjustment with additional parameters, while maintaining the actual accuracy.
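
    The core numerical idea, preconditioned conjugate gradients on the reduced normal equation, can be sketched with SciPy; here incomplete LU stands in for the paper's improved incomplete Cholesky factorization, and the toy matrix is purely illustrative:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def solve_normal_equation(N, b):
        ilu = spla.spilu(N.tocsc(), drop_tol=1e-4)       # incomplete factorization
        M = spla.LinearOperator(N.shape, ilu.solve)      # preconditioner M ~= N^-1
        x, info = spla.cg(N, b, M=M, maxiter=1000)       # preconditioned CG
        assert info == 0, "CG did not converge"
        return x

    # toy symmetric positive definite system standing in for a reduced normal equation
    A = sp.random(200, 200, density=0.02, random_state=0)
    N = (A @ A.T + 10 * sp.eye(200)).tocsc()
    x = solve_normal_equation(N, np.ones(200))
    ```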

  8. A novel method to adjust efficacy estimates for uptake of other active treatments in long-term clinical trials.

    Directory of Open Access Journals (Sweden)

    John Simes

    2010-01-01

    When rates of uptake of other drugs differ between treatment arms in long-term trials, the true benefit or harm of the treatment may be underestimated. Methods to allow for such contamination have often been limited by failing to preserve the randomization comparisons. In the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) study, patients were randomized to fenofibrate or placebo, but during the trial many started additional drugs, particularly statins, more so in the placebo group. The effects of fenofibrate estimated by intention-to-treat were likely to have been attenuated. We aimed to quantify this effect and to develop a method for use in other long-term trials. We applied efficacies of statins and other cardiovascular drugs from meta-analyses of randomized trials to adjust the effect of fenofibrate in a penalized Cox model. We assumed that future cardiovascular disease events were reduced by an average of 24% by statins, and 20% by a first other major cardiovascular drug. We applied these estimates to each patient who took these drugs for the period they were on them. We also adjusted the analysis by the rate of discontinuing fenofibrate. Among 4,900 placebo patients, average statin use was 16% over five years. Among 4,895 assigned fenofibrate, statin use was 8% and nonuse of fenofibrate was 10%. In placebo patients, use of cardiovascular drugs was 1% to 3% higher. Before adjustment, fenofibrate was associated with an 11% reduction in coronary events (coronary heart disease death or myocardial infarction) (P = 0.16) and an 11% reduction in cardiovascular disease events (P = 0.04). After adjustment, the effects of fenofibrate on coronary events and cardiovascular disease events were 16% (P = 0.06) and 15% (P = 0.008), respectively. This novel application of a penalized Cox model for adjustment of a trial estimate of treatment efficacy incorporates evidence-based estimates for other therapies, preserves comparisons between the

  9. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
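
    A sketch of direct risk standardisation as described (my own code; the bin count and weighting scheme are assumptions, and the sketch presumes every centre has patients in every risk bin):

    ```python
    import numpy as np
    import pandas as pd

    def direct_risk_standardised_rates(risk, event, centre, n_bins=10):
        """Direct risk standardisation: bin patients by model-predicted risk,
        take each centre's event rate within each bin, and average the bin
        rates using the overall population share of each bin as common
        weights, so every centre is judged against the same risk mix."""
        df = pd.DataFrame({"risk": risk, "event": event, "centre": centre})
        df["bin"] = pd.qcut(df["risk"], n_bins, labels=False, duplicates="drop")
        weights = df["bin"].value_counts(normalize=True).sort_index()
        rates = df.groupby(["centre", "bin"])["event"].mean().unstack()
        return (rates * weights).sum(axis=1)         # one DRS rate per centre
    ```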

  10. A Fast and Effective Block Adjustment Method with Big Data

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-02-01

    To deal with multi-source, complex and massive data in photogrammetry, and to address the high memory requirement and low computation efficiency of the irregular normal equation caused by randomly aligned, large-scale datasets, we introduce the preconditioned conjugate gradient method combined with an inexact Newton method to solve the normal equation, which does not have strip characteristics owing to the randomly aligned images. We also use an effective sparse matrix compression format to compress the big normal matrix, and a brand new workflow of bundle adjustment is developed. Our method avoids the direct inversion of the big normal matrix, and the memory requirement of the normal matrix is decreased by the proposed sparse matrix compression format. Combining all these techniques, the proposed method can not only decrease the memory requirement of the normal matrix, but also largely improve the efficiency of bundle adjustment while maintaining the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 15 minutes while achieving sub-pixel accuracy.

  11. Use of the neutron moderation method to adjust concrete mix proportions through the total moisture of the aggregates

    International Nuclear Information System (INIS)

    Howland, J.; Morejon, D.; Simeon, G.; Gracia, R.; Desdin, L.; O'Reilly, V.

    1997-01-01

    The neutron moderation method was applied for the rapid determination of the moisture content of fine and coarse aggregates. The measured moisture values were used to adjust the concrete mix proportions for the total moisture of the aggregates. The results obtained indicate that using this adjustment method makes it possible to reach higher compressive strength values and also reduces the dispersion in concrete production. This method would permit a considerable saving of cement in comparison with the traditional method. (author)

  12. Case-mix adjustment and the comparison of community health center performance on patient experience measures.

    Science.gov (United States)

    Johnson, M Laura; Rodriguez, Hector P; Solorio, M Rosa

    2010-06-01

    To assess the effect of case-mix adjustment on community health center (CHC) performance on patient experience measures. A Medicaid-managed care plan in Washington State collected patient survey data from 33 CHCs over three fiscal quarters during 2007-2008. The survey included three composite patient experience measures (6-month reports) and two overall ratings of care. The analytic sample includes 2,247 adult patients and 2,859 adults reporting for child patients. We compared the relative importance of patient case-mix adjusters by calculating each adjuster's predictive power and variability across CHCs. We then evaluated the impact of case-mix adjustment on the relative ranking of CHCs. Important case-mix adjusters included adult self-reported health status or parent-reported child health status, adult age, and educational attainment. The effects of case-mix adjustment on patient reports and ratings were different in the adult and child samples. Adjusting for race/ethnicity and language had a greater impact on parent reports than adult reports, but it impacted ratings similarly across the samples. The impact of adjustment on composites and ratings was modest, but it affected the relative ranking of CHCs. To ensure equitable comparison of CHC performance on patient experience measures, reports and ratings should be adjusted for adult self-reported health status or parent-reported child health status, adult age, education, race/ethnicity, and survey language. Because of the differential impact of case-mix adjusters for child and adult surveys, initiatives should consider measuring and reporting adult and child scores separately.

  13. IMPROVED CONJUGATE GRADIENT BUNDLE ADJUSTMENT OF DUNHUANG WALL PAINTING IMAGES

    Directory of Open Access Journals (Sweden)

    K. Hu

    2017-09-01

    Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the structure of the coefficient matrix of the reduced normal equation is banded-bordered, making the solving process of bundle adjustment complex. In this paper, the Conjugate Gradient Bundle Adjustment (CGBA) method is deduced by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix, as well as to accelerate the iteration rate of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method can effectively conquer the ill-conditioned problem of the normal equation and considerably improve the calculation efficiency of bundle adjustment with additional parameters, while maintaining the actual accuracy.

  14. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment

    Science.gov (United States)

    O’Brien, Katie M.; Upson, Kristen; Cook, Nancy R.; Weinberg, Clarice R.

    2015-01-01

    Background Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. Objectives We compared adjustment methods, including novel approaches, using simulated case–control data. Methods Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. Results For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. Conclusions To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals. Citation O’Brien KM, Upson K, Cook NR, Weinberg CR. 2016. Environmental chemicals in urine and blood: improving methods for creatinine and lipid adjustment. Environ Health Perspect 124:220–227; http://dx.doi.org/10.1289/ehp.1509693 PMID:26219104
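
    As I read the abstract, the recommended approach for urinary biomarkers combines covariate-adjusted standardisation with creatinine also included as a regression covariate; a sketch under that reading (column names such as age, bmi and hydration are hypothetical, and details of the published method may differ):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def standardise_urinary_biomarker(df):
        """Covariate-adjusted standardisation: predict each subject's expected
        creatinine from covariates, then scale the measured biomarker by the
        observed/expected creatinine ratio to remove dilution variation."""
        cr_fit = smf.ols("np.log(creatinine) ~ age + bmi + hydration", data=df).fit()
        expected = np.exp(cr_fit.fittedvalues)
        df["biomarker_std"] = df["biomarker"] / (df["creatinine"] / expected)
        return df

    # the outcome model then also includes creatinine as a covariate, e.g.
    # smf.logit("case ~ biomarker_std + creatinine + age + bmi", data=df).fit()
    ```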

  15. Comparison of Methods for Adjusting Incorrect Assignments of Items to Subtests: Oblique Multiple Group Method Versus Confirmatory Common Factor Method

    NARCIS (Netherlands)

    Stuive, Ilse; Kiers, Henk A.L.; Timmerman, Marieke E.

    2009-01-01

    A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare

  16. A Review on Methods of Risk Adjustment and their Use in Integrated Healthcare Systems

    Science.gov (United States)

    Juhnke, Christin; Bethge, Susanne

    2016-01-01

    Introduction: Effective risk adjustment is given increasing weight against the background of competitive health insurance systems and vital healthcare systems. The objective of this review was to obtain an overview of existing models of risk adjustment and of the crucial weights used in risk adjustment, and to analyse the predictive performance of selected methods in international healthcare systems. Theory and methods: A comprehensive, systematic literature review on methods of risk adjustment was conducted as an encompassing, interdisciplinary examination of the related disciplines. Results: In general, several distinctions can be made: by risk horizon, by risk factors, or by the combination of indicators included. Within these, a further differentiation into three levels seems reasonable: methods based on mortality risks, methods based on morbidity risks, and those based on information on (self-reported) health status. Conclusions and discussion: The examination of the different methods of risk adjustment showed that the methodology used to adjust risks varies; the models differ greatly in the morbidity indicators they include. The findings of this review can be used in the evaluation of integrated healthcare delivery systems and can be integrated into quality- and patient-oriented reimbursement of care providers in the design of healthcare contracts. PMID:28316544

  17. Modified adjustable suture hang-back recession: Description of technique and comparison with conventional adjustable hang-back recession

    Directory of Open Access Journals (Sweden)

    Siddharth Agrawal

    2017-01-01

    Purpose: This study aims to describe and compare modified hang-back recession with conventional hang-back recession in large-angle comitant exotropia (XT). Methods: A prospective, interventional, double-blinded, randomized study of adult patients (>18 years) undergoing single-eye recession-resection for large-angle (>30 prism diopters) constant comitant XT was conducted between January 2011 and December 2015. Patients in Group A underwent modified hang-back lateral rectus recession with an adjustable knot, while those in Group B underwent conventional hang-back recession with an adjustable knot. Outcome parameters studied were readjustment rate, change in deviation at 6 weeks, complications, and need for resurgery at 6 months. Results: The groups were comparable in terms of age and preoperative deviation. Patients with the modified hang-back (Group A) fared significantly better (P < 0.05) than those with the conventional hang-back (Group B) in terms of a lesser need for adjustment, greater correction in deviation at 6 weeks, and a lesser need for resurgery at 6 months. Conclusion: This modification offers several advantages, significantly reduces the resurgery requirement, and adds no complications.

  18. Comparison of Thermal Properties Measured by Different Methods

    International Nuclear Information System (INIS)

    Sundberg, Jan; Kukkonen, Ilmo; Haelldahl, Lars

    2003-04-01

    one or both methods cannot be excluded. For future investigations, a set of thermal conductivity standard materials should be selected for testing with the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained, and be suitable for making samples of different shapes and volumes adjusted to the different measurement techniques. Because of the large individual variations obtained in the results, comparisons of different methods should continue and should include measurements of the temperature dependence of thermal properties, especially the specific heat, covering the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of the temperature dependence of the present rocks.

  19. Comparison of Thermal Properties Measured by Different Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan [Geo Innova AB, Linkoeping (Sweden); Kukkonen, Ilmo [Geological Survey of Finland, Helsinki (Finland); Haelldahl, Lars [Hot Disk AB, Uppsala (Sweden)

    2003-04-01

    one or both methods cannot be excluded. For future investigations, a set of thermal conductivity standard materials should be selected for testing with the different methods of the laboratories. The material should have thermal properties in the range of typical rocks, be fine-grained, and be suitable for making samples of different shapes and volumes adjusted to the different measurement techniques. Because of the large individual variations obtained in the results, comparisons of different methods should continue and should include measurements of the temperature dependence of thermal properties, especially the specific heat, covering the relevant temperature range of about 0-90 deg C. Further comparisons would add to previous studies of the temperature dependence of the present rocks.

  20. CALCULATION METHODS OF OPTIMAL ADJUSTMENT OF CONTROL SYSTEM THROUGH DISTURBANCE CHANNEL

    Directory of Open Access Journals (Sweden)

    I. M. Golinko

    2014-01-01

    In debugging automatic control systems, great attention is paid to determining the parameters of formulas for the optimal dynamic tuning of regulators, taking the dynamics of the controlled objects into account. In most cases the known formulas are oriented towards designing the control system through the set-point ('input-output') channel, yet in practically all continuous processes the main task of the regulators is the stabilization of output parameters against disturbances. Methods were therefore developed for calculating dynamic tuning parameters that optimize analog and digital regulators with respect to minimizing the effect of disturbances; the degree of detuning and the maximum deviation of the controlled variable are used as criteria. As the optimization of an automatic control system with proportional-plus-reset (PI) controllers on the disturbance channel is a unimodal task, the core optimization algorithm is realized by the Hooke-Jeeves method. For controller optimization on the external disturbance channel, functional dependences were obtained for calculating the dynamic tuning parameters of proportional-plus-reset controllers from the dynamic characteristics of the controlled object. The obtained dependences improve the performance of automatic control on the external disturbance channel and thus the quality of regulation of the transient processes. The calculation formulas provide high accuracy and convenience in use; the suggested method requires no nomographs, which removes subjectivity in determining the dynamic tuning parameters of proportional-plus-reset controllers. The functional dependences can be used to calculate PI controller settings over a wide range of controlled-object dynamic characteristics.
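
    Since the abstract names Hooke-Jeeves as the core optimization algorithm, a minimal, simplified pattern-search sketch follows; the quadratic cost below is a stand-in (a real application would evaluate a disturbance-response criterion for candidate controller settings), and all step parameters are illustrative:

    ```python
    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
        """Simplified Hooke-Jeeves pattern search minimizing f (derivative-free)."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_iter):
            if step < tol:
                break
            # Exploratory moves: probe +/- step along each coordinate.
            y, fy = x.copy(), fx
            for i in range(len(x)):
                for d in (step, -step):
                    trial = y.copy()
                    trial[i] += d
                    ft = f(trial)
                    if ft < fy:
                        y, fy = trial, ft
                        break
            if fy < fx:
                # Pattern move: extrapolate through the improved point.
                xp = y + (y - x)
                fp = f(xp)
                x, fx = (xp, fp) if fp < fy else (y, fy)
            else:
                step *= shrink   # no improvement: refine the mesh
        return x, fx

    # Stand-in cost: quadratic bowl in (Kp, Ti) space with optimum at (1.0, 2.0).
    best, cost = hooke_jeeves(lambda p: (p[0] - 1.0)**2 + (p[1] - 2.0)**2, [0.0, 0.0])
    print(best, cost)
    ```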

  1. Risk adjustment of health-care performance measures in a multinational register-based study: A pragmatic approach to a complicated topic

    Directory of Open Access Journals (Sweden)

    Tron Anders Moger

    2014-03-01

    Objectives: Health-care performance comparisons across countries are gaining popularity. In such comparisons, the risk adjustment methodology plays a key role in making the results meaningful. However, comparisons may be complicated by the fact that not all participating countries are allowed to share their data across borders, meaning that only simple methods are easily used for the risk adjustment. In this study, we develop a pragmatic approach using patient-level register data from Finland, Hungary, Italy, Norway, and Sweden. Methods: Data on acute myocardial infarction patients were gathered from health-care registers in several countries. In addition to unadjusted estimates, we studied the effects of adjusting for age, gender, and a number of comorbidities. The stability of estimates for 90-day mortality and length of stay of the first hospital episode following diagnosis of acute myocardial infarction is studied graphically, using different choices of reference data. Logistic regression models are used for mortality, and negative binomial models are used for length of stay. Results: Results from the sensitivity analysis show that the various models of risk adjustment give similar results for the countries, with some exceptions for Hungary and Italy. Based on the results, in Finland and Hungary, the 90-day mortality after acute myocardial infarction is higher than in Italy, Norway, and Sweden. Conclusion: Health-care registers offer encouraging possibilities for performance measurement and enable the comparison of entire patient populations between countries. Risk adjustment methodology is affected by the availability of data; thus, the building of risk adjustment methodology must be transparent, especially when doing multinational comparative research. In that case, even basic methods of risk adjustment may still be valuable.
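
    A minimal sketch of the kind of logistic risk-adjustment model the abstract describes, fitted to synthetic register data; the country codes, covariates, and effect sizes are illustrative assumptions:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic register extract; variables and sizes are illustrative only.
    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "died90": rng.binomial(1, 0.12, n),
        "age": rng.uniform(40, 90, n),
        "male": rng.binomial(1, 0.6, n),
        "diabetes": rng.binomial(1, 0.2, n),
        "country": rng.choice(["FI", "HU", "IT", "NO", "SE"], n),
    })

    # Risk-adjusted 90-day mortality comparison: country effects adjusted for
    # age, sex and a comorbidity, mirroring the logistic models in the abstract.
    model = smf.logit("died90 ~ C(country, Treatment('SE')) + age + male + diabetes",
                      data=df).fit(disp=0)
    print(model.params)   # log-odds relative to the Swedish reference group
    ```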

  2. Environmental impact from different modes of transport. Method of comparison

    International Nuclear Information System (INIS)

    2002-03-01

    A prerequisite of long-term sustainable development is that activities of various kinds are adjusted to what humans and the natural world can tolerate. Transport is an activity that affects humans and the environment to a very great extent, and in this project several actors within the transport sector have together laid the foundation for the development of a comparative method to be able to compare the environmental impact at the different stages along the transport chain. The method analyses the effects of different transport concepts on the climate, noise levels, human health, acidification, land use and ozone depletion. Within the framework of the method, a calculation model has been created in Excel which acts as a basis for the comparisons. The user can choose to download the model from the Swedish EPA's on-line bookstore or order it on a floppy disk. Neither the method nor the model is as yet fully developed, but our hope is that they can still be used in their present form as a basis and can inspire further efforts and research in the field. In the report, we describe most of these shortcomings, the problems associated with the work, and the existing development potential. This publication should be seen as the first stage in the development of a method of comparison between different modes of transport in non-monetary terms, where there remains a considerable need for further development and amplification.

  3. A Plant Control Technology Using Reinforcement Learning Method with Automatic Reward Adjustment

    Science.gov (United States)

    Eguchi, Toru; Sekiai, Takaaki; Yamada, Akihiro; Shimizu, Satoru; Fukai, Masayuki

    A control technology using Reinforcement Learning (RL) and a Radial Basis Function (RBF) Network has been developed to reduce the environmental load substances exhausted from power and industrial plants. This technology consists of a statistical model using an RBF Network, which estimates the characteristics of plants with respect to environmental load substances, and an RL agent, which learns the control logic for the plants using the statistical model. In this technology, to control plants flexibly it is necessary to design an appropriate reward function for the agent promptly, according to the operating conditions and control goals. Therefore, we propose an automatic reward adjusting method of RL for plant control, which adjusts the reward function automatically using information from the statistical model obtained in its learning process. In simulations, it is confirmed that the proposed method can adjust the reward function adaptively for several test functions and executes robust control of a thermal power plant under changes of operating conditions and control goals.

  4. Analysis of methods to determine the latency of online movement adjustments

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Brenner, E.; Smeets, J.B.J.

    2014-01-01

    When studying online movement adjustments, one of the interesting parameters is their latency. We set out to compare three different methods of determining the latency: the threshold, confidence interval, and extrapolation methods. We simulated sets of movements with different movement times and
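
    Of the three methods compared, the threshold method lends itself to a short sketch; the following is one common reading of it on synthetic trajectories, with the threshold fraction and the signals being illustrative assumptions rather than the authors' parameters:

    ```python
    import numpy as np

    def latency_threshold(t, perturbed, unperturbed, frac=0.05):
        """One common reading of the threshold method: latency is the first time
        the perturbed-minus-unperturbed difference exceeds a fixed fraction of
        its eventual maximum. The fraction is an illustrative assumption."""
        diff = np.abs(perturbed - unperturbed)
        first = np.argmax(diff > frac * diff.max())   # index of first crossing
        return t[first]

    # Synthetic trajectories: the adjustment starts 150 ms into the movement.
    t = np.linspace(0.0, 0.6, 601)
    unperturbed = np.sin(np.pi * t / 0.6)
    perturbed = unperturbed + 0.3 * np.clip(t - 0.15, 0.0, None)
    print(latency_threshold(t, perturbed, unperturbed))   # slightly after 0.15 s
    ```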

  5. Adjustment method for embedded metrology engine in an EM773 series microcontroller.

    Science.gov (United States)

    Blazinšek, Iztok; Kotnik, Bojan; Chowdhury, Amor; Kačič, Zdravko

    2015-09-01

    This paper presents the problems of implementation and adjustment (calibration) of a metrology engine embedded in NXP's EM773 series microcontroller. The metrology engine is used in a smart metering application to collect data about energy utilization and is controlled with the use of metrology engine adjustment (calibration) parameters. The aim of this research is to develop a method which would enable operators to find and verify the optimum parameters to ensure the best possible accuracy. Properly adjusted (calibrated) metrology engines can then be used as a base for a variety of products used in smart and intelligent environments. This paper focuses on the problems encountered in the development, partial automatisation, implementation and verification of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Some adjustments to the human capital and the friction cost methods.

    Science.gov (United States)

    Targoutzidis, Antonis

    2018-03-21

    The cost of lost output is a major component of total cost-of-illness estimates, especially those for the cost of workplace accidents and diseases. The two main methods for estimating this output, namely the human capital and the friction cost method, lead to very different results, particularly for cases of long-term absence, which makes the choice of method a critical dilemma. Two hidden assumptions, one for each method, are identified in this paper: for the human capital method, the assumption that had the accident not happened the individual would have remained alive, healthy and employed until retirement; and for the friction cost method, the assumption that any created vacancy is filled by an unemployed person. Relevant adjustments to compensate for their impact are proposed: (a) to depreciate the estimates of the human capital method for the risks of premature death, disability or unemployment, and (b) to multiply the estimates of the friction cost method by the expected number of job shifts that will be caused by a disability. The impact of these adjustments on the final estimates is very important in terms of magnitude and can lead to better results for each method.
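
    The proposed adjustment (a) can be made concrete with a small worked example; all rates and amounts below are illustrative assumptions, not values from the paper:

    ```python
    # Worked sketch of adjustment (a): discount each future year's lost output by
    # the probability the person would still be alive, healthy and employed.
    # All rates below are illustrative assumptions, not values from the paper.
    annual_output = 40_000        # yearly productive output, in currency units
    years_left = 20               # years until retirement
    discount = 0.03               # financial discount rate
    p_alive, p_healthy, p_employed = 0.995, 0.99, 0.93

    unadjusted = sum(annual_output / (1 + discount) ** y
                     for y in range(1, years_left + 1))
    adjusted = sum(annual_output * (p_alive * p_healthy * p_employed) ** y
                   / (1 + discount) ** y
                   for y in range(1, years_left + 1))
    print(f"human capital, unadjusted: {unadjusted:,.0f}")
    print(f"human capital, adjusted:   {adjusted:,.0f}")
    ```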

  7. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' for marketed drugs. A common limitation of these systems is the large number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the numbers of signals generated from the conventional disproportionality measures before and after the application of the rFDR adjustment method were compared. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 resulted in a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
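
    The robust FDR estimator (rFDR) itself is not spelled out in the abstract; as background, here is the standard Benjamini-Hochberg step-up procedure underlying most FDR-style corrections, shown as a sketch rather than the paper's method:

    ```python
    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Standard Benjamini-Hochberg step-up procedure; shown as background,
        since the robust FDR (rFDR) estimator used in the paper is not
        reproduced here."""
        p = np.asarray(pvals, dtype=float)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= q * np.arange(1, m + 1) / m
        k = np.nonzero(below)[0].max() + 1 if below.any() else 0
        rejected = np.zeros(m, dtype=bool)
        rejected[order[:k]] = True      # reject the k smallest p-values
        return rejected

    print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.52]))
    # -> [ True  True False False False]
    ```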

  8. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Directory of Open Access Journals (Sweden)

    L.-P. Wang

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system

  9. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Science.gov (United States)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system
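
    The mean field bias adjustment used as a baseline in this record reduces to a single multiplicative factor; a minimal sketch, with variable names and values purely illustrative:

    ```python
    import numpy as np

    def mean_field_bias_adjust(radar_field, gauge_vals, radar_at_gauges):
        """Mean field bias adjustment, the baseline the singularity-sensitive
        method is compared against: rescale the whole radar field by the ratio
        of gauge totals to collocated radar totals."""
        bias = np.sum(gauge_vals) / np.sum(radar_at_gauges)
        return bias * radar_field

    # Illustrative values only (mm of rainfall).
    field = np.array([[1.2, 0.8], [2.0, 1.5]])
    print(mean_field_bias_adjust(field, gauge_vals=[5.0, 7.0],
                                 radar_at_gauges=[4.0, 6.0]))   # scaled by 1.2
    ```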

  10. Adjusting the general growth balance method for migration

    OpenAIRE

    Hill, Kenneth; Queiroz, Bernardo

    2010-01-01

    Death distribution methods, proposed for assessing death registration coverage by comparison with census age distributions, assume no net migration. This assumption makes it problematic to apply these methods to sub-national and national populations affected by substantial net migration. In this paper, we propose and explore a two-step process in which the Growth Balance Equation is first used to estimate net migration rates, using a model of age-specific migration, and then it is used to compare the obs...

  11. Lipophilic versus hydrophilic statin therapy for heart failure: a protocol for an adjusted indirect comparison meta-analysis

    Science.gov (United States)

    2013-01-01

    Background Statins are known to reduce cardiovascular morbidity and mortality in primary and secondary prevention studies. Subsequently, a number of nonrandomised studies have shown statins improve clinical outcomes in patients with heart failure (HF). Small randomised controlled trials (RCT) also show improved cardiac function, reduced inflammation and mortality with statins in HF. However, the findings of two large RCTs do not support the evidence provided by previous studies and suggest statins lack beneficial effects in HF. Two meta-analyses have shown statins do not improve survival, whereas two others showed improved cardiac function and reduced inflammation in HF. It appears lipophilic statins produce better survival and other outcome benefits compared to hydrophilic statins. But the two types have not been compared in direct comparison trials in HF. Methods/design We will conduct a systematic review and meta-analysis of lipophilic and hydrophilic statin therapy in patients with HF. Our objectives are: 1. To determine the effects of lipophilic statins on (1) mortality, (2) hospitalisation for worsening HF, (3) cardiac function and (4) inflammation. 2. To determine the effects of hydrophilic statins on (1) mortality, (2) hospitalisation for worsening HF, (3) cardiac function and (4) inflammation. 3. To compare the efficacy of lipophilic and hydrophilic statins on HF outcomes with an adjusted indirect comparison meta-analysis. We will conduct an electronic search of databases for RCTs that evaluate statins in patients with HF. The reference lists of all identified studies will be reviewed. Two independent reviewers will conduct the search. The inclusion criteria include: 1. RCTs comparing statins with placebo or no statin in patients with symptomatic HF. 2. RCTs that employed the intention-to-treat (ITT) principle in data analysis. 3. Symptomatic HF patients of all aetiologies and on standard treatment. 4. Statin of any dose as intervention. 5. Placebo or no
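
    For the adjusted indirect comparison named in the title, the standard Bucher approach contrasts each treatment's effect against the common comparator; a sketch with purely illustrative numbers, not trial results:

    ```python
    import numpy as np

    def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
        """Bucher adjusted indirect comparison of A vs B through a common
        comparator C (here: lipophilic vs hydrophilic statins via placebo)."""
        log_or_ab = log_or_ac - log_or_bc
        se_ab = np.sqrt(se_ac**2 + se_bc**2)
        lo, hi = log_or_ab - 1.96 * se_ab, log_or_ab + 1.96 * se_ab
        return np.exp(log_or_ab), (np.exp(lo), np.exp(hi))

    # Purely illustrative inputs: OR 0.75 (lipophilic vs placebo, SE 0.10) and
    # OR 0.95 (hydrophilic vs placebo, SE 0.12) give the indirect OR below.
    or_ab, ci = bucher_indirect(np.log(0.75), 0.10, np.log(0.95), 0.12)
    print(or_ab, ci)   # ~0.79 with a wide confidence interval
    ```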

  12. Methacholine challenge test: Comparison of tidal breathing and dosimeter methods in children.

    Science.gov (United States)

    Mazi, Ahlam; Lands, Larry C; Zielinski, David

    2018-02-01

    The Methacholine Challenge Test (MCT) is used to confirm, assess the severity of, and/or rule out asthma. Two MCT methods are described as equivalent by the American Thoracic Society (ATS): the tidal breathing and the dosimeter methods. However, the majority of adult studies suggest that individuals with asthma do not react at the same PC20 between the two methods. Additionally, the nebulizers used are no longer available, and studies suggest current nebulizers are not equivalent to them. Our study investigates the difference in positive MCT results between three methods in a pediatric population. A retrospective chart review was conducted of all MCTs performed with spirometry at the Montreal Children's Hospital from January 2006 to March 2016. A comparison of the percentage of positive MCTs with three methods, tidal breathing, APS dosimeter, and dose-adjusted (DA) dosimeter, was performed at different cutoff points up to 8 mg/mL. A total of 747 subjects performed the tidal breathing method, 920 subjects the APS dosimeter method, and 200 subjects the DA-dosimeter method. At a PC20 cutoff ≤4 mg/mL, the percentage of positive MCTs was significantly higher using the tidal breathing method (76.3%) compared to the APS dosimeter (45.1%) and DA-dosimeter (65%) methods (P < 0.0001). The choice of nebulizer and technique significantly impacts the rate of positivity when using MCT to diagnose and assess asthma. The lack of direct comparison of techniques within the same individuals and of clinical assessment should be addressed in future studies to standardize MCT methodology in children. © 2017 Wiley Periodicals, Inc.

  13. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, November 2014

    International Nuclear Information System (INIS)

    Aufiero, Manuele; Ivanov, Evgeny; Hoefer, Axel; Yokoyama, Kenji; Da Cruz, Dirceu Ferreira; KODELI, Ivan-Alexander; Hursin, Mathieu; Pelloni, Sandro; Palmiotti, Giuseppe; Salvatores, Massimo; Barnes, Andrew; Cabellos De Francisco, Oscar; ); Ivanova, Tatiana; )

    2014-11-01

    The aim of WPEC Subgroup 39, 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files', is to provide criteria and practical approaches for using the results of sensitivity analyses and cross-section adjustments effectively as feedback to evaluators and differential measurement experimentalists, in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the third formal Subgroup meeting held at the NEA, Issy-les-Moulineaux, France, on 27-28 November 2014. It comprises a Summary Record of the meeting and all the available presentations (slides) given by the participants: A - Sensitivity methods: 1 - Perturbation/sensitivity calculations with Serpent (M. Aufiero); 2 - Comparison of deterministic and Monte Carlo sensitivity analysis of SNEAK-7A and FLATTOP-Pu benchmarks (I. Kodeli); B - Integral experiments: 1 - PROTEUS experiments: selected experiments, sensitivity profiles and availability (M. Hursin, M. Salvatores - PROTEUS experiments, HCLWR configurations); 2 - SINBAD Benchmark Database and FNS/JAEA Liquid Oxygen TOF Experiment Analysis (I. Kodeli); 3 - STEK experiment: Opportunity for Validation of Fission Products Nuclear Data (D. Da Cruz); 4 - SEG (tailored adjoint flux shapes) (M. Salvatores - comments); 5 - IPPE transmission experiments (Fe, 238U) (T. Ivanova); 6 - RPI semi-integral (Fe, 238U) (G. Palmiotti - comments); 7 - New experiments, e.g. in connection with the new NSC Expert Group on 'Improvement of Integral Experiments Data for Minor Actinide Management' (G. Palmiotti - Some comments from the Expert Group); 8 - Additional PSI adjustment studies accounting for nonlinearity (S. Pelloni); 9 - Adjustment methodology issues (G. Palmiotti); C - Am-241 and fission product issues: 1 - Am-241 validation for criticality-safety calculations (A. Barnes - Visio

  14. Comparison of clinical probability-adjusted D-dimer and age-adjusted D-dimer interpretation to exclude venous thromboembolism.

    Science.gov (United States)

    Takach Lapner, Sarah; Julian, Jim A; Linkins, Lori-Ann; Bates, Shannon; Kearon, Clive

    2017-10-05

    Two new strategies for interpreting D-dimer results have been proposed: i) using a progressively higher D-dimer threshold with increasing age (age-adjusted strategy) and ii) using a D-dimer threshold in patients with low clinical probability that is twice the threshold used in patients with moderate clinical probability (clinical probability-adjusted strategy). Our objective was to compare the diagnostic accuracy of age-adjusted and clinical probability-adjusted D-dimer interpretation in patients with a low or moderate clinical probability of venous thromboembolism (VTE). We performed a retrospective analysis of clinical data and blood samples from two prospective studies. We compared the negative predictive value (NPV) for VTE, and the proportion of patients with a negative D-dimer result, using two D-dimer interpretation strategies: the age-adjusted strategy, which uses a progressively higher D-dimer threshold with increasing age over 50 years (age in years × 10 µg/L FEU); and the clinical probability-adjusted strategy, which uses a D-dimer threshold of 1000 µg/L FEU in patients with low clinical probability and 500 µg/L FEU in patients with moderate clinical probability. A total of 1649 outpatients with low or moderate clinical probability for a first suspected deep vein thrombosis or pulmonary embolism were included. The NPV of the clinical probability-adjusted strategy (99.7 %) and that of the age-adjusted strategy (99.6 %) were similar. However, the proportion of patients with a negative result was greater with the clinical probability-adjusted strategy (56.1 % vs. 50.9 %; difference 5.2 %; 95 % CI 3.5 % to 6.8 %). These findings suggest that clinical probability-adjusted D-dimer interpretation is a better way of interpreting D-dimer results than age-adjusted interpretation.
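
    The two interpretation strategies reduce to simple threshold rules; a sketch directly following the thresholds stated in the abstract (the example patient is invented):

    ```python
    def vte_excluded_age_adjusted(ddimer_ug_l, age_years):
        """Age-adjusted strategy: threshold 500 ug/L FEU up to age 50, then
        age x 10 ug/L FEU, as described in the abstract."""
        threshold = age_years * 10 if age_years > 50 else 500
        return ddimer_ug_l < threshold

    def vte_excluded_probability_adjusted(ddimer_ug_l, clinical_probability):
        """Clinical-probability-adjusted strategy: 1000 ug/L FEU for low and
        500 ug/L FEU for moderate clinical probability."""
        threshold = 1000 if clinical_probability == "low" else 500
        return ddimer_ug_l < threshold

    # A 76-year-old with a D-dimer of 720 ug/L FEU and low clinical probability
    # is negative under both strategies (thresholds 760 and 1000 respectively).
    print(vte_excluded_age_adjusted(720, 76),
          vte_excluded_probability_adjusted(720, "low"))
    ```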

  15. Optimal Inconsistency Repairing of Pairwise Comparison Matrices Using Integrated Linear Programming and Eigenvector Methods

    Directory of Open Access Journals (Sweden)

    Haiqing Zhang

    2014-01-01

    Satisfying the consistency requirements of a pairwise comparison matrix (PCM) is a critical step in decision-making methodologies. An algorithm is proposed to find a new modified consistent PCM that can replace the original inconsistent PCM in the analytic hierarchy process (AHP) or in fuzzy AHP. This paper defines the modified consistent PCM as a combination of the original inconsistent PCM and an adjustable consistent PCM. The algorithm adopts a segment tree to gradually approach the greatest lower bound of the distance to the original PCM and so obtain the middle value of the adjustable PCM; a theorem is also proposed to obtain the lower and upper values of the adjustable PCM based on two constraints. Experiments with crisp elements show that the proposed approach preserves more of the original information than previous works at the same consistency value, and that the convergence rate of the algorithm is significantly faster than previous works with respect to different parameters. Experiments with fuzzy elements show that the method can obtain suitable modified fuzzy PCMs.
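
    As context for the consistency requirement, a minimal sketch of Saaty's consistency ratio for a crisp PCM via the eigenvector method; the example matrix is illustrative, and the repair algorithm itself is not reproduced:

    ```python
    import numpy as np

    def consistency_ratio(pcm):
        """Saaty consistency ratio of a pairwise comparison matrix:
        CR = CI / RI with CI = (lambda_max - n) / (n - 1)."""
        n = pcm.shape[0]
        lam_max = np.max(np.real(np.linalg.eigvals(pcm)))
        ci = (lam_max - n) / (n - 1)
        ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty's random indices
        return ci / ri

    pcm = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])
    print(consistency_ratio(pcm))   # below 0.1 is conventionally acceptable
    ```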

  16. A Cross-Section Adjustment Method for Double Heterogeneity Problem in VHTGR Analysis

    International Nuclear Information System (INIS)

    Yun, Sung Hwan; Cho, Nam Zin

    2011-01-01

    Very High Temperature Gas-Cooled Reactors (VHTGRs) draw strong interest as candidates for a Gen-IV reactor concept, in which TRISO (tristructural-isotropic) fuel is employed to enhance fuel performance. However, randomly dispersed TRISO fuel particles in a graphite matrix induce the so-called double heterogeneity problem. For the design and analysis of reactors with the double heterogeneity problem, the Monte Carlo method is widely used due to its complex-geometry and continuous-energy capabilities. However, even with modern computing power, its huge computational burden still makes whole-core analysis in the reactor design procedure problematic. To address the double heterogeneity problem with conventional lattice codes, the RPT (Reactivity-equivalent Physical Transformation) method considers a homogenized fuel region that is geometrically transformed to provide an equivalent self-shielding effect. Another method is the coupled Monte Carlo/Collision Probability method, in which the absorption and nu-fission resonance cross-section libraries in the deterministic CPM3 lattice code are modified group-wise by double heterogeneity factors determined from Monte Carlo results. In this paper, a new two-step Monte Carlo homogenization method is described as an alternative to the methods above. In the new method, a single cross-section adjustment factor is introduced to provide a self-shielding effect equivalent to the self-shielding in the heterogeneous geometry for a unit cell of compact fuel. Then, the homogenized fuel compact material with the equivalent cross-section adjustment factor is used in continuous-energy Monte Carlo calculations for various types of fuel blocks (or assemblies). The procedure of cross-section adjustment is implemented in the MCNP5 code.

  17. IC layout adjustment method and tool for improving dielectric reliability at interconnects

    Energy Technology Data Exchange (ETDEWEB)

    Kahng, Andrew B.; Chan, Tuck Boon

    2018-03-20

    A method for adjusting a layout used in making an integrated circuit includes selecting one or more interconnects in the layout that are susceptible to dielectric breakdown. The selected interconnects are adjusted to increase via-to-wire spacing with respect to at least one via and one wire of the selected interconnects. Preferably, the selection step analyzes signal patterns of the interconnects and estimates the stress ratio based on the state probability of routed signal nets in the layout. An annotated layout is provided that describes distances by which one or more via or wire segment edges are to be shifted. Adjustments can include thinning and shifting of wire segments and rotation of vias.

  18. A method to adjust radiation dose-response relationships for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane Lindegaard; Vogelius, Ivan R

    2012-01-01

    Several clinical risk factors for radiation induced toxicity have been identified in the literature. Here, we present a method to quantify the effect of clinical risk factors on radiation dose-response curves and apply the method to adjust the dose-response for radiation pneumonitis for patients...

  19. Fitting method of pseudo-polynomial for solving nonlinear parametric adjustment

    Institute of Scientific and Technical Information of China (English)

    陶华学; 宫秀军; 郭金运

    2001-01-01

    The optimality condition of least-squares adjustment and its geometrical characteristics are presented, and the relation between the transformed surface and least squares is discussed. On this basis, a non-iterative method, called the fitting method of pseudo-polynomial, is derived in detail. The final least-squares solution can be determined with sufficient accuracy in a single step, rather than being attained by iteratively moving the initial point. The accuracy of the solution depends wholly on the order of the Taylor series expansion. An example verifies the correctness and validity of the method.

  20. Magnetic field adjustment structure and method for a tapered wiggler

    Science.gov (United States)

    Halbach, Klaus

    1988-01-01

    An improved method and structure is disclosed for adjusting the magnetic field generated by a group of electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles which comprises providing more than one winding on at least some of the electromagnet poles; connecting one respective winding on each of several consecutive adjacent electromagnet poles to a first power supply, and the other respective winding on the electromagnet pole to a different power supply in staggered order; and independently adjusting one power supply to independently vary the current in one winding on each electromagnet pole in a group whereby the magnetic field strength of each of a group of electromagnet poles may be changed in smaller increments.

  1. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economy, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted using information from preceding days that are similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for the seasonal item and trend item forecasts, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique significantly improves accuracy, even though eleven different models are applied to the trend item forecasting. This superior performance of the separate forecasting technique is further confirmed by paired-sample T tests.
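
    A minimal sketch of the separate seasonal/trend idea on synthetic daily load data: multiplicative weekly indices are estimated, the deseasonalized series is fitted with a simple regression, and the forecast is re-seasonalized. This is one plain reading of the approach, not the paper's SEAM specification:

    ```python
    import numpy as np

    # Synthetic daily load with a weekly cycle; data and model are illustrative.
    rng = np.random.default_rng(1)
    days = np.arange(84)
    load = 100 + 0.3 * days + 12 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, 84)

    # 1) Seasonal item: multiplicative index per day of the week.
    weekday = days % 7
    index = np.array([load[weekday == d].mean() for d in range(7)])
    index /= index.mean()

    # 2) Trend item: fit a regression model (here simple linear) to the
    #    deseasonalized series.
    slope, intercept = np.polyfit(days, load / index[weekday], 1)

    # 3) 1-week-ahead forecast: re-seasonalize the trend forecast.
    future = np.arange(84, 91)
    forecast = (intercept + slope * future) * index[future % 7]
    print(np.round(forecast, 1))
    ```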

  2. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.

  3. LSL: a logarithmic least-squares adjustment method

    International Nuclear Information System (INIS)

    Stallmann, F.W.

    1982-01-01

    To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding was constructed some time ago and tentatively named LSL.

  4. Energy-Saving Performance of Flap-Adjustment-Based Centrifugal Fan

    Directory of Open Access Journals (Sweden)

    Genglin Chen

    2018-01-01

    The current paper focuses on finding a more appropriate way to enhance fan performance at off-design conditions. A centrifugal fan (CF) based on flap-adjustment (FA) has been investigated through theoretical, experimental, and finite element methods. To determine which adjustment yields the better CF performance, we carried out a comparative analysis of FA and leading-adjustment (LA) in terms of aerodynamic performance, covering the adjusted blade angle, total pressure, efficiency, system efficiency, adjustment efficiency, and energy-saving rate. The contribution of this paper is the integrated performance curve of the CF. The results showed that the effects of FA and LA on the economic performance and energy savings of the fan varied with the blade angle; FA proved feasible and more sensitive than LA. Moreover, the CF with FA offered a wider flow range of high economic performance than with LA, and as the operating flow range extends, the energy-saving rate of the fan with FA improves.

  5. Comparison of dysfunctional attitudes and social adjustment among infertile employed and unemployed women in Iran.

    Science.gov (United States)

    Fatemi, Azadeh S; Younesi, Seyed Jalal; Azkhosh, Manouchehr; Askari, Ali

    2010-04-01

    This study aims to compare dysfunctional attitudes and social adjustment in infertile employed and unemployed females. Due to the stresses of infertility, infertile females are faced with a variety of sexual and psychological problems, as well as dysfunctional attitudes that can lead to depression. Moreover, infertility problems provoke women into maladjustment and inadvertent corruption of relationships. In this regard, our goal is to consider the effects of employment in conjunction with education on dysfunctional attitudes and social adjustment among infertile women in Iran. In this work, we employed the survey method. We recruited 240 infertile women, utilizing the cluster random sampling method. These women filled out the Dysfunctional Attitudes Scale and the social adjustment part of the California Test of Personality. Next, multivariate analysis of variance was performed to test the relationship of employment status and education with dysfunctional attitudes and social adjustment. Our results indicated that dysfunctional attitudes were far more prevalent in infertile unemployed women than in infertile employed women. Also, social adjustment was better in infertile employed women than in infertile unemployed women. It was shown that education level alone does not have a significant effect on dysfunctional attitudes and social adjustment. However, we demonstrated that the employment status of infertile women in conjunction with their education level significantly affects the two dimensions of dysfunctional attitudes (relationships, entitlements) and has an insignificant effect on social adjustment. It was revealed that among employed infertile women in Iran, the higher the education level, the less dysfunctional the attitudes in relationships and entitlements, whereas among unemployed infertile women, those with a college degree had the least and those with master's or higher degrees had the most dysfunctional attitudes in terms of relationships and entitlements.

  6. The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping

    Directory of Open Access Journals (Sweden)

    Mhaidat F

    2016-04-01

    This study aimed at identifying the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods that were used to cope with the problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate who came to Jordan due to the war conditions in their home country. The study used a scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on the behavioral adjustment methods for dealing with the problem of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and the positive adjustment methods they used outnumber the negative ones. Keywords: adaptive problems, female teenage refugees, behavioral adjustment

  7. A comparison of two sleep spindle detection methods based on all night averages: individually adjusted versus fixed frequencies

    Directory of Open Access Journals (Sweden)

    Péter Przemyslaw Ujma

    2015-02-01

    Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Due to their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, of which a large number are available. We compared subject averages of the spindle parameters computed by a fixed-frequency (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) automatic detection algorithm and by the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated across the two algorithms, but there is little overlap in fast spindle density and slow spindle parameters in general. The agreement between fixed and manually determined sleep spindle frequencies is limited, especially in the case of slow spindles. This is the most likely reason for the poor agreement between the two detection methods for slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm primed to individual spindle frequencies is necessary for the detection of slow spindles, as well as of individual variations in the number of spindles in general.

  8. ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON

    Directory of Open Access Journals (Sweden)

    Liliana liliana

    2007-01-01

    In computer graphics applications, a method that is often used to produce realistic images is ray tracing. Ray tracing models not only local illumination but also global illumination: local illumination accounts only for ambient, diffuse, and specular effects, i.e., light arriving directly from the lamp(s), whereas global illumination additionally accounts for mirroring and transparency, i.e., light contributed by other objects. The objects usually modeled are primitive objects and mesh objects. The advantage of mesh modeling is its varied, interesting, and lifelike shapes; a mesh consists of many primitive objects such as triangles or, more rarely, squares. A problem with mesh object modeling is the long rendering time, because every ray must be checked against a large number of the mesh's triangles, and with rays spawned by other objects added, the number of traced rays increases further. To solve this problem, new methods are developed in this research to speed up the rendering of mesh objects: angle comparison and distance comparison. These methods reduce the number of ray checks: rays that are predicted not to intersect the mesh are never tested against its triangles. With angle comparison, using a small comparison angle makes the rendering fast, but the method has a disadvantage: if individual triangles are large, some triangles may be corrupted. Using a larger comparison angle avoids mesh corruption, but the rendering time becomes longer than without the comparison. With distance comparison, the rendering time is less than without the comparison and no triangles are corrupted.

  9. Aggregation Methods in International Comparisons

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2001-01-01

    This paper reviews the progress that has been made over the past decade in understanding the nature of the various multilateral international comparison methods. Fifteen methods are discussed and subjected to a system of ten tests. In addition, attention is paid to recently developed

  10. Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high-priced computational cost, and some of the volume measurement methods based on it have low accuracy. Another approach measures the volume of objects using the Monte Carlo method, which performs volume measurements using random points: it only requires information on whether the random points fall inside or outside the object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.

  11. Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

    Science.gov (United States)

    Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high-priced computational cost, and some of the volume measurement methods based on it have low accuracy. Another approach measures the volume of objects using the Monte Carlo method, which performs volume measurements using random points: it only requires information on whether the random points fall inside or outside the object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
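
    A minimal sketch of Monte Carlo volume estimation; the geometric predicate below stands in for the binary-image inside/outside test described in the record:

    ```python
    import numpy as np

    def monte_carlo_volume(inside, lo, hi, n=1_000_000, seed=0):
        """Volume estimate: fraction of uniform random points satisfying
        `inside` times the bounding-box volume. The `inside` predicate plays
        the role of the binary-image test described in the record."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        pts = rng.uniform(lo, hi, size=(n, len(lo)))
        return inside(pts).mean() * np.prod(hi - lo)

    # Sanity check on a unit sphere: true volume 4*pi/3 ~ 4.18879.
    est = monte_carlo_volume(lambda p: (p**2).sum(axis=1) <= 1.0,
                             [-1, -1, -1], [1, 1, 1])
    print(est)
    ```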

  12. Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the functional model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment, and total least-squares adjustment. Existing error theory is extended with a new method for processing observational data with uncertainty.

  13. Comparison of time adjustment clauses between DZ3910, AS4000 and STCC

    Directory of Open Access Journals (Sweden)

    David Finnie

    2013-03-01

    This article examines time adjustment clauses as they relate to time adjustment between standard terms of construction contracts. DZ3910, AS4000 and STCC were compared on the basis of how risks are allocated, how this may impact the contractor's pricing, and the ease of understanding of each clause. ASTCC was found to be the most easily interpreted contract, followed by AS4000 and then NZS3910. These assessments were based on the following: (a) whether each contract contains words with multiple meanings, (b) the number of words used per sentence, (c) the amount of internal cross-referencing, and (d) the clarity of the contract structure. The allowable pre-conditions for the contractor to claim a time adjustment are similar for all three contracts, and none of them expressly states which party is to bear the risk of buildability or addresses the risk of a designer's disclaimer clause. All of the contracts adopt the principle of contra proferentem, which means that the employer bears the risk of variance if there are any ambiguities in the design documentation. Due to their similarities of risk allocation, all of the contracts provide the employer with a similar amount of price surety. AS4000 is the only contract to contain a stringent time-bar clause limiting a contractor's time adjustment claim; ASTCC requires the contractor to apply 'immediately', and DZ3910 provides a time bar of 20 working days or as soon as practicable. None of the contracts clarify whether their timing requirements take precedence over the prevention principle or over any other ground for claiming a time adjustment. The effect of DZ3910's pre-notification clause 5.19.3 is discussed, and an alternative contents structure is recommended for DZ3910, using a project management method.

  14. Statistical implications of adjustments of raw ILI (In-Line Inspection) data

    Energy Technology Data Exchange (ETDEWEB)

    Timashev, Svyatoslav A.; Bushinskaya, Anna V. [Russian Academy of Sciences, Ekaterinburg (Russian Federation). Ural Branch. Sciences and Engineering Center ' Reliability and Safety of Large Systems and Machines'

    2009-07-01

    The paper describes the implications and inferences that inevitably arise when deliberate 'adjustments' are made to raw MFL ILI data in delivering the final report on a conducted ILI and/or in performing the defect sizing and reliability assessments needed for pipeline integrity management plans (IMPs). The root causes of data adjustments are discussed, the main types of adjustments are classified, and the consequences for pipeline residual life, reliability and safety are described. A comparison is performed between adjustment and full statistical analysis (FSA), as applied to raw ILI and verification data. The consequences of defect data adjustment for pipeline reliability and POF, and possible litigation issues, are discussed. Case studies are presented which demonstrate the application of the FSA method to the results of ILI and verification measurements on pipelines located on three continents. Some assessments of the actual reliability of pipelines with defects are given. (author)

  15. Calculations for Adjusting Endogenous Biomarker Levels During Analytical Recovery Assessments for Ligand-Binding Assay Bioanalytical Method Validation.

    Science.gov (United States)

    Marcelletti, John F; Evans, Cindy L; Saxena, Manju; Lopez, Adriana E

    2015-07-01

    It is often necessary to adjust for detectable endogenous biomarker levels in spiked validation samples (VS) and in selectivity determinations during bioanalytical method validation for ligand-binding assays (LBA) with a matrix like normal human serum (NHS). Described herein are case studies of biomarker analyses using multiplex LBA which highlight the challenges associated with such adjustments when calculating percent analytical recovery (%AR). The LBA test methods were the Meso Scale Discovery V-PLEX® proinflammatory and cytokine panels with NHS as test matrix. The NHS matrix blank exhibited varied endogenous content of the 20 individual cytokines before spiking, ranging from undetectable to readily quantifiable. Addition and subtraction methods for adjusting endogenous cytokine levels in %AR calculations are both used in the bioanalytical field. The two methods were compared in %AR calculations following spiking and analysis of VS for cytokines having detectable endogenous levels in NHS. Calculations for %AR obtained by subtracting quantifiable endogenous biomarker concentrations from the respective total analytical VS values yielded reproducible and credible conclusions. The addition method, in contrast, yielded %AR conclusions that were frequently unreliable and discordant with values obtained with the subtraction adjustment method. It is shown that subtraction of the assay signal attributable to matrix is a feasible alternative when endogenous biomarker levels are below the limit of quantitation but above the limit of detection. These analyses confirm that the subtraction method is preferable to the addition method for adjusting for detectable endogenous biomarker levels when calculating %AR for biomarker LBA.
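
    A minimal sketch of the two adjustment calculations compared above (our reading of the addition and subtraction formulas; the pg/mL concentrations are illustrative):

```python
def percent_ar_subtraction(measured_total, endogenous, spiked):
    """%AR with the endogenous level subtracted from the measured total."""
    return 100.0 * (measured_total - endogenous) / spiked

def percent_ar_addition(measured_total, endogenous, spiked):
    """%AR with the endogenous level added to the nominal spike."""
    return 100.0 * measured_total / (spiked + endogenous)

# Example: 12 pg/mL endogenous cytokine, 50 pg/mL spiked, 60 pg/mL measured.
print(percent_ar_subtraction(60, 12, 50))  # 96.0
print(percent_ar_addition(60, 12, 50))     # ~96.8
```

    The two formulas diverge most when the endogenous level is large relative to the spike, which is where the study found the addition method unreliable.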

  16. Adjusting for multiple prognostic factors in the analysis of randomised trials

    Science.gov (United States)

    2013-01-01

    Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) what the best method of adjustment is in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) to compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not
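
    A minimal sketch of one arm of such a simulation, assuming simple randomisation and main-effects covariate adjustment in a logistic model; with no true treatment effect, the rejection rate estimates the type I error:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def one_trial(n=400):
    f1, f2 = rng.integers(0, 2, n), rng.integers(0, 2, n)   # prognostic factors
    treat = rng.integers(0, 2, n)                           # simple randomisation
    logit = -0.5 + 0.8 * f1 + 0.8 * f2                      # no true treatment effect
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)
    X = sm.add_constant(np.column_stack([treat, f1, f2]))
    return sm.Logit(y, X).fit(disp=0).pvalues[1]            # p-value for treatment

pvals = np.array([one_trial() for _ in range(500)])
print("type I error rate:", (pvals < 0.05).mean())          # should be near 0.05
```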

  17. Using multilevel modelling to assess case-mix adjusters in consumers experience surveys in health care

    NARCIS (Netherlands)

    Damman, O.C.; Stubbe, J.H.; Hendriks, M.; Arah, O.A.; Spreeuwenberg, P.; Delnoij, D.M.J.; Groenewegen, P.P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer’s perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  20. A prospective crossover comparison of neurally adjusted ventilatory assist and pressure-support ventilation in a pediatric and neonatal intensive care unit population.

    LENUS (Irish Health Repository)

    Breatnach, Cormac

    2012-02-01

    OBJECTIVE: To compare neurally adjusted ventilatory assist ventilation with pressure-support ventilation. DESIGN: Prospective, crossover comparison study. SETTING: Tertiary care pediatric and neonatal intensive care unit. PATIENTS: Sixteen ventilated infants and children: mean age = 9.7 months (range = 2 days-4 yrs) and mean weight = 6.2 kg (range = 2.4-13.7 kg). INTERVENTIONS: A modified nasogastric tube was inserted and correct positioning was confirmed. Patients were ventilated in pressure-support mode with a pneumatic trigger for a 30-min period and then in neurally adjusted ventilatory assist mode for up to 4 hrs. MEASUREMENTS AND MAIN RESULTS: Data collected for comparison included activating trigger (neural vs. pneumatic), peak and mean airway pressures, expired minute and tidal volumes, heart rate, respiratory rate, pulse oximetry, end-tidal CO2 and arterial blood gases. Synchrony was improved in neurally adjusted ventilatory assist mode with 65% (±21%) of breaths triggered neurally vs. 35% pneumatically (p < .001) and 85% (±8%) of breaths cycled-off neurally vs. 15% pneumatically (p = .0001). The peak airway pressure in neurally adjusted ventilatory assist mode was significantly lower than in pressure-support mode with a 28% decrease in pressure after 30 mins (p = .003) and 32% decrease after 3 hrs (p < .001). Mean airway pressure was reduced by 11% at 30 mins (p = .13) and 9% at 3 hrs (p = .31) in neurally adjusted ventilatory assist mode although this did not reach statistical significance. Patient hemodynamics and gas exchange remained stable for the study period. No adverse patient events or device effects were noted. CONCLUSIONS: In a neonatal and pediatric intensive care unit population, ventilation in neurally adjusted ventilatory assist mode was associated with improved patient-ventilator synchrony and lower peak airway pressure when compared with pressure-support ventilation with a pneumatic trigger. Ventilating patients in this new mode

  1. Adjusting survival time estimates to account for treatment switching in randomized controlled trials--an economic evaluation context: methods, limitations, and recommendations.

    Science.gov (United States)

    Latimer, Nicholas R; Abrams, Keith R; Lambert, Paul C; Crowther, Michael J; Wailoo, Allan J; Morden, James P; Akehurst, Ron L; Campbell, Michael J

    2014-04-01

    Treatment switching commonly occurs in clinical trials of novel interventions in the advanced or metastatic cancer setting. However, methods to adjust for switching have been used inconsistently and potentially inappropriately in health technology assessments (HTAs). We present recommendations on the use of methods to adjust survival estimates in the presence of treatment switching in the context of economic evaluations. We provide background on the treatment switching issue and summarize methods used to adjust for it in HTAs. We discuss the assumptions and limitations associated with adjustment methods and draw on results of a simulation study to make recommendations on their use. We demonstrate that methods used to adjust for treatment switching have important limitations and often produce bias in realistic scenarios. We present an analysis framework that aims to increase the probability that suitable adjustment methods can be identified on a case-by-case basis. We recommend that the characteristics of clinical trials, and the treatment switching mechanism observed within them, should be considered alongside the key assumptions of the adjustment methods. Key assumptions include the "no unmeasured confounders" assumption associated with the inverse probability of censoring weights (IPCW) method and the "common treatment effect" assumption associated with the rank preserving structural failure time model (RPSFTM). The limitations associated with switching adjustment methods such as the RPSFTM and IPCW mean that they are appropriate in different scenarios. In some scenarios, both methods may be prone to bias; "2-stage" methods should be considered, and intention-to-treat analyses may sometimes produce the least bias. The data requirements of adjustment methods also have important implications for clinical trialists.
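
    As a rough numerical illustration of the RPSFTM's "common treatment effect" construction (a sketch, not the authors' implementation; real g-estimation zeroes a rank/logrank statistic and handles recensoring, whereas the balance criterion below is a deliberate simplification):

```python
import numpy as np

def counterfactual_time(t_off, t_on, psi):
    """RPSFTM counterfactual untreated survival time: U = T_off + exp(psi) * T_on."""
    return t_off + np.exp(psi) * t_on

def g_estimate(t_off, t_on, arm, grid=np.linspace(-2.0, 2.0, 401)):
    """Pick the psi whose counterfactual times are best balanced across arms
    (arm = 1 for randomised-to-treatment, 0 for control)."""
    gaps = [abs(counterfactual_time(t_off, t_on, psi)[arm == 1].mean()
                - counterfactual_time(t_off, t_on, psi)[arm == 0].mean())
            for psi in grid]
    return grid[int(np.argmin(gaps))]
```

    Here `t_on` and `t_off` are each patient's time spent on and off the experimental treatment, so switchers in the control arm have `t_on > 0`; randomisation justifies seeking the psi that balances the counterfactual untreated times.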

  2. Method Based on Confidence Radius to Adjust the Location of Mobile Terminals

    DEFF Research Database (Denmark)

    García-Fernández, Juan Antonio; Jurado-Navas, Antonio; Fernández-Navarro, Mariano

    2017-01-01

    The present paper details a technique for adjusting in a smart manner the position estimates of any user equipment given by different geolocation/positioning methods in a wireless radiofrequency communication network based on different strategies (observed time difference of arrival , angle of ar...

  3. Comparison of microstickies measurement methods. Part I, sample preparation and measurement methods

    Science.gov (United States)

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R.A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    Recently, we completed a project on the comparison of macrostickies measurement methods. Based on the success of the project, we decided to embark on this new project on comparison of microstickies measurement methods. When we started this project, there were some concerns and doubts principally due to the lack of an accepted definition of microstickies. However, we...

  4. Risk adjustment models for interhospital comparison of CS rates using Robson's ten group classification system and other socio-demographic and clinical variables.

    Science.gov (United States)

    Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A

    2012-06-21

    Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and clinical and socio-demographic variables of the mother and the fetus is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models; the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour), III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour), and to a minor extent in group II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS classification is useful for inter-hospital comparison of CS section rates, but
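
    A sketch of the M1-style model on hypothetical delivery-level data (a Poisson model with robust errors is one standard way to obtain adjusted relative risks for a binary outcome; hospital and group labels are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "cs": rng.integers(0, 2, n),                 # caesarean indicator
    "hospital": rng.choice(["A", "B", "C"], n),
    "robson": rng.choice(["I", "II", "III", "IV"], n),
})

# M1: hospital relative risks adjusted for Robson group only, with
# hospital A as the reference category and robust (HC0) standard errors.
m1 = smf.glm("cs ~ C(hospital, Treatment('A')) + C(robson)",
             data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(m1.params))  # exponentiated coefficients = adjusted RRs vs hospital A
```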

  5. Sibling comparison of differential parental treatment in adolescence: gender, self-esteem, and emotionality as mediators of the parenting-adjustment association.

    Science.gov (United States)

    Feinberg, M E; Neiderhiser, J M; Simmens, S; Reiss, D; Hetherington, E M

    2000-01-01

    This study employs findings from social comparison research to investigate adolescents' comparisons with siblings with regard to parental treatment. The sibling comparison hypothesis was tested on a sample of 516 two-child families by examining whether gender, self-esteem, and emotionality (which have been found in previous research to moderate social comparison) also moderate sibling comparison as reflected by siblings' own evaluations of differential parental treatment. Results supported a moderating effect for self-esteem and emotionality but not gender. The sibling comparison process was further examined by using a structural equation model in which parenting toward each child was associated with the adjustment of that child and of the child's sibling. Evidence of the "sibling barricade" effect (that is, parenting toward one child being linked with opposite results for the child's sibling as for the target child) was found in a limited number of cases and interpreted as reflecting a sibling comparison process. For older siblings, emotionality and self-esteem moderated the sibling barricade effect, but in the opposite direction to that predicted. Results are discussed in terms of older siblings' increased sensitivity to parenting, as well as the possibility that the report of differential parenting reflects the child's level of comfort and benign understanding of differential parenting, which buffers the child against environmental vicissitudes evoking sibling comparison processes.

  6. Implementation of the rapid cross section adjustment approach at General Electric

    International Nuclear Information System (INIS)

    Cowan, C.L.; Kujawski, E.; Protsik, R.

    1978-01-01

    The General Electric rapid cross section adjustment approach was developed to use the shielding factor method for formulating multigroup cross sections. In this approach, space- and composition-dependent cross sections for a particular reactor or shield design are prepared from a generalized cross section library by the use of resonance self-shielding factors, and by the adjustment of elastic scattering cross sections for the local neutron flux spectra. The principal tool in the cross section adjustment package is the data processing code TDOWN. This code was specified to give the user a high degree of flexibility in the analysis of advanced reactor designs. Of particular interest in the analysis of critical experiments is the ability to carry out cell heterogeneity self-shielding calculations using a multiregion equivalence relationship, and the homogenization of the cross sections over the specified cell with the flux weighting obtained from transport theory calculations. Extensive testing of the rapid cross section adjustment approach, including comparisons with Monte Carlo methods, indicated that this approach can be utilized with a high degree of confidence in the design analysis of complex fast reactor systems. 2 figures, 1 table
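
    The core of the shielding factor (Bondarenko-style) approach can be sketched as interpolating tabulated self-shielding factors and scaling the infinite-dilution cross section (the numbers below are illustrative, not values from the TDOWN library):

```python
import numpy as np

# Tabulated self-shielding factors f(sigma_0) for one energy group and nuclide
# (illustrative): f -> 1 as the background cross section sigma_0 grows.
sigma0_grid = np.array([1e0, 1e1, 1e2, 1e3, 1e4])   # barns
f_grid      = np.array([0.55, 0.70, 0.88, 0.97, 1.0])

def effective_xs(sigma_inf, sigma0):
    """Shielded group cross section: infinite-dilution value times the
    self-shielding factor, interpolated in log10(sigma_0)."""
    f = np.interp(np.log10(sigma0), np.log10(sigma0_grid), f_grid)
    return f * sigma_inf

print(effective_xs(12.0, 50.0))  # barns, for a composition with sigma_0 = 50 b
```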

  7. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: functions of n variables have to be minimized in a hyper-rectangular domain; equality constraints can also be specified. A similar problem consists in fitting component models; there, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, coming from the combinatorial optimization domain, has been adapted to continuous-variable problems and compared with other global optimization methods. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical test functions with known minima, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three further methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
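
    A minimal sketch of the simulated annealing loop adapted to continuous variables, as described above (the toy cost function stands in for the circuit criteria that a simulator such as SPICE-PAC would evaluate):

```python
import math
import random

def anneal(cost, x0, lo, hi, t0=1.0, cooling=0.995, steps=20000):
    """Minimise cost over the box [lo, hi]^n by simulated annealing."""
    x, fx, t = list(x0), cost(x0), t0
    for _ in range(steps):
        i = random.randrange(len(x))
        cand = x[:]
        cand[i] = min(hi, max(lo, cand[i] + random.gauss(0, t)))  # local move
        fc = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling  # geometric cooling schedule
    return x, fx

# Toy "circuit" criterion: two parameters with optimum at (3, 0.5).
cost = lambda p: (p[0] - 3.0) ** 2 + 10 * (p[1] - 0.5) ** 2
print(anneal(cost, [0.0, 0.0], lo=-10.0, hi=10.0))
```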

  8. Comparison between clinical significance of height-adjusted and weight-adjusted appendicular skeletal muscle mass.

    Science.gov (United States)

    Furushima, Taishi; Miyachi, Motohiko; Iemitsu, Motoyuki; Murakami, Haruka; Kawano, Hiroshi; Gando, Yuko; Kawakami, Ryoko; Sanada, Kiyoshi

    2017-02-13

    This study aimed to compare relationships between height- or weight-adjusted appendicular skeletal muscle mass (ASM/Ht² or ASM/Wt) and risk factors for cardiometabolic diseases or osteoporosis in Japanese men and women. Subjects were healthy Japanese men (n = 583) and women (n = 1218). The study population included a young group (310 men and 357 women; age, 18-40 years) and a middle-aged and elderly group (273 men and 861 women; age, ≥41 years). ASM was measured by dual-energy X-ray absorptiometry. The reference values for class 1 and 2 sarcopenia in each sex were defined as values one and two standard deviations below the sex-specific means of the young group, respectively. The reference values for class 1 and 2 sarcopenia defined by ASM/Ht² were 7.77 and 6.89 kg/m² in men and 6.06 and 5.31 kg/m² in women, respectively. The reference values for ASM/Wt were 35.0 and 32.0% in men and 29.6 and 26.4% in women, respectively. In both men and women, ASM/Wt was negatively correlated with higher triglycerides (TG) and positively correlated with serum high-density lipoprotein cholesterol (HDL-C), but these associations were not found in height-adjusted ASM. In women, TG, systolic blood pressure, and diastolic blood pressure in sarcopenia defined by ASM/Wt were significantly higher than those in normal subjects, but these associations were not found in sarcopenia defined by ASM/Ht². Whole-body and regional bone mineral density in sarcopenia defined by ASM/Ht² were significantly lower than those in normal subjects, but these associations were not found in sarcopenia defined by ASM/Wt. The weight-adjusted definition was able to identify cardiometabolic risk factors such as TG and HDL-C, while the height-adjusted definition could identify factors for osteoporosis.
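
    A sketch of how the two indices and their class 1/2 cutoffs are derived from a young reference group (the arrays are hypothetical; in practice ASM comes from dual-energy X-ray absorptiometry):

```python
import numpy as np

def sarcopenia_cutoffs(asm_kg, height_m, weight_kg):
    """Class 1/2 cutoffs: 1 and 2 SD below the young-group mean of each index."""
    indices = {
        "ASM/Ht2 (kg/m2)": asm_kg / height_m ** 2,
        "ASM/Wt (%)": 100.0 * asm_kg / weight_kg,
    }
    return {name: {"class1": v.mean() - v.std(ddof=1),
                   "class2": v.mean() - 2 * v.std(ddof=1)}
            for name, v in indices.items()}

# Hypothetical young reference group (n = 5 shown for brevity):
asm = np.array([22.1, 25.3, 19.8, 24.0, 21.5])   # kg
ht = np.array([1.72, 1.80, 1.65, 1.76, 1.70])    # m
wt = np.array([63.0, 74.0, 55.0, 70.0, 61.0])    # kg
print(sarcopenia_cutoffs(asm, ht, wt))
```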

  9. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    Science.gov (United States)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilks and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001); the density-correction methods yielded lower doses than PBC, by on average −5 ± 4.4 (SD) for MB and −4.7 ± 5 (SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods.
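
    The statistical pipeline described above can be reproduced with scipy.stats (illustrative random data stands in for the 62 fields):

```python
import numpy as np
from scipy import stats

# doses: one row per treatment field, one column per calculation method
# (PBC, MB, ETAR); random data is a stand-in for the study's measurements.
doses = np.random.default_rng(5).normal(100, 5, size=(62, 3))
pbc, mb, etar = doses.T

print(stats.shapiro(pbc))                    # normality check
print(stats.levene(pbc, mb, etar))           # homogeneity of variance
print(stats.friedmanchisquare(pbc, mb, etar))
print(stats.wilcoxon(pbc, mb))               # paired post-hoc comparison
print(stats.spearmanr(pbc, mb), stats.kendalltau(pbc, mb))
```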

  10. A Comparison of underground opening support design methods in jointed rock mass

    International Nuclear Information System (INIS)

    Gharavi, M.; Shafiezadeh, N.

    2008-01-01

    It is of great importance to consider the long-term stability of the rock mass around the openings of underground structures during the design, construction and operation of such structures. In this context, three methods, namely empirical, analytical and numerical, have been applied to design and analyze the stability of underground infrastructure at the Siah Bisheh Pumping Storage Hydro-Electric Power Project in Iran. The geological and geotechnical data utilized in this article were selected based on the preliminary studies of this project. In the initial stages of design, it was recommended that two rock mass classification methods, Q and rock mass rating (RMR), be utilized for the support system of the underground cavern. Next, based on the structural instability, the support system was adjusted by the analytical method. The performance of the recommended support system was reviewed by comparing the ground response curve and rock-support interaction with the surrounding rock mass, using the FEST03 software. Moreover, for further assessment of the realistic rock mass behavior and support system, numerical modeling was performed utilizing the FEST03 software. Finally, both the analytical and numerical methods were compared; the satisfactory results obtained complement each other.

  11. [Effect of 2 methods of occlusion adjustment on occlusal balance and muscles of mastication in patient with implant restoration].

    Science.gov (United States)

    Wang, Rong; Xu, Xin

    2015-12-01

    To compare the effect of 2 methods of occlusion adjustment on occlusal balance and the muscles of mastication in patients with dental implant restorations. Twenty patients, each with a single posterior edentulous space and no distal dentition, were selected and divided into 2 groups. Patients in group A underwent the original occlusion adjustment method, and patients in group B underwent the occlusal plane reduction technique. Ankylos implants were implanted in the edentulous space of each patient and restored with a fixed single-unit crown. Occlusion was adjusted on each restoration accordingly. Electromyograms were recorded to determine the effect of the adjustment methods on occlusion and the muscles of mastication 3 months and 6 months after initial restoration and adjustment. Data were collected on balanced-occlusion measures, including central occlusion force (COF) and the asymmetry index of molar occlusal force (AMOF), and on balanced muscles-of-mastication measures, including electromyogram measurements of the muscles of mastication and the anterior bundle of the temporalis muscle at the mandibular rest position, average electromyogram measurements of the anterior bundle of the temporalis muscle at the intercuspal position (ICP), Astot, the masseter muscle asymmetry index, and the anterior temporalis asymmetry index (ASTA). Statistical analysis was performed using Student's t test with the SPSS 18.0 software package. Three months after occlusion adjustment, groups A and B differed significantly in both the balanced-occlusion and the balanced muscles-of-mastication measures. Six months after occlusion adjustment, the groups differed significantly in the balanced muscles-of-mastication measures, but there was no significant difference in balanced

  12. Acute ischemic stroke prognostication, comparison between ...

    African Journals Online (AJOL)

    Ossama Y. Mansour

    2014-11-20

    Nov 20, 2014 ... patients with acute ischemic stroke in comparison with the NIHSS and the GCS. Methods: ... All patients received a CT scan of the brain on admission. Diagnostic ... adjusted for age, sex, Charlson Index and Oxfordshire.

  13. Remotely adjustable fishing jar and method for using same

    International Nuclear Information System (INIS)

    Wyatt, W.B.

    1992-01-01

    This patent describes a method for providing a jarring force to dislodge objects stuck in well bores, the method comprising: connecting a jarring tool between an operating string and an object in a well bore; selecting a jarring force to be applied to the object; setting the selected reference jarring force into a mechanical memory mechanism by progressively engaging a first latch body and a second latch body; retaining the reference jarring force in the mechanical memory mechanism during diminution of tensional force applied by the operating string; and initiating an upwardly directed impact force within the jarring tool by increasing tensional force on the operating string to a value greater than the tensional force corresponding with the selected jarring force. This patent also describes a remotely adjustable downhole fishing jar apparatus comprising: an operating mandrel; an impact release spring; a mechanical memory mechanism; and releasable latching means.

  14. Methods to assess intended effects of drug treatment in observational studies are reviewed

    NARCIS (Netherlands)

    Klungel, Olaf H|info:eu-repo/dai/nl/181447649; Martens, Edwin P|info:eu-repo/dai/nl/088859010; Psaty, Bruce M; Grobbee, Diederik E; Sullivan, Sean D; Stricker, Bruno H Ch; Leufkens, Hubert G M|info:eu-repo/dai/nl/075255049; de Boer, A|info:eu-repo/dai/nl/075097346

    2004-01-01

    BACKGROUND AND OBJECTIVE: To review methods that seek to adjust for confounding in observational studies when assessing intended drug effects. METHODS: We reviewed the statistical, economical and medical literature on the development, comparison and use of methods adjusting for confounding. RESULTS:

  15. Adjust the method of the FMEA to the requirements of the aviation industry

    Directory of Open Access Journals (Sweden)

    Andrzej FELLNER

    2015-12-01

    Full Text Available The article presents a summary of current methods used in aviation and rail transport. It also contains a proposal to adjust the FMEA method to the latest requirements of the airline industry. The authors suggest tables of the indicators Zn, Pr and Dt necessary to implement the FMEA method of risk analysis, taking into account current achievements in aerospace and rail safety. They also propose acceptable limits for the RPN number, which allow threats to be classified.
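
    A minimal sketch of the RPN computation implied above, assuming the usual FMEA reading of the indicators (Zn = significance/severity, Pr = probability of occurrence, Dt = detectability; the threshold value and failure modes are hypothetical):

```python
def rpn(zn, pr, dt):
    """Risk Priority Number: product of the three FMEA indicators,
    each typically scored on a 1-10 scale."""
    return zn * pr * dt

RPN_LIMIT = 120  # hypothetical acceptability threshold

failures = {"sensor drift": (7, 4, 5), "loose fitting": (9, 2, 3)}
for failure, scores in failures.items():
    value = rpn(*scores)
    print(failure, value, "ACTION REQUIRED" if value > RPN_LIMIT else "acceptable")
```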

  16. Case-Mix Adjustment of the Bereaved Family Survey.

    Science.gov (United States)

    Kutney-Lee, Ann; Carpenter, Joan; Smith, Dawn; Thorpe, Joshua; Tudose, Alina; Ersek, Mary

    2018-01-01

    Surveys of bereaved family members are increasingly being used to evaluate end-of-life (EOL) care and to measure organizational performance in EOL care quality. The Bereaved Family Survey (BFS) is used to monitor EOL care quality and benchmark performance in the Veterans Affairs (VA) health-care system. The objective of this study was to develop a case-mix adjustment model for the BFS and to examine changes in facility-level scores following adjustment, in order to provide fair comparisons across facilities. We conducted a cross-sectional secondary analysis of medical record and survey data from veterans and their family members across 146 VA medical centers. Following adjustment using model-based propensity weighting, the mean change in the BFS-Performance Measure score across facilities was -0.6 with a range of -2.6 to 0.6. Fifty-five (38%) facilities changed within ±0.5 percentage points of their unadjusted score. On average, facilities that benefited most from adjustment cared for patients with greater comorbidity burden and were located in urban areas in the Northwest and Midwestern regions of the country. Case-mix adjustment results in minor changes to facility-level BFS scores but allows for fairer comparisons of EOL care quality. Case-mix adjustment of the BFS positions this National Quality Forum-endorsed measure for use in public reporting and internal quality dashboards for VA leadership and may inform the development and refinement of case-mix adjustment models for other surveys of bereaved family members.
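
    One common flavour of model-based propensity weighting for case-mix adjustment can be sketched as follows (an assumption-laden sketch, not the BFS model itself: odds weights reshape one facility's respondents toward the overall case mix before facility means are compared):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def casemix_weights(X_fac, X_all):
    """Odds weights w = P(not in facility | x) / P(in facility | x)."""
    X = np.vstack([X_fac, X_all])
    y = np.r_[np.ones(len(X_fac)), np.zeros(len(X_all))]
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X_fac)[:, 1]
    return (1 - p) / p

rng = np.random.default_rng(8)
X_fac = rng.normal([70, 3], [8, 1], size=(60, 2))    # e.g. age, comorbidity burden
X_all = rng.normal([65, 2], [10, 1], size=(600, 2))
scores = rng.normal(80, 10, 60)                      # BFS-style respondent scores
print(np.average(scores, weights=casemix_weights(X_fac, X_all)))
```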

  17. Comparing treatment effects after adjustment with multivariable Cox proportional hazards regression and propensity score methods

    NARCIS (Netherlands)

    Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H

    PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort
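
    A sketch of the two adjustment strategies on simulated data (assuming the scikit-learn and lifelines libraries; all column names and effect sizes are hypothetical):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({"age": rng.normal(60, 10, n), "sbp": rng.normal(150, 20, n)})
df["treated"] = rng.integers(0, 2, n)
df["time"] = rng.exponential(5, n)       # follow-up time with censoring below
df["event"] = rng.integers(0, 2, n)      # 1 = stroke observed, 0 = censored

# (a) Multivariable Cox PH: adjust for the confounders directly.
cox = CoxPHFitter().fit(df[["time", "event", "treated", "age", "sbp"]],
                        duration_col="time", event_col="event")

# (b) Propensity score (probability of treatment given covariates) as a
#     single adjustment covariate in the Cox model.
df["ps"] = LogisticRegression(max_iter=1000).fit(
    df[["age", "sbp"]], df["treated"]).predict_proba(df[["age", "sbp"]])[:, 1]
cox_ps = CoxPHFitter().fit(df[["time", "event", "treated", "ps"]],
                           duration_col="time", event_col="event")

print(cox.hazard_ratios_["treated"], cox_ps.hazard_ratios_["treated"])
```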

  18. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and incorrect priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions in a reasonable computational time for large problem instances. The proposed method is examined on some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better at solving the APM.
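
    The conventional permutation method that APM adjusts can be sketched as an exhaustive search over rankings, which makes the n! computational burden, and hence the need for TS/PSO, explicit (a minimal sketch with illustrative scores and weights):

```python
from itertools import permutations

import numpy as np

def permutation_method(scores, weights):
    """Classic permutation MADM: try every ranking of the alternatives and
    keep the one whose pairwise order is best supported by the criteria."""
    n = scores.shape[0]
    best, best_val = None, -np.inf
    for perm in permutations(range(n)):          # n! candidate rankings
        val = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                a, b = perm[i], perm[j]          # ranking says a beats b
                val += weights[scores[a] >= scores[b]].sum()   # concordant criteria
                val -= weights[scores[a] <  scores[b]].sum()   # discordant criteria
        if val > best_val:
            best, best_val = perm, val
    return best, best_val

scores = np.array([[7, 5, 8], [6, 9, 4], [8, 6, 5]])   # alternatives x criteria
print(permutation_method(scores, weights=np.array([0.5, 0.3, 0.2])))
```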

  19. Adjusting Quality index Log Values to Represent Local and Regional Commercial Sawlog Product Values

    Science.gov (United States)

    Orris D. McCauley; Joseph J. Mendel; Joseph J. Mendel

    1969-01-01

    The primary purpose of this paper is not only to report the results of a comparative analysis as to how well the Q.I. method predicts log product values when compared to commercial sawmill log output values, but also to develop a methodology which will facilitate the comparison and provide the adjustments needed by the sawmill operator.

  20. Estimating the subjective value of future rewards: comparison of adjusting-amount and adjusting-delay procedures.

    Science.gov (United States)

    Holt, Daniel D; Green, Leonard; Myerson, Joel

    2012-07-01

    The present study examined whether equivalent discounting of delayed rewards is observed with different experimental procedures. If the underlying decision-making process is the same, then similar patterns of results should be observed regardless of procedure, and similar estimates of the subjective value of future rewards (i.e., indifference points) should be obtained. Two experiments compared discounting on three types of procedure: adjusting-delay (AD), adjusting-immediate-amount (AIA), and adjusting-delayed-amount (ADA). For the two procedures for which discounting functions can be established (i.e., AD and AIA), a hyperboloid provided good fits to the data at both the group and individual levels, and individuals' discounting on one procedure tended to be correlated with their discounting on the other. Notably, the AIA procedure produced the more consistent estimates of the degree of discounting, and in particular, discounting on the AIA procedure was unaffected by the order in which choices were presented. Regardless of which of the three procedures was used, however, similar patterns of results were obtained: Participants systematically discounted the value of delayed rewards, and robust magnitude effects were observed. Although each procedure may have its own advantages and disadvantages, use of all three types of procedure in the present study provided converging evidence for common decision-making processes underlying the discounting of delayed rewards. Copyright © 2012 Elsevier B.V. All rights reserved.
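
    Indifference points from either procedure can be fitted with the hyperboloid discounting function mentioned above (a sketch with illustrative data; V = A/(1 + kD)^s, here normalised to A = 1):

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperboloid(delay, k, s):
    """Myerson-Green hyperboloid discounting of a unit reward."""
    return 1.0 / (1.0 + k * delay) ** s

delays = np.array([1, 7, 30, 90, 365], float)        # days
indiff = np.array([0.95, 0.80, 0.60, 0.45, 0.25])    # indifference points (fractions)

(k, s), _ = curve_fit(hyperboloid, delays, indiff, p0=[0.05, 1.0],
                      bounds=([1e-6, 0.1], [10, 5]))
print(k, s)  # degree of discounting and scaling exponent
```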

  2. Associations of child adjustment with parent and family functioning: comparison of families of women with and without breast cancer.

    Science.gov (United States)

    Vannatta, Kathryn; Ramsey, Rachelle R; Noll, Robert B; Gerhardt, Cynthia A

    2010-01-01

    To examine the impact of maternal breast cancer on the emotional and behavioral functioning of school-age children; evaluate whether child adjustment is associated with variations in distress, marital satisfaction, and parenting behavior evidenced by mothers and fathers; and determine whether these associations differ from families that are not contending with cancer. Participants included 40 children (age 8-16 years) of mothers with breast cancer along with their parents, as well as 40 families of comparison classmates not affected by parental illness. Questionnaires assessing the domains of interest were administered in families' homes. Mothers with breast cancer and their spouses reported higher levels of distress than comparison parents; child internalizing problems were inversely associated with parental adjustment in both groups. No group differences were found in any indicators of family functioning, including parent-child relationships. Warm and supportive parenting by both mothers and fathers was associated with lower levels of child internalizing behavior, but only in families affected by breast cancer. These results suggest that children of mothers with breast cancer, like most children, may be at risk for internalizing behavior when parents are distressed. These children may particularly benefit from interactions with mothers and fathers who are warm and supportive, and maintenance of positive parenting may partially account for the apparent resilience of these youth.

  3. Using the Nudge and Shove Methods to Adjust Item Difficulty Values.

    Science.gov (United States)

    Royal, Kenneth D

    2015-01-01

    In any examination, it is important that a sufficient mix of items with varying degrees of difficulty be present to produce desirable psychometric properties and increase instructors' ability to make appropriate and accurate inferences about what a student knows and/or can do. The purpose of this "teaching tip" is to demonstrate how examination items can be affected by the quality of distractors, and to present a simple method for adjusting items to meet difficulty specifications.

  4. An experimental detrending approach to attributing change of pan evaporation in comparison with the traditional partial differential method

    Science.gov (United States)

    Wang, Tingting; Sun, Fubao; Xia, Jun; Liu, Wenbin; Sang, Yanfang

    2017-04-01

    In predicting how droughts and hydrological cycles will change in a warming climate, change in atmospheric evaporative demand, as measured by pan evaporation (Epan), is one crucial element to be understood. Over the last decade, the derived partial differential (PD) form of the PenPan equation has been the prevailing approach to attributing changes in Epan worldwide. However, the independence among climatic variables required by the PD approach cannot be met using long-term observations. Here we designed a series of numerical experiments to attribute changes in Epan over China by detrending each climatic variable in turn, i.e., an experimental detrending approach, to address the inter-correlation among climate variables, and compared it with the traditional PD method. The results show that the detrending approach is superior to the PD method not only for a complicated system with multiple variables and a mixed algorithm, such as the aerodynamic component (Ep,A) and Epan itself, but also for a simple case such as the radiative component (Ep,R). The major reason for this is the strong and significant inter-correlation of the input meteorological forcing. Very similar attribution results were achieved with the detrending approach and the PD method after eliminating the inter-correlation of the inputs through a randomization approach. The contribution of Rh and Ta to net radiation, and thus to Ep,R, which is overlooked by the PD method but successfully detected by the detrending approach, partly explains the comparison results. We adopted the control run from the detrending approach and applied it to adjust the PD method. Much improvement was obtained, proving this adjustment an effective way of attributing changes in Epan. Hence, the detrending approach and the adjusted PD method are recommended for attributing changes in hydrological models, to better understand and predict the water and energy cycles.
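
    The detrending experiment can be sketched generically: attribute to each variable the part of the Epan trend that disappears when only that variable is detrended (the `toy_model` below stands in for a PenPan-style calculation; all names and coefficients are hypothetical):

```python
import numpy as np
from scipy.signal import detrend

def contribution(model, forcing, var):
    """Trend in E_pan attributable to `var`: trend of the full run minus the
    trend of a run in which only `var` has been detrended (level preserved)."""
    slope = lambda y: np.polyfit(np.arange(len(y)), y, 1)[0]
    perturbed = dict(forcing)
    perturbed[var] = detrend(forcing[var]) + forcing[var].mean()
    return slope(model(**forcing)) - slope(model(**perturbed))

# Toy stand-in for a PenPan-style calculation over 50 years of forcing.
toy_model = lambda ta, u: 800 + 20 * (ta - ta.mean()) + 5 * (u - u.mean())
yrs = np.arange(50)
forcing = {"ta": 15 + 0.03 * yrs + np.random.default_rng(9).normal(0, 0.2, 50),
           "u": 2.5 - 0.01 * yrs + np.random.default_rng(10).normal(0, 0.1, 50)}
print(contribution(toy_model, forcing, "ta"))  # E_pan trend due to temperature
```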

  5. A simple statistical method for catch comparison studies

    DEFF Research Database (Denmark)

    Holst, René; Revill, Andrew

    2009-01-01

    For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM), using polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns.

  6. Early Parental Positive Behavior Support and Childhood Adjustment: Addressing Enduring Questions with New Methods.

    Science.gov (United States)

    Waller, Rebecca; Gardner, Frances; Dishion, Thomas; Sitnick, Stephanie L; Shaw, Daniel S; Winter, Charlotte E; Wilson, Melvin

    2015-05-01

    A large literature provides strong empirical support for the influence of parenting on child outcomes. The current study addresses enduring research questions testing the importance of early parenting behavior to children's adjustment. Specifically, we developed and tested a novel multi-method observational measure of parental positive behavior support at age 2. Next, we tested whether early parental positive behavior support was related to child adjustment at school age, within a multi-agent and multi-method measurement approach and design. Observational and parent-reported data from mother-child dyads (N = 731; 49 percent female) were collected from a high-risk sample at age 2. Follow-up data were collected via teacher report and child assessment at age 7.5. The results supported combining three different observational methods to assess positive behavior support at age 2 within a latent factor. Further, parents' observed positive behavior support at age 2 predicted multiple types of teacher-reported and child-assessed problem behavior and competencies at 7.5 years old. Results supported the validity and predictive capability of a multi-method observational measure of parenting and the importance of a continued focus on the early years within preventive interventions.

  7. Comparison of n-γ discrimination by zero-crossing and digital charge comparison methods

    International Nuclear Information System (INIS)

    Wolski, D.; Moszynski, M.; Ludziejewski, T.; Johnson, A.; Klamra, W.; Skeppstedt, Oe.

    1995-01-01

    A comparative study of n-γ discrimination by the digital charge comparison and zero-crossing methods was carried out for a 130 mm diameter, 130 mm high BC501A liquid scintillator coupled to a 130 mm diameter XP4512B photomultiplier. The high quality of the tested detector was reflected in a photoelectron yield of 2300±100 phe/MeV and excellent n-γ discrimination properties, with energy discrimination thresholds corresponding to very low neutron (or electron) energies. The superiority of the zero-crossing method was demonstrated for n-γ discrimination alone, as well as for simultaneous separation by the pulse shape discrimination and time-of-flight methods, down to about 30 keV recoil electron energy. The digital charge comparison method fails over a large dynamic range of energy, and its separation is only weakly improved by the time-of-flight method at low energies. (orig.)
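
    The digital charge comparison method reduces, in sketch form, to a tail-to-total integral ratio on the digitised pulse (a minimal illustration; the split point and baseline handling are tuning details of a real system):

```python
import numpy as np

def charge_comparison(pulse, t_split, baseline=0.0):
    """Digital charge comparison PSD: ratio of the slow (tail) integral to the
    total integral; in BC501A, neutron pulses carry a larger slow component."""
    p = pulse - baseline
    return p[t_split:].sum() / p.sum()

# Toy digitised pulse: fast spike plus a small slow exponential tail.
t = np.arange(200.0)
pulse = np.exp(-t / 5.0) + 0.08 * np.exp(-t / 60.0)
print(charge_comparison(pulse, t_split=25))  # larger ratio -> more neutron-like
```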

  8. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber (cents/kg), forming three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction error, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce better predictions for long-term forecasting with limited data sources, but cannot produce better predictions for a time series with a narrow range from one point to another, as in the Exchange Rate series. On the contrary, the Exponential Smoothing Method can produce better forecasts for the Exchange Rate, whose time series has a narrow range from one point to another, while it cannot produce better predictions for a longer forecasting period.
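
    A compact version of such a comparison with statsmodels (a toy series stands in for the commodity data; the ARIMA order and trend settings are illustrative, not those of the study):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

y = np.cumsum(np.random.default_rng(7).normal(0.2, 1.0, 120))  # toy trending series
train, test = y[:100], y[100:]

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))
es_fc = ExponentialSmoothing(train, trend="add").fit().forecast(len(test))

mse = lambda f: np.mean((test - f) ** 2)
mape = lambda f: 100 * np.mean(np.abs((test - f) / test))
mad = lambda f: np.mean(np.abs(test - f))
print({"ARIMA": (mse(arima_fc), mape(arima_fc), mad(arima_fc)),
       "ES": (mse(es_fc), mape(es_fc), mad(es_fc))})
```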

  9. Comparison of gas dehydration methods based on energy ...

    African Journals Online (AJOL)

    Comparison of gas dehydration methods based on energy consumption. ... PROMOTING ACCESS TO AFRICAN RESEARCH ... This study compares three conventional methods of natural gas (Associated Natural Gas) dehydration to carry out ...

  10. Impact of urine concentration adjustment method on associations between urine metals and estimated glomerular filtration rates (eGFR) in adolescents

    International Nuclear Information System (INIS)

    Weaver, Virginia M.; Vargas, Gonzalo García; Silbergeld, Ellen K.; Rothenberg, Stephen J.; Fadrowski, Jeffrey J.; Rubio-Andrade, Marisela; Parsons, Patrick J.; Steuerwald, Amy J.

    2014-01-01

    Positive associations between urine toxicant levels and measures of glomerular filtration rate (GFR) have been reported recently in a range of populations. The explanation for these associations, in a direction opposite that of traditional nephrotoxicity, is uncertain. Variation in associations by urine concentration adjustment approach has also been observed. Associations of urine cadmium, thallium and uranium in models of serum creatinine- and cystatin-C-based estimated GFR (eGFR) were examined using multiple linear regression in a cross-sectional study of adolescents residing near a lead smelter complex. Urine concentration adjustment approaches compared included urine creatinine, urine osmolality and no adjustment. Median age, blood lead and urine cadmium, thallium and uranium were 13.9 years, 4.0 μg/dL, 0.22, 0.27 and 0.04 g/g creatinine, respectively, in 512 adolescents. Urine cadmium and thallium were positively associated with serum creatinine-based eGFR only when urine creatinine was used to adjust for urine concentration (β coefficient=3.1 mL/min/1.73 m²; 95% confidence interval=1.4, 4.8 per each doubling of urine cadmium). Weaker positive associations, also only with urine creatinine adjustment, were observed between these metals and serum cystatin-C-based eGFR and between urine uranium and serum creatinine-based eGFR. Additional research using non-creatinine-based methods of adjustment for urine concentration is necessary. - Highlights: • Positive associations between urine metals and creatinine-based eGFR are unexpected. • Optimal approach to urine concentration adjustment for urine biomarkers uncertain. • We compared urine concentration adjustment methods. • Positive associations observed only with urine creatinine adjustment. • Additional research using non-creatinine-based methods of adjustment needed

  12. Meta-analysis: adjusted indirect comparison of drug-eluting bead transarterial chemoembolization versus 90Y-radioembolization for hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Ludwig, Johannes M.; Xing, Minzhi; Zhang, Di; Kim, Hyun S.

    2017-01-01

    To investigate the comparative effectiveness of drug-eluting bead transarterial chemoembolization (DEB-TACE) versus Yttrium-90 (90Y)-radioembolization for hepatocellular carcinoma (HCC). Studies comparing conventional (c)TACE versus 90Y-radioembolization or DEB-TACE for HCC treatment were identified using the PubMed/Medline, Embase, and Cochrane databases. The adjusted indirect meta-analytic method was used for the effectiveness comparison of DEB-TACE versus 90Y-radioembolization. The Wilcoxon rank-sum test was used to compare baseline characteristics. An a priori defined sensitivity analysis of stratified study subgroups was performed for the primary outcome analyses. Publication bias was tested by Egger's and Begg's tests. Fourteen studies comparing DEB-TACE or 90Y-radioembolization with cTACE were included. Analysis revealed a 1-year overall survival benefit for DEB-TACE over 90Y-radioembolization (79 % vs. 54.8 %; OR: 0.57; 95 %CI: 0.355-0.915; p = 0.02; I-squared: 0 %; p > 0.5), but not for 2-year (61 % vs. 34 %; OR: 0.65; 95 %CI: 0.294-1.437; p = 0.29) or 3-year survival (56.4 % vs. 20.9 %; OR: 0.713; 95 % CI: 0.21-2.548; p = 0.62). There was significant heterogeneity in the 2- and 3-year survival analyses. The pooled median overall survival was longer for DEB-TACE (22.6 vs. 14.7 months). There was no significant difference in tumour response rate. DEB-TACE and 90Y-radioembolization are efficacious treatments for patients suffering from HCC; DEB-TACE demonstrated a survival benefit at 1 year compared to 90Y-radioembolization, but direct comparison is warranted for further evaluation. (orig.)
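
    The adjusted indirect (Bucher) comparison used here can be sketched directly from two pooled odds ratios against the common cTACE comparator (the input numbers below are illustrative only, not the study's estimates):

```python
import numpy as np
from scipy import stats

def bucher(or_ac, ci_ac, or_bc, ci_bc):
    """Adjusted indirect comparison of A vs B through common comparator C:
    log OR_AB = log OR_AC - log OR_BC, with the variances added."""
    log_or = np.log(or_ac) - np.log(or_bc)
    se = np.hypot((np.log(ci_ac[1]) - np.log(ci_ac[0])) / (2 * 1.96),
                  (np.log(ci_bc[1]) - np.log(ci_bc[0])) / (2 * 1.96))
    lo, hi = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)
    p = 2 * stats.norm.sf(abs(log_or) / se)
    return np.exp(log_or), (lo, hi), p

# e.g. A = DEB-TACE vs cTACE, B = 90Y vs cTACE (illustrative ORs and 95% CIs):
print(bucher(0.80, (0.60, 1.07), 1.40, (0.95, 2.06)))
```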

  13. A 5-trial adjusting delay discounting task: Accurate discount rates in less than 60 seconds

    Science.gov (United States)

    Koffarnus, Mikhail N.; Bickel, Warren K.

    2014-01-01

    Individuals who discount delayed rewards at a high rate are more likely to engage in substance abuse, overeating, or problem gambling. Findings such as these suggest the value of methods to obtain an accurate and fast measurement of discount rate that can be easily deployed in a variety of settings. In the present study, we developed and evaluated the 5-trial adjusting delay task, a novel method of obtaining a discount rate in less than one minute. We hypothesized that discount rates from the 5-trial adjusting delay task would be similar to and correlated with discount rates from a lengthier task we have used previously, and that four known effects relating to delay discounting would be replicable with this novel task. To test these hypotheses, the 5-trial adjusting delay task was administered to 111 college students six times to obtain discount rates for six different commodities, along with a lengthier adjusting amount discounting task. We found that discount rates were similar and correlated between the 5-trial adjusting delay task and the adjusting amount task. Each of the four known effects relating to delay discounting was replicated with the 5-trial adjusting delay task to varying degrees. First, discount rates were inversely correlated with amount. Second, discount rates between past and future outcomes were correlated. Third, discount rates were greater for consumable rewards than for money, although we did not control for amount in this comparison. Fourth, discount rates were lower when zero amounts opposing the chosen time point were explicitly described. Results indicate that the 5-trial adjusting delay task is a viable, rapid method to assess discount rate. PMID:24708144

  14. A 5-trial adjusting delay discounting task: accurate discount rates in less than one minute.

    Science.gov (United States)

    Koffarnus, Mikhail N; Bickel, Warren K

    2014-06-01

    Individuals who discount delayed rewards at a high rate are more likely to engage in substance abuse, overeating, or problem gambling. Such findings suggest the value of methods to obtain an accurate and fast measurement of discount rate that can be easily deployed in a variety of settings. In the present study, we developed and evaluated the 5-trial adjusting delay task, a novel method of obtaining a discount rate in less than 1 min. We hypothesized that discount rates from the 5-trial adjusting delay task would be similar to and would correlate with discount rates from a lengthier task we have used previously, and that 4 known effects relating to delay discounting would be replicable with this novel task. To test these hypotheses, the 5-trial adjusting delay task was administered to 111 college students 6 times to obtain discount rates for 6 different commodities, along with a lengthier adjusting amount discounting task. We found that discount rates were similar and correlated between the 5-trial adjusting delay task and the adjusting amount task. Each of the 4 known effects relating to delay discounting was replicated with the 5-trial adjusting delay task to varying degrees. First, discount rates were inversely correlated with amount. Second, discount rates between past and future outcomes were correlated. Third, discount rates were greater for consumable rewards than for money, although we did not control for amount in this comparison. Fourth, discount rates were lower when $0 amounts opposing the chosen time point were explicitly described. Results indicate that the 5-trial adjusting delay task is a viable, rapid method to assess discount rate.
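
    A minimal sketch of the task's adjusting logic as described: five binary choices bisect an ordered ladder of delays, and the hyperbolic discount rate follows from the indifference delay (at indifference, V = A/(1+kD) with the delayed amount worth half its nominal value gives k = 1/ED50). The exact delay ladder and starting index below are assumptions, not the published parameters:

    ```python
    # Ladder of 31 candidate delays, in days (assumed, spanning hours to years).
    DELAYS_DAYS = [1/24, 2/24, 4/24, 9/24, 0.5, 1, 2, 3, 4, 7, 10.5, 14, 21,
                   30, 45, 60, 90, 120, 150, 180, 270, 365, 1.5*365, 2*365,
                   3*365, 4*365, 5*365, 8*365, 12*365, 18*365, 25*365]

    def five_trial_k(prefers_delayed) -> float:
        """Estimate the hyperbolic discount rate k (per day).

        `prefers_delayed(delay_days)` returns True if the participant picks
        the larger-later reward (e.g. $1000 at `delay_days`) over the
        smaller-sooner one (e.g. $500 now).
        """
        index, step = 15, 8          # start mid-ladder; halve the jump each trial
        for _ in range(5):
            if prefers_delayed(DELAYS_DAYS[index]):
                index = min(index + step, len(DELAYS_DAYS) - 1)  # lengthen delay
            else:
                index = max(index - step, 0)                     # shorten delay
            step = max(step // 2, 1)
        return 1.0 / DELAYS_DAYS[index]   # k = 1 / indifference delay
    ```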

  15. Comparison of Standard and Fast Charging Methods for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Petr Chlebis

    2014-01-01

    This paper describes a comparison of standard and fast charging methods used in the field of electric vehicles, together with a comparison of their efficiency in terms of electrical energy consumption. The comparison was performed on a three-phase buck converter designed for an EV fast charging station. The results were obtained by both mathematical and simulation methods. A laboratory model of the entire physical application, which will later be used to verify the simulation results, is currently under construction.

  16. Simple method for generating adjustable trains of picosecond electron bunches

    Directory of Open Access Journals (Sweden)

    P. Muggli

    2010-05-01

    A simple, passive method for producing an adjustable train of picosecond electron bunches is demonstrated. The key component of this method is an electron beam mask consisting of an array of parallel wires that selectively spoils the beam emittance. This mask is positioned in a high magnetic dispersion, low beta-function region of the beam line. The incoming electron beam striking the mask has a time/energy correlation that corresponds to a time/position correlation at the mask location. The mask pattern is transformed into a time pattern or train of bunches when the dispersion is brought back to zero downstream of the mask. Results are presented of a proof-of-principle experiment demonstrating this novel technique that was performed at the Brookhaven National Laboratory Accelerator Test Facility. This technique allows for easy tailoring of the bunch train for a particular application, including varying the bunch width and spacing, and enabling the generation of a trailing witness bunch.

  17. Adjusting for treatment switching in randomised controlled trials - A simulation study and a simplified two-stage method.

    Science.gov (United States)

    Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J

    2017-04-01

    Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs) - whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
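
    A rough sketch of one of the compared estimators, RPSFTM-style g-estimation, under simplifying assumptions: the counterfactual "treatment-free" time is U(psi) = T_off + exp(psi) * T_on, and the causal parameter is the psi at which the randomised arms' counterfactual survival no longer differs. Re-censoring, which a real analysis requires, is omitted, and the column names are invented:

    ```python
    import numpy as np
    from lifelines.statistics import logrank_test

    def counterfactual_time(t_off, t_on, psi):
        # Time spent on treatment is shrunk/stretched by exp(psi).
        return t_off + np.exp(psi) * t_on

    def g_estimate_psi(df, psi_grid=np.linspace(-1.0, 1.0, 201)):
        """Return the psi whose logrank statistic comparing arms is smallest.

        Assumed df columns: t_off / t_on = time off / on the experimental
        treatment, event = 1 if the event occurred, arm = 0 control / 1 active.
        """
        best_psi, best_stat = None, np.inf
        for psi in psi_grid:
            u = counterfactual_time(df["t_off"].values, df["t_on"].values, psi)
            res = logrank_test(
                u[df["arm"] == 0], u[df["arm"] == 1],
                event_observed_A=df.loc[df["arm"] == 0, "event"],
                event_observed_B=df.loc[df["arm"] == 1, "event"])
            if res.test_statistic < best_stat:
                best_psi, best_stat = psi, res.test_statistic
        return best_psi
    ```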

  18. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    Science.gov (United States)

    Saxena, N. K.

    1974-01-01

    For a simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique is modified and used successfully. In this semi-iterative technique, known as the conjugate gradient (CG) method, the original observation equations are used, and thus the explicit formation of normal equations is avoided, saving huge amounts of computer storage space in the case of triangulation systems. This method is suitable even for very poorly conditioned systems, where a solution is obtained only after more iterations. A detailed study of the CG method for application to large geodetic triangulation systems was carried out, also considering constraint equations together with observation equations. It was programmed and tested on systems as small as two unknowns and three equations up to those as large as 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
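
    The key point, CG applied to the observation equations without forming the normal matrix, corresponds to what is now usually called CGLS: only products with A and its transpose are needed, which is the storage saving described. A minimal NumPy sketch:

    ```python
    import numpy as np

    def cgls(A, b, n_iter=100, tol=1e-10):
        """Minimise ||A x - b|| without explicitly forming A^T A."""
        x = np.zeros(A.shape[1])
        r = b - A @ x              # residual in observation space
        s = A.T @ r                # gradient of the least-squares objective
        p = s.copy()
        gamma = s @ s
        for _ in range(n_iter):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = A.T @ r
            gamma_new = s @ s
            if np.sqrt(gamma_new) < tol:
                break
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x
    ```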

  19. Comparison of Social Adjustment in Blind Children and Normal in Primary School in Mashhad

    Directory of Open Access Journals (Sweden)

    AA ModdaresMoghaddam

    2014-05-01

    Methods: This is a descriptive, analytical study in which 270 blind and sighted students of primary schools in Mashhad participated in the academic year 2012-13. The blind students were chosen by census and the sighted students through stratified random sampling from mainstream schools. For data collection, a standard social adjustment questionnaire was used. It was developed in America in 1974 by Lambert, Windmiller, Cole and Figueroa, translated in 1992 by Dr. Shahny Yeylagh for use with children of 7 to 13 years, and administered to 1500 boy and girl students of the first to fifth grade in elementary schools of Ahvaz. The test consists of 11 subscales, 38 sub-categories and 260 items. For data analysis, descriptive and inferential statistics such as the t-test were used (P < 0.05). Results: The mean scores of social adjustment showed no statistically significant difference between the sighted and the blind students (p=0.8). The difference in mean social adjustment between blind girls and boys was also not statistically significant (p=0.1), but incompatibility was found more often in boys than in girls. Conclusion: Regarding the results, social incompatibility was higher in the blind girl students than in the sighted girl students. This incompatibility was also higher in boys than in girls, thereby requiring scientific and coherent planning for them.

  20. Improved methods for the mathematically controlled comparison of biochemical systems

    Directory of Open Access Journals (Sweden)

    Schwacke John H

    2004-06-01

    The method of mathematically controlled comparison provides a structured approach for the comparison of alternative biochemical pathways with respect to selected functional effectiveness measures. Under this approach, alternative implementations of a biochemical pathway are modeled mathematically, forced to be equivalent through the application of selected constraints, and compared with respect to selected functional effectiveness measures. While the method has been applied successfully in a variety of studies, we offer recommendations for improvements to the method that (1) relax requirements for definition of constraints sufficient to remove all degrees of freedom in forming the equivalent alternative, (2) facilitate generalization of the results, thus avoiding the need to condition those findings on the selected constraints, and (3) provide additional insights into the effect of selected constraints on the functional effectiveness measures. We present improvements to the method and related statistical models, apply the method to a previously conducted comparison of network regulation in the immune system, and compare our results to those previously reported.

  1. Resonant frequency detection and adjustment method for a capacitive transducer with differential transformer bridge

    Energy Technology Data Exchange (ETDEWEB)

    Hu, M.; Bai, Y. Z., E-mail: abai@mail.hust.edu.cn; Zhou, Z. B., E-mail: zhouzb@mail.hust.edu.cn; Li, Z. X.; Luo, J. [MOE Key Laboratory of Fundamental Physical Quantities Measurement, School of Physics, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2014-05-15

    The capacitive transducer with differential transformer bridge is widely used in ultra-sensitive space accelerometers due to its simple structure and high resolution. In this paper, the front-end electronics of an inductive-capacitive resonant bridge transducer is analyzed. The analysis shows that the performance of this transducer depends on the AC pumping frequency operating at the resonance point of the inductive-capacitive bridge. The effect of possible mismatch between the AC pumping frequency and the actual resonant frequency is discussed, and the theoretical analysis indicates that the output voltage noise of the front-end electronics will deteriorate by a factor of about 3 due to either a 5% variation of the AC pumping frequency or a 10% variation of the tuning capacitance. A pre-scanning method to determine the actual resonant frequency is proposed, followed by adjustment of the operating frequency or a change of the tuning capacitance in order to maintain the expected high resolution. An experiment verifying the mismatch effect and the adjustment method is provided.

  2. Resonant frequency detection and adjustment method for a capacitive transducer with differential transformer bridge

    International Nuclear Information System (INIS)

    Hu, M.; Bai, Y. Z.; Zhou, Z. B.; Li, Z. X.; Luo, J.

    2014-01-01

    The capacitive transducer with differential transformer bridge is widely used in ultra-sensitive space accelerometers due to its simple structure and high resolution. In this paper, the front-end electronics of an inductive-capacitive resonant bridge transducer is analyzed. The analysis shows that the performance of this transducer depends on the AC pumping frequency operating at the resonance point of the inductive-capacitive bridge. The effect of possible mismatch between the AC pumping frequency and the actual resonant frequency is discussed, and the theoretical analysis indicates that the output voltage noise of the front-end electronics will deteriorate by a factor of about 3 due to either a 5% variation of the AC pumping frequency or a 10% variation of the tuning capacitance. A pre-scanning method to determine the actual resonant frequency is proposed, followed by adjustment of the operating frequency or a change of the tuning capacitance in order to maintain the expected high resolution. An experiment verifying the mismatch effect and the adjustment method is provided.

  3. Introducing conjoint analysis method into delayed lotteries studies: its validity and time stability are higher than in adjusting.

    Science.gov (United States)

    Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław

    2015-01-01

    Delayed lotteries are much more common in everyday life than are pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which is hypothetically more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. Despite the exploratory character of the reported studies, we suggest that future research on delayed lotteries should be cross-validated using both methods.

  4. Introducing conjoint analysis method into delayed lotteries studies: Its validity and time stability are higher than in adjusting

    Directory of Open Access Journals (Sweden)

    Michal eBialek

    2015-01-01

    Delayed lotteries are much more common in everyday life than are pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which is hypothetically more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. Despite the exploratory character of the reported studies, we suggest that future research on delayed lotteries should be cross-validated using both methods.

  5. Meta-analysis: adjusted indirect comparison of drug-eluting bead transarterial chemoembolization versus {sup 90}Y-radioembolization for hepatocellular carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Ludwig, Johannes M.; Xing, Minzhi [Yale School of Medicine, Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Zhang, Di [University of Pittsburgh Graduate School of Public Health, Department of Biostatistics, Pittsburgh, PA (United States); Kim, Hyun S. [Yale School of Medicine, Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Yale School of Medicine, Yale Cancer Center, New Haven, CT (United States)

    2017-05-15

    To investigate comparative effectiveness of drug-eluting bead transarterial chemoembolization (DEB-TACE) versus Yttrium-90 (90Y)-radioembolization for hepatocellular carcinoma (HCC). Studies comparing conventional (c)TACE versus 90Y-radioembolization or DEB-TACE for HCC treatment were identified using PubMed/Medline, Embase, and Cochrane databases. The adjusted indirect meta-analytic method for effectiveness comparison of DEB-TACE versus 90Y-radioembolization was used. Wilcoxon rank-sum test was used to compare baseline characteristics. A priori defined sensitivity analysis of stratified study subgroups was performed for primary outcome analyses. Publication bias was tested by Egger's and Begg's tests. Fourteen studies comparing DEB-TACE or 90Y-radioembolization with cTACE were included. Analysis revealed a 1-year overall survival benefit for DEB-TACE over 90Y-radioembolization (79% vs. 54.8%; OR: 0.57; 95% CI: 0.355-0.915; p = 0.02; I-squared: 0%; p > 0.5), but not for the 2-year (61% vs. 34%; OR: 0.65; 95% CI: 0.294-1.437; p = 0.29) and 3-year survival (56.4% vs. 20.9%; OR: 0.713; 95% CI: 0.21-2.548; p = 0.62). There was significant heterogeneity in the 2- and 3-year survival analyses. The pooled median overall survival was longer for DEB-TACE (22.6 vs. 14.7 months). There was no significant difference in tumour response rate. DEB-TACE and 90Y-radioembolization are efficacious treatments for patients suffering from HCC; DEB-TACE demonstrated a survival benefit at 1 year compared to 90Y-radioembolization, but direct comparison is warranted for further evaluation. (orig.)

  6. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to decrease the number of iterations when solving the normal equations. A brand new bundle adjustment workflow is developed to exploit GPU parallel computing. Our method avoids the storage and inversion of the large normal matrix and computes the normal matrix in real time. The proposed method not only largely decreases the memory requirement of the normal matrix but also largely improves the efficiency of bundle adjustment, while achieving the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
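
    A minimal sketch of the preconditioned conjugate gradient idea for the normal equations N x = g, here with a simple Jacobi (diagonal) preconditioner; the paper's actual preconditioner and inexact Newton outer loop are not specified in the abstract. N may be a dense array or a scipy.sparse matrix; only matrix-vector products and the diagonal of N are needed, never an explicit inverse:

    ```python
    import numpy as np

    def jacobi_pcg(N, g, n_iter=200, tol=1e-8):
        """Solve N x = g by conjugate gradients with a Jacobi preconditioner."""
        x = np.zeros(N.shape[0])
        m_inv = 1.0 / N.diagonal()     # Jacobi preconditioner M^{-1}
        r = g - N @ x
        z = m_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(n_iter):
            q = N @ p
            alpha = rz / (p @ q)
            x += alpha * p
            r -= alpha * q
            if np.linalg.norm(r) < tol:
                break
            z = m_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x
    ```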

  7. INTER LABORATORY COMBAT HELMET BLUNT IMPACT TEST METHOD COMPARISON

    Science.gov (United States)

    2018-03-26

    Final report by Tony J. Kayhart, Charles A. Hewitt, and Jonathan Cyganik, March 2018. Impact data were collected and processed in accordance with SAE standard J211-1, Instrumentation for Impact Test.

  8. Crop Condition Assessment with Adjusted NDVI Using the Uncropped Arable Land Ratio

    Directory of Open Access Journals (Sweden)

    Miao Zhang

    2014-06-01

    Crop condition assessment in the early growing stage is essential for crop monitoring and crop yield prediction. A normalized difference vegetation index (NDVI)-based method is employed to evaluate crop condition by inter-annual comparisons of both spatial variability (using NDVI images) and seasonal dynamics (based on crop condition profiles). Since this type of method will generate false information if there are changes in crop rotation, cropping area or crop phenology, information on cropped/uncropped arable land is integrated to improve the accuracy of crop condition monitoring. The study proposes a new method to retrieve adjusted NDVI for cropped arable land during the growing season of winter crops by integrating 16-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data at 250-m resolution with a cropped and uncropped arable land map derived from multi-temporal China Environmental Satellite (Huan Jing Satellite, HJ-1) charge-coupled device (CCD) images at 30-m resolution. Using the map of cropped and uncropped arable land, a pixel-based uncropped arable land ratio (UALR) at 250-m resolution was generated. Next, the UALR-adjusted NDVI was produced by assuming that the MODIS reflectance value for each pixel is a linear mixed signal composed of the proportional reflectance of cropped and uncropped arable land. When UALR-adjusted NDVI data are used for crop condition assessment, results are expected to be more accurate, because: (i) pixels with only uncropped arable land are not included in the assessment; and (ii) the adjusted NDVI corrects for interannual variation in cropping area. On the provincial level, crop growing profiles based on the two kinds of NDVI data illustrate the difference between the regular and the adjusted NDVI, with the difference depending on the total area of uncropped arable land in the region. The results suggested that the proposed method can be used to improve the assessment of crop condition.
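
    The linear-mixing assumption behind the UALR adjustment can be written in a few lines: each mixed reflectance is (1 - UALR) times the cropped-land reflectance plus UALR times the uncropped-land reflectance, so the cropped-land reflectance is recovered by inverting that relation before computing NDVI. In this sketch the per-pixel background reflectance of uncropped arable land is an assumed input:

    ```python
    import numpy as np

    def adjusted_ndvi(red, nir, ualr, red_uncropped, nir_uncropped):
        """UALR = uncropped arable land ratio per pixel, in [0, 1)."""
        crop_frac = 1.0 - ualr
        red_crop = (red - ualr * red_uncropped) / crop_frac   # unmix red band
        nir_crop = (nir - ualr * nir_uncropped) / crop_frac   # unmix NIR band
        return (nir_crop - red_crop) / (nir_crop + red_crop)  # NDVI of crops
    ```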

  9. Adjusted State Teacher Salaries and the Decision to Teach

    OpenAIRE

    Rickman, Dan S.; Wang, Hongbo; Winters, John V.

    2015-01-01

    Using the 3-year sample of the American Community Survey (ACS) for 2009 to 2011, we compute public school teacher salaries for comparison across U.S. states. Teacher salaries are adjusted for state differences in teacher characteristics, cost of living, household amenity attractiveness and federal tax rates. Salaries of non-teaching college graduates, defined as those with occupations outside of education, are used to adjust for state household amenity attractiveness. We then find that state ...

  10. Quantifying the indirect impacts of climate on agriculture: an inter-method comparison

    Science.gov (United States)

    Calvin, Kate; Fisher-Vanden, Karen

    2017-11-01

    Climate change and increases in CO2 concentration affect the productivity of land, with implications for land use, land cover, and agricultural production. Much of the literature on the effect of climate on agriculture has focused on linking projections of changes in climate to process-based or statistical crop models. However, the changes in productivity have broader economic implications that cannot be quantified in crop models alone. How important are these socio-economic feedbacks to a comprehensive assessment of the impacts of climate change on agriculture? In this paper, we attempt to measure the importance of these interaction effects through an inter-method comparison between process models, statistical models, and integrated assessment model (IAMs). We find the impacts on crop yields vary widely between these three modeling approaches. Yield impacts generated by the IAMs are 20%-40% higher than the yield impacts generated by process-based or statistical crop models, with indirect climate effects adjusting yields by between -12% and +15% (e.g. input substitution and crop switching). The remaining effects are due to technological change.

  11. Autogenic training as a therapy for adjustment disorder in adults

    Directory of Open Access Journals (Sweden)

    Jojić Boris R.

    2005-01-01

    Introduction. Autogenic training is a widely recognised psychotherapy technique. The British School of Autogenic Training cites a large list of disorders, states, and changes where autogenic training may prove to be of help. We wanted to explore the application of autogenic training as a therapy for adjustment disorder in adults. Our sample consisted of a homogeneous group of 35 individuals, with an average age of 39.3±1.6 years, who were diagnosed with adjustment disorder (F43.2) in accordance with ICD-10 criteria. Aim. The aim of our study was to research the effectiveness of autogenic training as a therapy for adjustment disorder in adults by checking the influence of autogenic training on the biophysical and biochemical indicators of adjustment disorder. Method. We measured the indicators of adjustment disorder and their changes in three phases: before the beginning, immediately after the beginning, and six months after the completion of a practical course in autogenic training. We measured systolic and diastolic arterial blood pressure and brachial pulse rate, as well as the levels of cortisol in plasma, of cholesterol in blood, and of glucose. During that period, autogenic training functioned as the sole therapy. Results. The study confirmed our preliminary assumptions. The measurements we performed demonstrated that arterial blood pressure, pulse rate, and concentrations of cholesterol and cortisol, after the application of autogenic training among the subjects suffering from adjustment disorder, were lower in comparison to the initial values. These values remained lower even six months after the completion of the practical course in autogenic training. Conclusion. Autogenic training significantly decreases the values of physiological indicators of adjustment disorder, diminishes the effects of stress in an individual, and helps adults to cope with stress, facilitating their recuperation.

  12. Comparison of genetic algorithms with conjugate gradient methods

    Science.gov (United States)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.

  13. Intra-operative adjustment of standard planes in C-arm CT image data.

    Science.gov (United States)

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest but the mobility of the C-arm causes the need for a time-consuming manual adjustment. In this article, we present an automatic plane adjustment at the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) leads also to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (standard setting of device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we developed the first steps for a fully automatic assistance system for the assessment of C-arm CT images.

  14. The method and program system CABEI for adjusting consistency between natural element and its isotopes data

    Energy Technology Data Exchange (ETDEWEB)

    Tingjin, Liu; Zhengjun, Sun [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    To meet the requirements of nuclear engineering, especially for nuclear fusion reactors, the data in the major evaluated libraries are now given not only for natural elements but also for their isotopes. Inconsistency between element and isotope data is one of the main problems in present evaluated neutron libraries. The formulas for adjusting the data to satisfy simultaneously the two kinds of consistency relationships were derived by means of the least squares method, and the program system CABEI was developed. The program was tested by calculating the Fe data in CENDL-2.1. The results show that the adjusted values satisfy the two kinds of consistency relationships.

  15. A Water Hammer Protection Method for Mine Drainage System Based on Velocity Adjustment of Hydraulic Control Valve

    Directory of Open Access Journals (Sweden)

    Yanfei Kou

    2016-01-01

    Water hammer analysis is a fundamental part of the pipeline system design process for water distribution networks. The main characteristics of a mine drainage system are limited space and the high cost of changing equipment and pipelines. In order to solve the problem of protecting a mine drainage system against valve-closing water hammer, a water hammer protection method based on velocity adjustment of a hydraulic control valve (HCV) is proposed in this paper. The mathematical model of water hammer fluctuations is established based on the characteristic line method. Then, boundary conditions for water hammer control in a mine drainage system are determined and a simplified model is established. The optimal adjustment strategy is solved from the mathematical model of multistage valve closing. Taking a mine drainage system as an example, a comparison between simulations and experiments shows that the proposed method and the optimized valve-closing strategy are effective.
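
    The characteristic line method (method of characteristics, MOC) on which the model is built can be summarized in a few lines. The sketch below shows one interior-node update step for a single pipe; boundary handling, including the valve-closing law where the adjustment strategy enters, is only indicated in comments, and the parameters are assumed inputs:

    ```python
    import numpy as np

    def moc_step(H, Q, a, g, area, diam, f, dx):
        """Advance interior nodes by one time step dt = dx/a.

        H = head [m], Q = flow [m^3/s]; a = wave speed, f = friction factor.
        """
        B = a / (g * area)                      # characteristic impedance
        R = f * dx / (2 * g * diam * area**2)   # friction term
        Hn, Qn = H.copy(), Q.copy()
        for i in range(1, len(H) - 1):
            cp = H[i-1] + B*Q[i-1] - R*Q[i-1]*abs(Q[i-1])  # C+ from node i-1
            cm = H[i+1] - B*Q[i+1] + R*Q[i+1]*abs(Q[i+1])  # C- from node i+1
            Hn[i] = 0.5 * (cp + cm)
            Qn[i] = (cp - cm) / (2 * B)
        # Boundary nodes (reservoir upstream, HCV with a prescribed multistage
        # closing law downstream) would be handled separately.
        return Hn, Qn
    ```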

  16. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  17. Set up of a method for the adjustment of resonance parameters on integral experiments

    International Nuclear Information System (INIS)

    Blaise, P.

    1996-01-01

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions. The author sets up a method for the adjustment of resonance parameters taking into account the self-shielding effects and restricting the cross section deconvolution problem to a limited energy region. (N.T.)

  18. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.

  19. Comparison of different methods for thoron progeny measurement

    International Nuclear Information System (INIS)

    Bi Lei; Zhu Li; Shang Bing; Cui Hongxing; Zhang Qingzhao

    2009-01-01

    Four popular methods for thoron progeny measurement are discussed, covering the detector, measurement principle, preconditions, calculation, and the advantages and disadvantages of each. Comparison experiments were made in a mine and in houses with high background levels in Yunnan Province. Since indoor thoron progeny concentrations vary with time strongly and irregularly, the α-track method is recommended for environmental detection and assessment in the area of radiation protection. (authors)

  20. Research on the phase adjustment method for dispersion interferometer on HL-2A tokamak

    Science.gov (United States)

    Tongyu, WU; Wei, ZHANG; Haoxi, WANG; Yan, ZHOU; Zejie, YIN

    2018-06-01

    A synchronous demodulation system is proposed and deployed for the CO2 dispersion interferometer on HL-2A, which aims at high plasma density measurement and real-time feedback control. In order to ensure that the demodulator and the interferometer signal are synchronous in phase, a phase adjustment (PA) method has been developed for the demodulation system. The method takes advantage of the field-programmable gate array's parallel and pipelined processing capabilities to carry out high-performance, low-latency PA. Experimental results show that the PA method is crucial to the synchronous demodulation system and reliably follows fast changes in the electron density. The system can measure the line-integrated density with a high precision of 2.0 × 10¹⁸ m⁻².
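
    Conceptually, synchronous demodulation with phase adjustment can be modelled offline as a quadrature (lock-in) demodulation whose reference phase offset is tuned until demodulator and signal are in phase. The NumPy sketch below is a conceptual model under assumed parameters, not the FPGA implementation:

    ```python
    import numpy as np

    def demodulate(signal, fs, f_ref, phi0=0.0):
        """Return the demodulated phase for a given reference offset phi0."""
        t = np.arange(len(signal)) / fs
        i = signal * np.cos(2 * np.pi * f_ref * t + phi0)   # in-phase mix
        q = signal * np.sin(2 * np.pi * f_ref * t + phi0)   # quadrature mix
        # Crude low-pass: a block average stands in for the FPGA's filters.
        return np.arctan2(q.mean(), i.mean())

    def adjust_phase(signal, fs, f_ref,
                     offsets=np.linspace(-np.pi, np.pi, 360)):
        """Pick the reference phase offset that zeroes the demodulated phase."""
        errs = [abs(demodulate(signal, fs, f_ref, p)) for p in offsets]
        return offsets[int(np.argmin(errs))]
    ```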

  1. Comparison of infusion pumps calibration methods

    Science.gov (United States)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of these flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered as a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The obtained results were directly related to the used calibration method and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.

  2. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    Science.gov (United States)

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
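
    The modelling strategy described above might look roughly as follows with statsmodels; the data layout (one row per subject-visit with columns id, y, x, lagged x_prev and y_prev, and a time-varying covariate l) and the Gaussian outcome are assumptions for illustration:

    ```python
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def fit_scmm(df):
        # Propensity for current exposure given the past, used as an
        # additional adjuster for robustness against misspecification.
        ps_model = smf.logit("x ~ x_prev + y_prev + l", data=df).fit(disp=False)
        df = df.assign(ps=ps_model.predict(df))
        # Sequential conditional mean model fitted by GEE; an independence
        # working correlation with robust SEs is a standard choice here.
        scmm = sm.GEE.from_formula("y ~ x + x_prev + y_prev + l + ps",
                                   groups="id", data=df,
                                   cov_struct=sm.cov_struct.Independence(),
                                   family=sm.families.Gaussian())
        return scmm.fit()
    ```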

  3. Comparison of estimation methods for fitting weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
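
    A quick illustration of a maximum likelihood Weibull fit with a goodness-of-fit check; the diameter data below are invented, and the location parameter is fixed at zero for a two-parameter Weibull:

    ```python
    import numpy as np
    from scipy import stats

    diameters = np.array([12.1, 15.3, 18.4, 21.0, 24.7, 27.2, 30.5, 33.8])  # cm

    # Maximum likelihood estimates of shape c and scale b (location fixed at 0)
    c_mle, loc, b_mle = stats.weibull_min.fit(diameters, floc=0)

    # Goodness of fit can then be compared across estimators, e.g. via a
    # Kolmogorov-Smirnov statistic against the fitted distribution.
    ks = stats.kstest(diameters, "weibull_min", args=(c_mle, loc, b_mle))
    print(c_mle, b_mle, ks.statistic)
    ```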

  4. Accuracy of two face-bow/semi-adjustable articulator systems in transferring the maxillary occlusal cant

    Directory of Open Access Journals (Sweden)

    Nazia Nazir

    2012-01-01

    Context: The precision of an arbitrary face-bow in accurately transferring the orientation of the maxillary cast to the articulator has been questioned, because the maxillary cast is mounted in relation to arbitrary measurements and anatomic landmarks that vary among individuals. Aim: This study was intended to evaluate the sagittal inclination of mounted maxillary casts on two semi-adjustable articulator/face-bow systems in comparison to the occlusal cant on lateral cephalograms. Materials and Methods: Maxillary casts were mounted on the Hanau and Girrbach semi-adjustable articulators following face-bow transfer with their respective face-bows. The sagittal inclination of these casts was measured in relation to the fixed horizontal reference plane using physical measurements. Occlusal cant was measured on lateral cephalograms. SPSS software (version 11.0, Chicago, IL, USA) was used for statistical analysis. Repeated measures analysis of variance and Tukey's tests were used to evaluate the results (P < 0.05). Results: Comparison of the occlusal cant on the articulators and cephalogram revealed statistically significant differences. The occlusal plane was steeper on the Girrbach Artex articulator in comparison to the Hanau articulator. Conclusion: Within the limitations of this study, it was found that the sagittal inclination of the mounted maxillary cast achieved with the Hanau articulator was closer to the cephalometric occlusal cant than that of the Girrbach articulator. Of the two articulator and face-bow systems, the steepness of the sagittal inclination was greater on the Girrbach semi-adjustable articulator. Different face-bow/articulator systems could result in different orientations of the maxillary cast, resulting in variation in stability, cuspal inclines and cuspal heights.

  5. Method and apparatus for rapid adjustment of process gas inventory in gaseous diffusion cascades

    International Nuclear Information System (INIS)

    Dyer, R.H.; Fowler, A.H.; Vanstrum, P.R.

    1977-01-01

    The invention relates to an improved method and system for making relatively large and rapid adjustments in the process gas inventory of an electrically powered gaseous diffusion cascade in order to accommodate scheduled changes in the electrical power available for cascade operation. In the preferred form of the invention, the cascade is readied for a decrease in electrical input by simultaneously withdrawing substreams of the cascade B stream into respective process-gas-freezing and storage zones while decreasing the datum-pressure inputs to the positioning systems for the cascade control valves in proportion to the weight of process gas so removed. Consequently, the control valve positions are substantially unchanged by the reduction in inventory, and there is minimal disturbance of the cascade isotopic gradient. The cascade is readied for restoration of the power cut by simultaneously evaporating the solids in the freezing zones to regenerate the process gas substreams and introducing them to the cascade A stream while increasing the aforementioned datum pressure inputs in proportion to the weight of process gas so returned. In the preferred form of the system for accomplishing these operations, heat exchangers are provided for freezing, storing, and evaporating the various substreams. Preferably, the heat exchangers are connected to use existing cascade auxiliary systems as a heat sink. A common control is employed to adjust and coordinate the necessary process gas transfers and datum pressure adjustments.

  6. Asymmetric adjustment

    NARCIS (Netherlands)

    2010-01-01

    A method of adjusting a signal processing parameter for a first hearing aid and a second hearing aid forming parts of a binaural hearing aid system to be worn by a user is provided. The binaural hearing aid system comprises a user specific model representing a desired asymmetry between a first ear

  7. Approach to Multi-Criteria Group Decision-Making Problems Based on the Best-Worst-Method and ELECTRE Method

    Directory of Open Access Journals (Sweden)

    Xinshang You

    2016-09-01

    This paper proposes a novel approach to multi-criteria group decision-making problems. Pairwise comparisons are given based on the best-worst method (BWM), which reduces the number of comparisons required, and our comparison results are determined with both positive and negative aspects. In order to deal with the decision matrices effectively, we consider the elimination and choice translating reality (ELECTRE) III method under the intuitionistic multiplicative preference relation environment. The ELECTRE III method is designed as a double-automatic system: under a certain limitation, and without asking the decision-makers to reevaluate the alternatives, the system can adjust the special elements that have the most influence on the group's satisfaction degree. Moreover, the proposed method is suitable for both intuitionistic multiplicative preference relations and interval-valued fuzzy preference relations through a transformation formula. An illustrative example demonstrates the rationality and applicability of the novel method.
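
    As a rough illustration of the BWM weight-derivation step (the ELECTRE III outranking stage is not shown), the sketch below solves the linear BWM program, minimising xi subject to |w_best - a_Bj w_j| <= xi, |w_j - a_jW w_worst| <= xi, weights summing to one; the comparison vectors in the example call are invented:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def bwm_weights(a_bo, a_ow, best, worst):
        n = len(a_bo)                 # criteria count; variables = [w_1..w_n, xi]
        A_ub, b_ub = [], []
        for j in range(n):
            for sign in (+1, -1):
                row = np.zeros(n + 1)      # sign*(w_best - a_Bj w_j) - xi <= 0
                row[best] += sign
                row[j] -= sign * a_bo[j]
                row[-1] = -1.0
                A_ub.append(row); b_ub.append(0.0)
                row2 = np.zeros(n + 1)     # sign*(w_j - a_jW w_worst) - xi <= 0
                row2[j] += sign
                row2[worst] -= sign * a_ow[j]
                row2[-1] = -1.0
                A_ub.append(row2); b_ub.append(0.0)
        A_eq = [np.append(np.ones(n), 0.0)]   # weights sum to one
        c = np.append(np.zeros(n), 1.0)       # minimise xi
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1))
        return res.x[:n], res.x[-1]

    # w, xi = bwm_weights(a_bo=[1, 2, 4, 8], a_ow=[8, 4, 2, 1], best=0, worst=3)
    ```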

  8. Comparisons of ratchetting analysis methods using RCC-M, RCC-MR and ASME codes

    International Nuclear Information System (INIS)

    Yang Yu; Cabrillat, M.T.

    2005-01-01

    The present paper compares the simplified ratcheting analysis methods used in RCC-M, RCC-MR and ASME with some examples. Firstly, the methods in RCC-M and the efficiency diagram in RCC-MR are compared. A special approach is used to describe these two methods with curves in one coordinate system, and their different degrees of conservatism are demonstrated. The RCC-M method is also interpreted in terms of SR (secondary ratio) and v (efficiency index), which are used in RCC-MR. Hence, we can easily compare the two methods by defining SR as the abscissa and v as the ordinate and plotting the two curves together. Secondly, the efficiency curve in RCC-MR and the methods in ASME-NH Appendix T are compared for the case of significant creep. Finally, two practical evaluations are performed to illustrate the comparisons of the aforementioned methods. (authors)

  9. Covariate-adjusted measures of discrimination for survival data

    DEFF Research Database (Denmark)

    White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth

    2015-01-01

    by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were ...

  10. The Comparison and Relationship between Religious Orientation and Practical Commitment to Religious Beliefs with Marital Adjustment in Seminary Scholars and University Students

    Directory of Open Access Journals (Sweden)

    رویا رسولی

    2015-04-01

    Spirituality and faith are powerful aspects of human experience, so it is important to consider the relation between faith, beliefs, and marriage. The purpose of this study was to compare the relationship between religious orientation and practical commitment to religious beliefs with marital adjustment among seminary scholars and Yazd University students. The research sample consisted of 200 subjects (50 student couples and 50 seminary scholar couples) recruited through convenience sampling from Yazd University and the seminary. Research instruments included: (1) the Religious Orientation Scale, (2) the Test of Practical Commitment to Religious Beliefs, and (3) the Dyadic Adjustment Scale. Correlation analyses showed a relationship between religious orientation and marital adjustment: marital adjustment correlated positively with religiosity and negatively with unconstructed religiosity. There was also a relationship between practical commitment to religious beliefs and marital adjustment in both groups, and this relationship was stronger than that between religious orientation and marital adjustment. The results of independent t-test analysis showed significant differences between university students and seminary scholars in terms of religious orientation, practical commitment to religious beliefs, and marital adjustment; practical commitment to religious beliefs, marital adjustment, and religious orientation were all higher among seminary scholars than among students. Marital adjustment was higher in seminary scholars than in students, which may reflect greater marital satisfaction among religious persons. We conclude that faith beliefs affect marital satisfaction, marital adjustment, conflict resolution, and forgiveness. Negative beliefs about divorce and the belief that God supports marriage may explain the relationship between commitment to religious beliefs and

  11. A comparison of interface tracking methods

    International Nuclear Information System (INIS)

    Kothe, D.B.; Rider, W.J.

    1995-01-01

    In this paper we provide a direct comparison of several important algorithms designed to track fluid interfaces. In the process we propose improved criteria by which these methods are to be judged. We compare and contrast the behavior of the following interface tracking methods: high order monotone capturing schemes, level set methods, volume-of-fluid (VOF) methods, and particle-based (particle-in-cell, or PIC) methods. We compare these methods by first applying a set of standard test problems, then by applying a new set of enhanced problems designed to expose the limitations and weaknesses of each method. We find that the properties of these methods are not adequately assessed until they are tested with flows having spatial and temporal vorticity gradients. Our results indicate that the particle-based methods are easily the most accurate of those tested. Their practical use, however, is often hampered by their memory and CPU requirements. Particle-based methods employing particles only along interfaces also have difficulty dealing with gross topology changes. Full PIC methods, on the other hand, do not in general have topology restrictions. Following the particle-based methods are VOF volume tracking methods, which are reasonably accurate, physically based, robust, low in cost, and relatively easy to implement. Recent enhancements to the VOF methods using multidimensional interface reconstruction and improved advection provide excellent results on a wide range of test problems.

  12. Which is Easier, Adjusting to a Similar or to a Dissimilar Culture?

    DEFF Research Database (Denmark)

    Selmer, Jan

    2007-01-01

    The intuitively paradoxical research proposition that it could be as difficult for business expatriates to adjust to a similar as to a dissimilar host culture is tested in this exploratory study. Based on data from a mail survey, a comparison of American business expatriates in Canada and Germany respectively did not reveal any difference in their extent of adjustment. Besides a significant between-group difference in cultural distance, confirming that the American expatriates perceived Canada as more culturally similar to America than Germany, no significant inter-group differences were detected for general adjustment, interaction adjustment, work adjustment and psychological adjustment. Neither was there a difference in the time-related variable, time to proficiency. Although highly tentative, the suggestion that the degree of cultural similarity/dissimilarity may be irrelevant as to how easily ...

  13. Demography-adjusted tests of neutrality based on genome-wide SNP data

    KAUST Repository

    Rafajlović, Marina

    2014-08-01

    Tests of the neutral evolution hypothesis are usually built on the standard model, which assumes that mutations are neutral and the population size remains constant over time. However, it is unclear how such tests are affected if the last assumption is dropped. Here, we extend the unifying framework for tests based on the site frequency spectrum, introduced by Achaz and Ferretti, to populations of varying size. Key ingredients are the first two moments of the site frequency spectrum. We show how these moments can be computed analytically if a population has experienced two instantaneous size changes in the past. We apply our method to data from ten human populations gathered in the 1000 Genomes Project, estimate their demographies and define demography-adjusted versions of Tajima's D, Fay & Wu's H, and Zeng's E. Our results show that demography-adjusted test statistics facilitate the direct comparison between populations and that most of the differences among populations seen in the original unadjusted tests can be explained by their underlying demographies. Upon carrying out whole-genome screens for deviations from neutrality, we identify candidate regions of recent positive selection. We provide track files with values of the adjusted and unadjusted tests for upload to the UCSC genome browser.
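
    For reference, the unadjusted constant-size baseline of one such statistic can be computed directly from the site frequency spectrum; the demography-adjusted versions replace these constant-size moments with analytically computed ones. A minimal sketch of standard Tajima's D from an unfolded spectrum:

    ```python
    import numpy as np

    def tajimas_d(sfs):
        """sfs[i-1] = number of sites where the derived allele appears i times
        among n sampled sequences (unfolded spectrum, i = 1..n-1)."""
        sfs = np.asarray(sfs, dtype=float)
        n = len(sfs) + 1
        S = sfs.sum()                               # segregating sites
        if S < 2:
            return float("nan")
        i = np.arange(1, n)
        theta_pi = (i * (n - i) * sfs).sum() / (n * (n - 1) / 2)
        a1 = (1.0 / i).sum()
        a2 = (1.0 / i**2).sum()
        theta_w = S / a1                            # Watterson's estimator
        b1 = (n + 1) / (3.0 * (n - 1))
        b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
        c1 = b1 - 1.0 / a1
        c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
        var = (c1 / a1) * S + (c2 / (a1**2 + a2)) * S * (S - 1)
        return (theta_pi - theta_w) / np.sqrt(var)
    ```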

  14. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    Science.gov (United States)

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.
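
    A minimal sketch of such a multilevel (random intercept) analysis, assuming a pandas DataFrame with hypothetical columns rating, age, education, self_rated_health and plan_id; significant consumer-level coefficients would flag candidate case-mix adjusters:

    ```python
    import statsmodels.formula.api as smf

    def fit_case_mix_model(df):
        # Random intercept per health plan captures between-plan variation;
        # fixed effects estimate how consumer characteristics shift ratings.
        model = smf.mixedlm("rating ~ age + education + self_rated_health",
                            data=df, groups=df["plan_id"])
        return model.fit()
    ```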

  15. Comparison of different methods for the solution of sets of linear equations

    International Nuclear Information System (INIS)

    Bilfinger, T.; Schmidt, F.

    1978-06-01

    The emergence of conjugate-gradient methods as novel general iterative methods for the solution of sets of linear equations with symmetric system matrices motivated this paper, in which these methods are compared with the conventional, differently accelerated Gauss-Seidel iteration. In addition, the direct Cholesky method was included in the comparison. The studies concerned mainly memory requirements, computing time, speed of convergence, and accuracy for system matrices of different condition, from which the sensitivity of the methods to the influence of truncation errors may also be recognized. (orig.)

  16. Using Anchoring Vignettes to Adjust Self-Reported Personality: A Comparison Between Countries

    Science.gov (United States)

    Weiss, Selina; Roberts, Richard D.

    2018-01-01

    Data from self-report tools cannot be readily compared between cultures due to culturally specific ways of using a response scale. As such, anchoring vignettes have been proposed as a suitable methodology for correcting for these differences. We developed anchoring vignettes for the Big Five Inventory-44 (BFI-44) to supplement its Likert-type response options. Based on two samples (Rwanda: n = 423; Philippines: n = 143), we evaluated the psychometric properties of the measure both before and after applying the anchoring vignette adjustment. Results show that adjusted scores had better measurement properties, including improved reliability and a more orthogonal correlational structure, relative to scores based on the original Likert scale. Correlations of the Big Five personality factors with life satisfaction were essentially unchanged after the vignette adjustment, while correlations with counterproductive work behavior were noticeably lower. Overall, these findings suggest that the use of the anchoring vignette methodology improves the cross-cultural comparability of self-reported personality, a finding of potential interest to the field of global workforce research and development as well as educational policymakers. PMID:29593621
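    The nonparametric recoding that underlies anchoring-vignette adjustment can be sketched as follows. This illustrates the general King-style recode under the assumption of consistently ordered vignette ratings; it is not the exact procedure of the study above:

        def vignette_adjusted_score(self_rating, vignette_ratings):
            """Nonparametric anchoring-vignette recode (after King et al. 2004).

            Returns a score on a 1 .. 2K+1 scale locating the self-rating
            relative to the respondent's own K ordered vignette ratings.
            Assumes the respondent rates the vignettes in a consistent order;
            order violations would need extra handling.
            """
            score = 1
            for z in sorted(vignette_ratings):
                if self_rating > z:
                    score += 2      # strictly above this vignette
                elif self_rating == z:
                    score += 1      # tied with this vignette
                    break
                else:
                    break
            return score

        # example: self-rating 4 against vignettes rated 2, 3, 5 -> score 5
        print(vignette_adjusted_score(4, [2, 3, 5]))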

  17. Using Anchoring Vignettes to Adjust Self-Reported Personality: A Comparison Between Countries

    Directory of Open Access Journals (Sweden)

    Selina Weiss

    2018-03-01

    Full Text Available Data from self-report tools cannot be readily compared between cultures due to culturally specific ways of using a response scale. As such, anchoring vignettes have been proposed as a suitable methodology for correcting for these differences. We developed anchoring vignettes for the Big Five Inventory-44 (BFI-44) to supplement its Likert-type response options. Based on two samples (Rwanda: n = 423; Philippines: n = 143), we evaluated the psychometric properties of the measure both before and after applying the anchoring vignette adjustment. Results show that adjusted scores had better measurement properties, including improved reliability and a more orthogonal correlational structure, relative to scores based on the original Likert scale. Correlations of the Big Five personality factors with life satisfaction were essentially unchanged after the vignette adjustment, while correlations with counterproductive work behavior were noticeably lower. Overall, these findings suggest that the use of the anchoring vignette methodology improves the cross-cultural comparability of self-reported personality, a finding of potential interest to the field of global workforce research and development as well as educational policymakers.

  18. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    Science.gov (United States)

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, arising either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed-form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed-form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.

  19. Adjustment Following Disability: Representative Case Studies.

    Science.gov (United States)

    Heinemann, Allen W.; Shontz, Franklin C.

    1984-01-01

    Examined adjustment following physical disability using the representative case method with two persons with quadriplegia. Results highlighted the importance of previously established coping styles as well as the role of the environment in adjustment. Willingness to mourn aided in later growth. (JAC)

  20. Comparison methods between methane and hydrogen combustion for useful transfer in furnaces

    International Nuclear Information System (INIS)

    Ghiea, V.V.

    2009-01-01

    The advantages and disadvantages of using hydrogen in industrial combustion are critically presented. The greenhouse effect due to natural water vapor in the atmosphere and to that produced by industrial hydrogen combustion is critically analyzed, together with the problems of gaseous fuels containing hydrogen as their largest component. A method is developed for comparing methane and hydrogen combustion with respect to pressure loss in the burner feed pipe. The ratio of radiative useful heat transfer characteristics, and of convective heat transfer coefficients from combustion gases in industrial furnaces and heat recuperators, is derived for hydrogen and methane combustion, establishing specific comparison methods. Using criterial equations specially processed for determining convective heat transfer, a generalized calculation formula is established. The proposed comparison methods are generally valid for different gaseous fuels. (author)

  1. An algebraic topological method for multimodal brain networks comparison

    Directory of Open Access Journals (Sweden)

    Tiago Simas

    2015-07-01

    Full Text Available Understanding brain connectivity is one of the most important issues in neuroscience. Nonetheless, connectivity data can reflect either functional relationships of brain activities or anatomical connections between brain areas. Although both representations should be related, this relationship is not straightforward. We have devised a powerful method that allows different operations between networks that share the same set of nodes, by embedding them in a common metric space and enforcing transitivity on the graph topology. Here, we apply this method to construct an aggregated network from a set of functional graphs, each one from a different subject. Once this aggregated functional network is constructed, we again use our method to compare it with the structural connectivity to identify particular brain regions that differ in the two modalities (anatomical and functional). Remarkably, these brain regions include functional areas that form part of the classical resting state networks. We conclude that our method, based on the comparison of the aggregated functional network, reveals some emerging features that could not be observed when the comparison is performed with the classical averaged functional network.

  2. Comparison on genomic predictions using GBLUP models and two single-step blending methods with different relationship matrices in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Christensen, Ole Fredslund; Madsen, Per

    2012-01-01

    Background: A single-step blending approach allows genomic prediction using information from genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 … The five methods were: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted …

  3. Comparison of Nested-PCR technique and culture method in ...

    African Journals Online (AJOL)

    USER

    2010-04-05

    The aim of the present study was to evaluate the diagnostic value of nested PCR in genitourinary … Based on the obtained results, the positivity rate of urine samples in this study was 5.0% using the culture and PCR methods and 2.5% for acid-fast staining.

  4. Kinematic synthesis of adjustable robotic mechanisms

    Science.gov (United States)

    Chuenchom, Thatchai

    1993-01-01

    Conventional hard automation, such as a linkage-based or a cam-driven system, provides high-speed capability and repeatability but not the flexibility required in many industrial applications. Conventional mechanisms, which are typically single-degree-of-freedom systems, are increasingly being replaced by multi-degree-of-freedom multi-actuator systems driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly due to a lack of methods and tools to design in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARMs), or 'programmable mechanisms', as a middle ground between high-speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms towards cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARMs, laying the theoretical foundation for the synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defect, and mechanical errors. An efficient mathematical scheme for

  5. Precision of GNSS instruments by static method comparing in real time

    Directory of Open Access Journals (Sweden)

    Slavomír Labant

    2009-09-01

    Full Text Available This paper describes a comparison of the measurement accuracy of two instruments from the firm Leica. One receives signals only from GPS satellites, while the other works with both GPS and GLONASS satellites. Measurements were carried out by the static RTK method with 2-minute observations. Measurement processing is separated into X, Y (position) and h (height). Adjustment of direct observations is used as the adjustment method.

  6. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, N.; Bockting, S.; Hiemstra, D.

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  7. Longitudinal Relationships between Sibling Behavioral Adjustment and Behavior Problems of Children with Developmental Disabilities

    Science.gov (United States)

    Hastings, Richard P.

    2007-01-01

    Siblings of children with developmental disabilities were assessed twice, 2 years apart (N = 75 at Time 1, N = 56 at Time 2). Behavioral adjustment of the siblings and their brother or sister with developmental disability was assessed. Comparisons of adjustment for siblings of children with autism, Down syndrome, and mixed etiology mental…

  8. Performance comparison of resistance-trained subjects by different methods of adjusting for body mass. DOI: http://dx.doi.org/10.5007/1980-0037.2012v14n3p313

    Directory of Open Access Journals (Sweden)

    Wladymir Külkamp

    2012-05-01

    Full Text Available The aim of this study was to compare the performance (1RM) of resistance-trained subjects, using different methods of adjusting for body mass (BM): ratio standard, theoretical allometric exponent (0.67), and specific allometric exponents. The study included 11 male and 11 female healthy non-athletes (mean age = 22 years) engaged in regular resistance training for at least 6 months. Bench press (BP), 45° leg press (LP) and arm curl (AC) exercises were performed, and the participants were ranked (in descending order) according to each method. The specific allometric exponents for each exercise were: for men – BP (0.73), LP (0.35), and AC (0.71); and for women – BP (1.22), LP (1.02), and AC (0.85). The Kruskal-Wallis test revealed no differences between the rankings. However, visual inspection indicated that the participants were often classified differently in relation to performance by the methods used. Furthermore, no adjusted strength score was equal to the absolute strength values (1RM). The results suggest that there is a range of values in which the differences between exponents do not reflect different rankings (below 0.07 points) and a range in which rankings can be fundamentally different (above 0.14 points). This may be important in the long-term selection of universally accepted allometric exponents, considering the range of values found in different studies. The standardization of exponents may allow the use of allometry as an additional tool in the prescription of resistance training.
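    All three adjustments amount to dividing the lifted load by body mass raised to some exponent; a minimal sketch with illustrative numbers (the 1RM and body mass values are made up):

        def adjusted_strength(one_rm_kg, body_mass_kg, exponent):
            """Allometric adjustment: exponent 1.0 is the ratio standard,
            0.67 the theoretical exponent, and e.g. 0.73 a specific exponent
            estimated for a given exercise and population."""
            return one_rm_kg / body_mass_kg ** exponent

        # illustrative comparison for one bench-press performance
        for name, b in [("ratio standard", 1.0),
                        ("theoretical", 0.67),
                        ("specific BP (men)", 0.73)]:
            print(f"{name:>18}: {adjusted_strength(100.0, 80.0, b):.2f}")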

  9. [Adjustment disorder and DSM-5: A review].

    Science.gov (United States)

    Appart, A; Lange, A-K; Sievert, I; Bihain, F; Tordeurs, D

    2017-02-01

    This paper exposes the complexity and discreet character of the adjustment disorder with reference to its clinical and scientific diagnosis. Even though the disorder occurs in frequent clinical circumstances after important life events, such as mobbing, burn-out, unemployment, divorce or separation, pregnancy denial, surgical operation or cancer, the adjustment disorder is often not considered in the diagnosis, since better-known disorders with similar symptoms prevail, such as major depression and anxiety disorder. Ten years ago, Bottéro had already noticed that the adjustment disorder diagnosis remained rather uncommon among the patients he was working with, while Langlois likened this disorder to an invisible diagnosis. In order to maximize the data collection, we reviewed articles from the following databases: the National Center for Biotechnology Information (NCBI - PubMed) for international articles and Cairn.info for French literature. Moreover, we targeted the following keywords on the search engine and used articles published from 1 February 1975 to 31 January 2015: "adjustment", "adjustment disorder" and the French translation "trouble de l'adaptation". One hundred and ninety-one articles matched our search criteria. However, after closer analysis, only 105 articles were selected as being of interest. Many articles were excluded since they related to non-psychiatric fields evoked by the term "adaptation". Indeed, the small number of articles found on the adjustment disorder points out the lack of existing literature on this topic in comparison to better-known disorders such as anxiety disorder (2661 articles) or major depression (5481 articles). This represents up to 50 times more articles than the number we found on adjustment disorder, and up to 20 times more for the eating disorder (1994 articles), although the prevalence is not significantly

  10. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

    Full Text Available IGARSS 2015, Milan, Italy, 26-31 July 2015. Proper comparison among methods using a confusion matrix. B.P. Salmon, W. Kleynhans, C.P. Schwegmann and J.C. Olivier, School of Engineering and ICT, University of Tasmania, Australia …

  11. Comparison of Sentinel-2A and Landsat-8 Nadir BRDF Adjusted Reflectance (NBAR) over Southern Africa

    Science.gov (United States)

    Li, J.; Roy, D. P.; Zhang, H.

    2016-12-01

    The Landsat satellites have been providing moderate resolution imagery of the Earth's surface for over 40 years, with continuity provided by the Landsat 8 and planned Landsat 9 missions. The European Space Agency Sentinel-2 satellite was successfully launched into a polar sun-synchronous orbit in 2015 and carries the Multi Spectral Instrument (MSI), which has Landsat-like bands and acquisition coverage. These new sensors acquire images at view angles up to ± 7.5° (Landsat) and ± 10.3° (Sentinel-2) from nadir, which results in small directional effects in the surface reflectance. When data from adjoining paths, or from long time series, are used, a model of the surface anisotropy is required to adjust observations to a uniform nadir view (primarily for visual consistency, vegetation monitoring, or detection of subtle surface changes). Recently a generalized approach was published that provides consistent Landsat view-angle corrections yielding nadir BRDF-adjusted reflectance (NBAR). Because the BRDF shapes of different terrestrial surfaces are sufficiently similar over the narrow 15° Landsat field of view, a fixed global set of MODIS BRDF spectral model parameters was shown to be adequate for Landsat NBAR derivation with little sensitivity to the land cover type, condition, or surface disturbance. This poster demonstrates the application of this methodology to Sentinel-2 data over a west-east transect across southern Africa. The reflectance differences between adjacent overlapping paths in the forward and backward scatter directions are quantified both before and after BRDF correction. Sentinel-2 and Landsat-8 reflectance and NBAR inter-comparison results considering different stages of cloud and saturation filtering, and filtering to reduce surface state differences caused by acquisition time differences, demonstrate the utility of the approach. The relevance and limitations of the corrections for providing consistent moderate resolution reflectance are discussed.
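    The c-factor style of NBAR correction reduces to a ratio of modeled BRDF values at nadir and at the observed view geometry. The sketch below assumes the RossThick/LiSparse kernel values have already been evaluated for both geometries (the kernel computation itself is omitted), and is an illustration of the general approach rather than the poster's exact processing chain:

        def nbar_c_factor(rho_obs, f_iso, f_vol, f_geo,
                          k_vol_obs, k_geo_obs, k_vol_nadir, k_geo_nadir):
            """c-factor nadir BRDF adjustment for one band.

            rho_obs            : observed surface reflectance
            f_iso/f_vol/f_geo  : fixed MODIS BRDF spectral model parameters
            k_*_obs, k_*_nadir : RossThick/LiSparse kernel values at the
                                 observed and nadir view geometries
            """
            brdf_obs = f_iso + f_vol * k_vol_obs + f_geo * k_geo_obs
            brdf_nadir = f_iso + f_vol * k_vol_nadir + f_geo * k_geo_nadir
            return rho_obs * (brdf_nadir / brdf_obs)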

  12. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
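    The aggregation step can be illustrated with a toy unilateral CVA calculation. Everything below is a stand-in: a generic mean-reverting short rate instead of calibrated Hull-White, a crude duration-style swap value proxy, and a flat default intensity; the regression only gestures at the Longstaff-Schwartz idea of avoiding nested "inner" scenarios:

        import numpy as np

        rng = np.random.default_rng(0)

        # toy setup: all parameters are hypothetical, not from the article
        n_paths, n_steps, T = 20_000, 40, 10.0
        dt = T / n_steps
        kappa, theta, sigma, r0 = 0.05, 0.03, 0.01, 0.02  # short-rate dynamics
        lam, recovery = 0.02, 0.4                         # flat default intensity

        # simulate mean-reverting short-rate paths
        r = np.full((n_paths, n_steps + 1), r0)
        for t in range(n_steps):
            dw = rng.standard_normal(n_paths) * np.sqrt(dt)
            r[:, t + 1] = r[:, t] + kappa * (theta - r[:, t]) * dt + sigma * dw

        disc = np.exp(-np.cumsum(r[:, 1:] * dt, axis=1))  # path-wise discounting
        mtm = (r[:, 1:] - theta) * 25.0                   # crude payer-swap proxy

        cva = 0.0
        for t in range(n_steps):
            # regress discounted value on a polynomial basis of the state,
            # in the spirit of least squares Monte Carlo
            basis = np.vander(r[:, t + 1], 3)
            coef, *_ = np.linalg.lstsq(basis, disc[:, t] * mtm[:, t], rcond=None)
            ee = np.maximum(basis @ coef, 0.0).mean()     # expected exposure
            pd_slice = np.exp(-lam * t * dt) - np.exp(-lam * (t + 1) * dt)
            cva += (1.0 - recovery) * ee * pd_slice

        print(f"toy unilateral CVA: {cva:.6f}")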

  13. A GRAMMATICAL ADJUSTMENT ANALYSIS OF STATISTICAL MACHINE TRANSLATION METHOD USED BY GOOGLE TRANSLATE COMPARED TO HUMAN TRANSLATION IN TRANSLATING ENGLISH TEXT TO INDONESIAN

    Directory of Open Access Journals (Sweden)

    Eko Pujianto

    2017-04-01

    Full Text Available Google Translate is a program which provides a fast, free and effortless translating service. This service uses a unique method to translate, called "Statistical Machine Translation", the newest method in automatic translation. Machine translation (MT) is an area drawing on many different subjects of study and techniques, from linguistics, computer science, artificial intelligence (AI), and translation theory to statistics. SMT works by using statistical methods and mathematics to process the training data. The training data is corpus-based: a compilation of sentences and words in the source and target languages (SL and TL) from translations done by humans. By using this method, Google lets its machine discover the rules for itself, by analyzing millions of documents that have already been translated by human translators and then generating results based on the corpus/training data. However, questions arise when the results of the automatic translation prove to be unreliable to some extent. This paper questions the dependability of Google Translate in comparison with the grammatical adjustment that naturally characterizes human translators' specific advantage. The attempt is manifested through the analysis of the TL of some texts translated by the SMT. It is expected that by using the sample of TL produced by SMT we can learn the potential flaws of the translation. If such exist, the partial or more substantial undependability of SMT may open more windows on the debate over whether this service suffices users' needs.

  14. Adjusting the Parameters of Metal Oxide Gapless Surge Arresters’ Equivalent Circuits Using the Harmony Search Method

    Directory of Open Access Journals (Sweden)

    Christos A. Christodoulou

    2017-12-01

    Full Text Available The appropriate circuit modeling of metal oxide gapless surge arresters is critical for insulation coordination studies. Metal oxide arresters present a dynamic behavior for fast-front surges; namely, their residual voltage is dependent on the peak value, as well as the duration, of the injected impulse current, and they should therefore not be represented by non-linear elements only. The aim of the current work is to adjust the parameters of the most frequently used surge arrester circuit models by considering the magnitude of the residual voltage, as well as the dissipated energy, for given pulses. To this end, the harmony search method is implemented to adjust the parameter values of the arrester equivalent circuit models. It functions by minimizing a defined objective function that compares the simulation outcomes with the manufacturer's data and the results obtained from previous methodologies.
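    A generic harmony search minimiser looks like the following sketch. The memory size, pitch-adjustment rate and bandwidth values are illustrative defaults, and the objective function standing in for the residual-voltage/energy comparison is left abstract:

        import random

        def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                           bw=0.05, iters=5000):
            """Generic harmony search minimiser (parameter values illustrative).

            objective : function mapping a parameter vector to a scalar cost
            bounds    : list of (low, high) pairs, one per circuit parameter
            """
            memory = [[random.uniform(lo, hi) for lo, hi in bounds]
                      for _ in range(hms)]
            costs = [objective(h) for h in memory]
            for _ in range(iters):
                new = []
                for d, (lo, hi) in enumerate(bounds):
                    if random.random() < hmcr:            # memory consideration
                        value = random.choice(memory)[d]
                        if random.random() < par:         # pitch adjustment
                            value += random.uniform(-bw, bw) * (hi - lo)
                    else:                                 # random selection
                        value = random.uniform(lo, hi)
                    new.append(min(max(value, lo), hi))
                cost = objective(new)
                worst = max(range(hms), key=costs.__getitem__)
                if cost < costs[worst]:                   # replace worst harmony
                    memory[worst], costs[worst] = new, cost
            best = min(range(hms), key=costs.__getitem__)
            return memory[best], costs[best]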

  15. Does Fall History Influence Residential Adjustments?

    Science.gov (United States)

    Leland, Natalie; Porell, Frank; Murphy, Susan L.

    2011-01-01

    Purpose of the study: To determine whether reported falls at baseline are associated with an older adult's decision to make a residential adjustment (RA) and the type of adjustment made in the subsequent 2 years. Design and Methods: Observations (n = 25,036) were from the Health and Retirement Study, a nationally representative sample of…

  16. Comparison of sine dwell and broadband methods for modal testing

    Science.gov (United States)

    Chen, Jay-Chung

    1989-01-01

    The objectives of modal tests for large complex spacecraft structural systems are outlined. The comparison criteria for the modal test methods, namely, the broadband excitation and the sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantage or disadvantage of each method is examined. The usefulness or shortcomings of the methods are given from a practical engineering viewpoint.

  17. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    Science.gov (United States)

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues (Science, 2004, 306, 1776–1780) to assess affect, activities and time use in everyday life. We sought to validate DRM affect ratings by comparison with contemporaneous EMA ratings in a sample of 94 working women monitored over work and leisure days. Six EMA ratings of happiness, tiredness, stress, and anger/frustration were obtained over each 24 h period, and were compared with DRM ratings for the same hour, recorded retrospectively at the end of the day. Similar profiles of affect intensity were recorded with the two techniques. The between-person correlations adjusted for attenuation ranged from 0.58 (stress, working day) to 0.90 (happiness, leisure day). The strength of associations was not related to age, educational attainment, or depressed mood. We conclude that the DRM provides reasonably reliable estimates both of the intensity of affect and variations in affect over the day, so is a valuable instrument for the measurement of everyday experience in health and social research. PMID:21113328

  18. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the growing number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed statistically significant bias and inter-method agreement differences between CBCT and KVX in the Z-axis (both p-values < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-values < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed

  19. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d`autres methodes d`optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists of determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists of fitting component models. In this case, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which originates from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain (the threshold method, a genetic algorithm and the tabu search method). The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
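    A minimal continuous-variable simulated annealing loop, in the spirit of the adaptation described; the Gaussian step size, geometric cooling schedule and stopping rule are illustrative choices, not those of the report:

        import math
        import random

        def simulated_annealing(cost, bounds, t0=1.0, cooling=0.95,
                                steps_per_temp=50, t_min=1e-4):
            """cost   : objective built on circuit criteria or model-fit error
               bounds : hyper-rectangular domain, list of (low, high) pairs"""
            x = [random.uniform(lo, hi) for lo, hi in bounds]
            fx = cost(x)
            best, fbest = list(x), fx
            temp = t0
            while temp > t_min:
                for _ in range(steps_per_temp):
                    cand = [min(max(xi + random.gauss(0, 0.1 * (hi - lo)), lo), hi)
                            for xi, (lo, hi) in zip(x, bounds)]
                    fc = cost(cand)
                    # Metropolis rule: always accept improvements, sometimes
                    # accept uphill moves to escape local minima
                    if fc < fx or random.random() < math.exp((fx - fc) / temp):
                        x, fx = cand, fc
                        if fx < fbest:
                            best, fbest = list(x), fx
                temp *= cooling
            return best, fbest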

  20. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

    Pathway-based information has become an important source for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences used in a number of different applications, including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  1. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.

    2013-01-01

    Pathway-based information has become an important source for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences used in a number of different applications, including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  2. A SURVEY OF DEATH ADJUSTMENT IN THE INDIAN SUBCONTINENT.

    Science.gov (United States)

    Hossain, Mohammad Samir; Irfan, Muhammad; Balhara, Yatan Pal Singh; Giasuddin, Noor Ahmed; Sultana, Syeda Naheed

    2015-01-01

    The Death Adjustment Hypothesis (DAH) postulates two key themes. Its first part postulates that death should not be considered the end of existence, and its second part emphasizes that belief in an immortal pattern of human existence can only be adopted in a morally rich life with the attitudes towards morality and materialism mutually balanced. We wanted to explore death adjustment in the Indian subcontinent and the differences among Indians, Pakistanis and Bangladeshis. We also wanted to find the relationship between death adjustment (i.e., adaptation to death), materialistic thoughts and death adjustment thoughts. This was a cross-sectional study, conducted from May 2010 to June 2013. Using a purposive sampling strategy, a sample of 296 participants from the Indian subcontinent [Pakistan (n=100), Bangladesh (n=98) and India (n=98)] was selected. The Multidimensional Fear of Death Scale (MFODS) was used to measure death adjustment. The rest of the variables were measured using lists of the respective thoughts described in the elaborated DAH. Analyses were carried out using SPSS v13. The mean death adjustment scores for the Pakistani, Indian and Bangladeshi populations were 115.26 +/- 26.4, 125.87 +/- 24.3 and 114.91 +/- 21.2, respectively. Death adjustment was better with older age (r=0.20) and with lower scores on materialistic thoughts (r = -0.26); however, these were weak relationships. The three nationalities were compared with each other using analysis of variance. Death adjustment thoughts and death adjustment were significantly different when Indians were compared with Bangladeshis (p=0.00) and Pakistanis (p=0.006), but the comparison between Bangladeshis and Pakistanis showed no significant difference. Subjects with fewer materialistic thoughts showed better death adjustment. There are differences between Muslims and non-Muslims in adjusting to death.

  3. a comparison of methods in a behaviour study of the south african ...

    African Journals Online (AJOL)

    A comparison of methods in a behaviour study of the … Three methods are outlined in this paper and the results obtained from each method were … There was definitely no aggressive response towards the sky-pointing mate.

  4. Adjusted neutron spectra of STEK cores for reactivity calculations

    International Nuclear Information System (INIS)

    Dekker, J.W.M.; Dragt, J.B.; Janssen, A.J.; Heijboer, R.J.; Klippel, H.Th.

    1978-02-01

    Neutron flux and adjoint flux spectra are a prerequisite for the analysis of reactivity worth data measured in the STEK facility. First, a survey of all available information about these spectra is given. Next, a special application of a general adjustment method is described. This method has been used to obtain adjusted STEK group flux and adjoint flux spectra, starting from calculated spectra. These theoretical spectra were adjusted to reactivity worths of natural boron (nat. B) and 235U as well as a number of fission reaction rates. As a by-product of this adjustment calculation, adjusted fission group cross sections of 235U were obtained. The results, viz. group fluxes, adjoint fluxes and adjusted fission cross sections of 235U, are given. They have been used for the interpretation of fission product reactivity worth measurements made in STEK.
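    A general adjustment of this kind is often written as a generalized-least-squares update of the prior spectrum. The sketch below shows that generic update under assumed notation; it is not the specific STEK implementation:

        import numpy as np

        def gls_adjust(x0, cov_x, S, m, cov_m):
            """One-step generalized-least-squares spectrum adjustment.

            x0    : prior group fluxes (calculated spectrum)
            cov_x : prior covariance of the group fluxes
            S     : sensitivity matrix mapping fluxes to measured responses
                    (e.g. reaction rates or reactivity worths)
            m     : measured responses
            cov_m : covariance of the measurements
            """
            r0 = S @ x0                                  # responses from prior
            gain = cov_x @ S.T @ np.linalg.inv(S @ cov_x @ S.T + cov_m)
            x_adj = x0 + gain @ (m - r0)                 # adjusted spectrum
            cov_adj = cov_x - gain @ S @ cov_x           # reduced uncertainty
            return x_adj, cov_adj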

  5. Quantitative comparison of two particle tracking methods in fluorescence microscopy images

    CSIR Research Space (South Africa)

    Mabaso, M

    2013-09-01

    Full Text Available that cannot be analysed efficiently by means of manual analysis. In this study we compare the performance of two computer-based tracking methods for tracking of bright particles in fluorescence microscopy image sequences. The methods under comparison are...

  6. Interpersonal ambivalence, perceived relationship adjustment, and conjugal loss.

    Science.gov (United States)

    Bonanno, G A; Notarius, C I; Gunzerath, L; Keltner, D; Horowitz, M J

    1998-12-01

    Ambivalence is widely assumed to prolong grief. To examine this hypothesis, the authors developed a measure of ambivalence based on an algorithmic combination of separate positive and negative evaluations of one's spouse. Preliminary construct validity was evidenced in relation to emotional difficulties and to facial expressions of emotion. Bereaved participants, relative to a nonbereaved comparison sample, recollected their relationships as better adjusted but were more ambivalent. Ambivalence about spouses was generally associated with increased distress and poorer perceived health but did not predict long-term grief outcome once initial outcome was controlled. In contrast, initial grief and distress predicted increased ambivalence and decreased Dyadic Adjustment Scale scores at 14 months postloss, regardless of initial scores on these measures. Limitations and implications of the findings are discussed.

  7. WTA estimates using the method of paired comparison: tests of robustness

    Science.gov (United States)

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  8. A comparison of non-invasive versus invasive methods of ...

    African Journals Online (AJOL)

    Puneet Khanna

    for Hb estimation from the laboratory [total haemoglobin mass (tHb)] and arterial blood gas (ABG) machine (aHb), using … making decisions for blood transfusions based on these results.

  9. Structural pattern recognition methods based on string comparison for fusion databases

    International Nuclear Information System (INIS)

    Dormido-Canto, S.; Farias, G.; Dormido, R.; Vega, J.; Sanchez, J.; Duro, N.; Vargas, H.; Ratta, G.; Pereira, A.; Portas, A.

    2008-01-01

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed
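    The string-comparison core of such an approach can be sketched with an edit distance over primitive codes. The encoding below is a toy stand-in: the segment length, alphabet size and slope quantisation are assumptions (and presume a roughly normalised signal), not the article's actual primitives:

        import numpy as np

        def encode_constant_length(signal, seg_len=16, alphabet="abcdefg"):
            """Method (1)-style constant-length primitives: each segment is
            coded by quantising its mean slope (assumed in [-1, 1]) into one
            of len(alphabet) symbols."""
            symbols = []
            for start in range(0, len(signal) - seg_len, seg_len):
                seg = signal[start:start + seg_len]
                slope = (seg[-1] - seg[0]) / seg_len
                level = int(np.clip((slope + 1) / 2 * len(alphabet),
                                    0, len(alphabet) - 1))
                symbols.append(alphabet[level])
            return "".join(symbols)

        def edit_distance(s, t):
            """Levenshtein distance: the string-comparison step itself."""
            prev = list(range(len(t) + 1))
            for i, cs in enumerate(s, 1):
                cur = [i]
                for j, ct in enumerate(t, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (cs != ct)))
                prev = cur
            return prev[-1]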

  10. Structural pattern recognition methods based on string comparison for fusion databases

    Energy Technology Data Exchange (ETDEWEB)

    Dormido-Canto, S. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain)], E-mail: sebas@dia.uned.es; Farias, G.; Dormido, R. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain); Sanchez, J.; Duro, N.; Vargas, H. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Ratta, G.; Pereira, A.; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain)

    2008-04-15

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed.

  11. Neutron spectrum adjustment. The role of covariances

    International Nuclear Information System (INIS)

    Remec, I.

    1992-01-01

    The neutron spectrum adjustment method is briefly reviewed. A practical example dealing with the determination of power reactor pressure vessel exposure rates is analysed. The adjusted exposure rates were found to be only slightly affected by the covariances of the measured reaction rates and activation cross sections, while the multigroup spectra covariances were found to be important. Approximate spectra covariance matrices, as suggested in ASTM E944-89, were found useful, but care is advised if they are applied in adjustments of spectra at locations without dosimetry. (author)

  12. Comparison of vibrational conductivity and radiative energy transfer methods

    Science.gov (United States)

    Le Bot, A.

    2005-05-01

    This paper is concerned with the comparison of two methods well suited for the prediction of the wideband response of built-up structures subjected to high-frequency vibrational excitation. The first method is sometimes called the vibrational conductivity method, and the second is better known as the radiosity method in the field of acoustics, or the radiative energy transfer method. Both are based on quite similar physical assumptions, i.e. uncorrelated sources, mean response and high-frequency excitation, and both are based on analogies with equations encountered in the field of heat transfer. However, these models do not lead to similar results. This paper compares the two methods. Some numerical simulations on a pair of plates joined along one edge are provided to illustrate the discussion.

  13. Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.

    Science.gov (United States)

    Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan

    2015-03-01

    A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances, since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble-of-trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have recently been shown to have excellent performance for prediction of outcomes in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICUs). We show that these ensemble-of-trees methods outperform logistic regression in predicting mortality among babies treated in NICUs, and provide a superior method of risk adjustment compared to logistic regression.
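    The observed-to-expected comparison with a tree-ensemble risk model can be sketched as follows. The dataset and column names are hypothetical, random forests stand in for the paper's methods generally, and in practice out-of-sample (e.g. cross-fitted or out-of-bag) predictions would be preferable to the in-sample ones used here:

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        # df: one row per patient; all column names are hypothetical
        df = pd.read_csv("nicu_patients.csv")
        X, y = df[["gest_age", "birth_weight", "apgar5"]], df["died"]

        # fit the risk model on all units pooled together
        rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=20,
                                    random_state=0).fit(X, y)
        df["p_expected"] = rf.predict_proba(X)[:, 1]

        # hospital-level observed vs. risk-adjusted expected mortality
        profile = df.groupby("hospital").agg(observed=("died", "mean"),
                                             expected=("p_expected", "mean"))
        profile["o_to_e"] = profile["observed"] / profile["expected"]
        print(profile.sort_values("o_to_e"))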

  14. Comparison of three flaw-location methods for automated ultrasonic testing

    International Nuclear Information System (INIS)

    Seiger, H.

    1982-01-01

    Two well-known methods for locating flaws by measurement of the transit time of ultrasonic pulses are examined theoretically. It is shown that neither is sufficiently reliable for use in automated ultrasonic testing. A third method, which takes into account the shape of the sound field from the probe and the uncertainty in measurement of probe-flaw distance and probe position, is introduced. An experimental comparison of the three methods indicates that use of the advanced method results in more accurate location of flaws. (author)

  15. Comparison of the direct enzyme assay method with the membrane ...

    African Journals Online (AJOL)

    Comparison of the direct enzyme assay method with the membrane filtration technique in the quantification and monitoring of microbial indicator organisms – seasonal variations in the activities of coliforms and E. coli, temperature and pH.

  16. Comparison of sampling methods for the assessment of indoor microbial exposure

    DEFF Research Database (Denmark)

    Frankel, M; Timm, Michael; Hansen, E W

    2012-01-01

    revealed. This study thus facilitates comparison between methods and may therefore be used as a frame of reference when studying the literature or when conducting further studies on indoor microbial exposure. Results also imply that the relatively simple EDC method for the collection of settled dust may...

  17. Comparison of radioimmunology and serology methods for LH determination

    International Nuclear Information System (INIS)

    Szymanski, W.; Jakowicki, J.

    1976-01-01

    A comparison is presented of LH determinations by immunoassay and radioimmunoassay using the 125I-labelled HCG double antibody system. The results obtained by the two methods are well comparable, and at normal and elevated LH levels the serological determinations may be used to estimate the concentration levels found by RIA. (L.O.)

  18. A comparison of two search methods for determining the scope of systematic reviews and health technology assessments.

    Science.gov (United States)

    Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan

    2012-01-01

    This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA). Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent CI, 6 percent to 72 percent]), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]) and more unique hits than when searching by means of the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent]). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks, such as being more time-consuming and less user-friendly, it generated more unique hits than the NHS Search Engine, retrieved on average fewer references and fewer irrelevant results.

  19. A comparison of internal versus external risk-adjustment for monitoring clinical outcomes

    NARCIS (Netherlands)

    Koetsier, Antonie; de Keizer, Nicolette; Peek, Niels

    2011-01-01

    Internal and external prognostic models can be used to calculate severity of illness adjusted mortality risks. However, it is unclear what the consequences are of using an external model instead of an internal model when monitoring an institution's clinical performance. Theoretically, using an

  20. Comparison of Video Steganography Methods for Watermark Embedding

    Directory of Open Access Journals (Sweden)

    Griberman David

    2016-05-01

    Full Text Available The paper focuses on the comparison of video steganography methods for the purpose of digital watermarking in the context of copyright protection. Four embedding methods that use Discrete Cosine and Discrete Wavelet Transforms have been researched and compared based on their embedding efficiency and fidelity. A video steganography program has been developed in the Java programming language, with all of the researched methods implemented for experiments. The experiments used 3 video containers with different amounts of movement. The impact of movement is addressed in the paper, as well as ways of potentially improving embedding efficiency using adaptive embedding based on the amount of movement. The results of the research have been verified using a survey with 17 participants.
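    A typical DCT-domain embedding step, of the general kind such methods use, quantises a mid-frequency coefficient to carry one bit. The coefficient position and quantisation strength below are illustrative, and this Python sketch is not the article's Java implementation:

        import numpy as np
        from scipy.fftpack import dct, idct

        def embed_bit(block, bit, coeff=(4, 3), strength=8.0):
            """Embed one watermark bit into a mid-frequency DCT coefficient
            of an 8x8 luminance block by forcing the parity of the quantised
            coefficient to match the bit."""
            d = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
            q = np.round(d[coeff] / strength)
            if int(q) % 2 != bit:       # force parity to encode the bit
                q += 1
            d[coeff] = q * strength
            return idct(idct(d, axis=0, norm="ortho"), axis=1, norm="ortho")

        def extract_bit(block, coeff=(4, 3), strength=8.0):
            d = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
            return int(np.round(d[coeff] / strength)) % 2

        # round trip on a random 8x8 block
        blk = np.random.rand(8, 8) * 255
        print(extract_bit(embed_bit(blk, 1)))  # -> 1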

  1. Estimating Total Claim Size in the Auto Insurance Industry: a Comparison between Tweedie and Zero-Adjusted Inverse Gaussian Distribution

    Directory of Open Access Journals (Sweden)

    Adriana Bruscato Bortoluzzo

    2011-01-01

    Full Text Available The objective of this article is to estimate insurance claims from an auto dataset using the Tweedie and zero-adjusted inverse Gaussian (ZAIG) methods. We identify factors that influence claim size and probability, and compare the results of these methods, which both forecast outcomes accurately. Vehicle characteristics like territory, age, origin and type distinctly influence claim size and probability. This distinct impact is not always present in the estimated Tweedie model. Auto insurers should consider estimating total claim size using both the Tweedie and ZAIG methods. This allows for the estimation of confidence intervals based on empirical quantiles using bootstrap simulation. Furthermore, the fitted models may be useful in developing a strategy to obtain premium pricing.
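    A Tweedie fit of total claim size can be sketched with a GLM whose variance power lies between 1 and 2, which covers a point mass at zero plus continuous positive claim amounts in a single model. The file, column names and variance power below are assumptions, not the article's data:

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # df: one row per policy, with total claim amount and rating factors
        # (all column names here are hypothetical)
        df = pd.read_csv("auto_claims.csv")

        model = smf.glm("claim_amount ~ territory + vehicle_age + vehicle_type",
                        data=df,
                        family=sm.families.Tweedie(var_power=1.5,
                                                   link=sm.families.links.Log()))
        print(model.fit().summary())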

  2. Future Orientation, Social Support, and Psychological Adjustment among Left-behind Children in Rural China: A Longitudinal Study.

    Science.gov (United States)

    Su, Shaobing; Li, Xiaoming; Lin, Danhua; Zhu, Maoling

    2017-01-01

    Existing research has found that parental migration may negatively impact the psychological adjustment of left-behind children. However, limited longitudinal research has examined if and how future orientation (individual protective factor) and social support (contextual protective factor) are associated with the indicators of psychological adjustment (i.e., life satisfaction, school satisfaction, happiness, and loneliness) of left-behind children. In the current longitudinal study, we examined the differences in psychological adjustment between left-behind children and non-left behind children (comparison children) in rural areas, and explored the protective roles of future orientation and social support on the immediate (cross-sectional effects) and subsequent (lagged effects) status of psychological adjustment for both groups of children, respectively. The sample included 897 rural children ( M age = 14.09, SD = 1.40) who participated in two waves of surveys across six months. Among the participants, 227 were left-behind children with two parents migrating, 176 were with one parent migrating, and 485 were comparison children. Results showed that, (1) left-behind children reported lower levels of life satisfaction, school satisfaction, and happiness, as well as a higher level of loneliness in both waves; (2) After controlling for several demographics and characteristics of parental migration among left-behind children, future orientation significantly predicted life satisfaction, school satisfaction, and happiness in both cross-sectional and longitudinal regression models, as well as loneliness in the longitudinal regression analysis. Social support predicted immediate life satisfaction, school satisfaction, and happiness, as well as subsequent school satisfaction. Similar to left-behind children, comparison children who reported higher scores in future orientation, especially future expectation, were likely to have higher scores in most indicators of

  3. Future Orientation, Social Support, and Psychological Adjustment among Left-behind Children in Rural China: A Longitudinal Study

    Directory of Open Access Journals (Sweden)

    Shaobing Su

    2017-08-01

    Full Text Available Existing research has found that parental migration may negatively impact the psychological adjustment of left-behind children. However, limited longitudinal research has examined if and how future orientation (individual protective factor) and social support (contextual protective factor) are associated with the indicators of psychological adjustment (i.e., life satisfaction, school satisfaction, happiness, and loneliness) of left-behind children. In the current longitudinal study, we examined the differences in psychological adjustment between left-behind children and non-left-behind children (comparison children) in rural areas, and explored the protective roles of future orientation and social support on the immediate (cross-sectional effects) and subsequent (lagged effects) status of psychological adjustment for both groups of children, respectively. The sample included 897 rural children (Mage = 14.09, SD = 1.40) who participated in two waves of surveys across six months. Among the participants, 227 were left-behind children with two parents migrating, 176 were with one parent migrating, and 485 were comparison children. Results showed that (1) left-behind children reported lower levels of life satisfaction, school satisfaction, and happiness, as well as a higher level of loneliness in both waves; (2) after controlling for several demographics and characteristics of parental migration among left-behind children, future orientation significantly predicted life satisfaction, school satisfaction, and happiness in both cross-sectional and longitudinal regression models, as well as loneliness in the longitudinal regression analysis. Social support predicted immediate life satisfaction, school satisfaction, and happiness, as well as subsequent school satisfaction. Similar to left-behind children, comparison children who reported higher scores in future orientation, especially future expectation, were likely to have higher scores in most indicators of

  4. Ares I-X Launch Abort System, Crew Module, and Upper Stage Simulator Vibroacoustic Flight Data Evaluation, Comparison to Predictions, and Recommendations for Adjustments to Prediction Methodology and Assumptions

    Science.gov (United States)

    Smith, Andrew; Harrison, Phil

    2010-01-01

    The National Aeronautics and Space Administration (NASA) Constellation Program (CxP) has identified a series of tests to provide insight into the design and development of the Crew Launch Vehicle (CLV) and Crew Exploration Vehicle (CEV). Ares I-X was selected as the first suborbital development flight test to help meet CxP objectives. The Ares I-X flight test vehicle (FTV) is an early operational model of CLV, with specific emphasis on CLV and ground operation characteristics necessary to meet Ares I-X flight test objectives. The in-flight part of the test includes a trajectory to simulate maximum dynamic pressure during flight and perform a stage separation of the Upper Stage Simulator (USS) from the First Stage (FS). The in-flight test also includes recovery of the FS. The random vibration response from the Ares I-X flight will be reconstructed for a few specific locations that were instrumented with accelerometers. This recorded data will be helpful in validating and refining vibration prediction tools and methodology. Measured vibroacoustic environments associated with lift off and ascent phases of the Ares I-X mission will be compared with pre-flight vibration predictions. The measured flight data were recorded as time histories, which will be converted into power spectral density plots for comparison with the maximum predicted environments. The maximum predicted environments are documented in the Vibroacoustics and Shock Environment Data Book, AI1-SYS-ACOv4.10. Vibration predictions made using the statistical energy analysis (SEA) program VAOne will also be incorporated in the comparisons. Ascent and lift off measured acoustics will also be compared to predictions to assess whether any discrepancies between the predicted vibration levels and measured vibration levels are attributable to inaccurate acoustic predictions. These comparisons will also be helpful in assessing whether adjustments to prediction methodologies are needed to improve agreement between the

  5. A comparison of three time-domain anomaly detection methods

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, being of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).

  6. A comparison of three time-domain anomaly detection methods

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, being of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)
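
    The sequential probability ratio test described above accumulates log-likelihood ratios of the residuals under the nominal and the anomalous noise levels and raises an alarm when the sum crosses a decision threshold. The record gives no implementation details, so the following is a minimal illustrative sketch (in Python, with hypothetical names), assuming zero-mean Gaussian residuals with standard deviation sigma0 under normal operation and sigma1 under the anomaly; restarting the statistic at zero for continuous monitoring is one common convention, not necessarily the authors'.

        import numpy as np

        def sprt_std_change(residuals, sigma0, sigma1, alpha=0.001, beta=0.01):
            """SPRT for a shift in the standard deviation of zero-mean Gaussian
            residual noise (sigma0 -> sigma1); returns the first alarm index or None."""
            upper = np.log((1 - beta) / alpha)   # accept H1: anomaly present
            lower = np.log(beta / (1 - alpha))   # accept H0: restart the test
            s = 0.0
            for t, x in enumerate(residuals):
                # log-likelihood ratio increment for N(0, sigma1^2) vs N(0, sigma0^2)
                s += np.log(sigma0 / sigma1) + 0.5 * x**2 * (1 / sigma0**2 - 1 / sigma1**2)
                if s >= upper:
                    return t
                if s <= lower:
                    s = 0.0  # restart for continuous monitoring
            return None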

  7. Comparison of methods to identify crop productivity constraints in developing countries. A review

    NARCIS (Netherlands)

    Kraaijvanger, R.G.M.; Sonneveld, M.P.W.; Almekinders, C.J.M.; Veldkamp, T.

    2015-01-01

    Selecting a method for identifying actual crop productivity constraints is an important step for triggering innovation processes. Applied methods can be diverse and although such methods have consequences for the design of intervention strategies, documented comparisons between various methods are

  8. Adjustment of nursing home quality indicators

    Directory of Open Access Journals (Sweden)

    Hirdes John P

    2010-04-01

    Full Text Available Abstract Background This manuscript describes a method for adjustment of nursing home quality indicators (QIs) defined using the Centers for Medicare & Medicaid Services (CMS) nursing home resident assessment system, the Minimum Data Set (MDS). QIs are intended to characterize quality of care delivered in a facility. Threats to the validity of the measurement of presumed quality of care include baseline resident health and functional status, pattern of comorbidities, and facility case mix. The goal of obtaining a valid facility-level estimate of true quality of care should include adjustment for resident- and facility-level sources of variability. Methods We present a practical and efficient method to achieve risk adjustment using restriction and indirect and direct standardization. We present information on validity by comparing QIs estimated with the new algorithm to one currently used by CMS. Results More than half of the new QIs achieved a "Moderate" validation level. Conclusions Given the comprehensive approach and the positive findings to date, research using the new quality indicators is warranted to provide further evidence of their validity and utility and to encourage their use in quality improvement activities.

  9. Using Linear and Non-Linear Temporal Adjustments to Align Multiple Phenology Curves, Making Vegetation Status and Health Directly Comparable

    Science.gov (United States)

    Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.

    2017-12-01

    National-scale polar analysis of MODIS NDVI allows quantification of the degree of seasonality expressed by local vegetation, and also selects the optimal start/end of a local "phenological year" that is empirically customized for the vegetation growing at each location. Interannual differences in timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in timing of the remaining milestones. Going beyond a simple linear translation, time can be "rubber-sheeted," compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be "rubber-sheeted" to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to "rubber sheeting" to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows, at every time and every location, how many days the adjusted phenology is ahead of or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help to quantify vegetation impacts from frost, drought, wildfire, insects and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
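
    As an illustration of the two alignment operations described in this record, the sketch below implements a linear time shift and a piecewise-linear "rubber sheeting" with numpy. The function and variable names are hypothetical, milestone days are assumed to be given as increasing day-of-year values, and the warping signature is returned as the per-day offset between adjusted and reference time.

        import numpy as np

        def shift_phenology(days, ndvi, lag_days):
            """Procrustean linear time shift: slide the whole curve later by lag_days."""
            return np.interp(days, days + lag_days, ndvi)

        def rubber_sheet(days, ndvi, src_milestones, ref_milestones):
            """Piecewise-linear time warp: stretch or compress time so the source
            curve's milestones line up with those of the reference curve."""
            # for each output day, the day at which to sample the source curve
            warped_time = np.interp(days, ref_milestones, src_milestones)
            warping_signature = warped_time - days  # days ahead (+) or behind (-)
            return np.interp(warped_time, days, ndvi), warping_signature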

  10. Shaft adjuster

    Science.gov (United States)

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

  11. cp-R, an interface to the R programming language for clinical laboratory method comparisons.

    Science.gov (United States)

    Holmes, Daniel T

    2015-02-01

    Clinical scientists frequently need to compare two different bioanalytical methods as part of assay validation/monitoring. As a matter of necessity, regression methods for quantitative comparison in clinical chemistry, hematology and other clinical laboratory disciplines must allow for error in both the x and y variables. Traditionally the methods popularized by 1) Deming and 2) Passing and Bablok have been recommended. While commercial tools exist, no simple open source tool is available. The purpose of this work was to develop an entirely open-source GUI-driven program for bioanalytical method comparisons capable of performing these regression methods and able to produce highly customized graphical output. The GUI is written in python and PyQt4 with R scripts performing regression and graphical functions. The program can be run from source code or as a pre-compiled binary executable. The software performs three forms of regression and offers weighting where applicable. Confidence bands of the regression are calculated using bootstrapping for the Deming and Passing-Bablok methods. Users can customize regression plots according to the tools available in R and can produce output in any of: jpg, png, tiff, bmp at any desired resolution or ps and pdf vector formats. Bland-Altman plots and some regression diagnostic plots are also generated. Correctness of regression parameter estimates was confirmed against existing R packages. The program allows for rapid and highly customizable graphical output capable of conforming to the publication requirements of any clinical chemistry journal. Quick method comparisons can also be performed and the results cut and pasted into spreadsheet or word processing applications. We present a simple and intuitive open source tool for quantitative method comparison in a clinical laboratory environment. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
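
    Deming regression, one of the errors-in-both-variables methods the program performs, has a closed-form solution. The following is a minimal sketch of that computation (not cp-R's actual code, which is written in R and python); lam is the assumed ratio of the y-error to x-error variances, and the confidence bands the abstract mentions would be obtained by bootstrapping around this estimate.

        import numpy as np

        def deming(x, y, lam=1.0):
            """Closed-form Deming regression; lam = var(y errors) / var(x errors),
            so lam = 1 corresponds to orthogonal regression."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                     + 4 * lam * sxy ** 2)) / (2 * sxy)
            return slope, y.mean() - slope * x.mean()  # (slope, intercept)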

  12. A comparison of methods for evaluating structure during ship collisions

    International Nuclear Information System (INIS)

    Ammerman, D.J.; Daidola, J.C.

    1996-01-01

    A comparison is provided of the results of various methods for evaluating structure during a ship-to-ship collision. The baseline vessel utilized in the analyses is a 67.4 meter in length displacement hull struck by an identical vessel traveling at speeds ranging from 10 to 30 knots. The structural response of the struck vessel and motion of both the struck and striking vessels are assessed by finite element analysis. These same results are then compared to predictions utilizing the "Tanker Structural Analysis for Minor Collisions" (TSAMC) Method, the Minorsky Method, the Haywood Collision Process, and comparison to full-scale tests. Consideration is given to the nature of structural deformation, absorbed energy, penetration, rigid body motion, and virtual mass affecting the hydrodynamic response. Insights are provided with regard to the calibration of the finite element model which was achievable through utilizing the more empirical analyses and the extent to which the finite element analysis is able to simulate the entire collision event. 7 refs., 8 figs., 4 tabs

  13. Case-mix adjustment approach to benchmarking prevalence rates of nosocomial infection in hospitals in Cyprus and Greece.

    Science.gov (United States)

    Kritsotakis, Evangelos I; Dimitriadis, Ioannis; Roumbelaki, Maria; Vounou, Emelia; Kontou, Maria; Papakyriakou, Panikos; Koliou-Mazeri, Maria; Varthalitis, Ioannis; Vrouchos, George; Troulakis, George; Gikas, Achilleas

    2008-08-01

    To examine the effect of heterogeneous case mix for a benchmarking analysis and interhospital comparison of the prevalence rates of nosocomial infection. Cross-sectional survey. Eleven hospitals located in Cyprus and in the region of Crete in Greece. The survey included all inpatients in the medical, surgical, pediatric, and gynecology-obstetrics wards, as well as those in intensive care units. Centers for Disease Control and Prevention criteria were used to define nosocomial infection. The information collected for all patients included demographic characteristics, primary admission diagnosis, Karnofsky functional status index, Charlson comorbidity index, McCabe-Jackson severity of illness classification, use of antibiotics, and prior exposures to medical and surgical risk factors. Outcome data were also recorded for all patients. Case mix-adjusted rates were calculated by using a multivariate logistic regression model for nosocomial infection risk and an indirect standardization method. Results. The overall prevalence rate of nosocomial infection was 7.0% (95% confidence interval, 5.9%-8.3%) among 1,832 screened patients. Significant variation in nosocomial infection rates was observed across hospitals (range, 2.2%-9.6%). Logistic regression analysis indicated that the mean predicted risk of nosocomial infection across hospitals ranged from 3.7% to 10.3%, suggesting considerable variation in patient risk. Case mix-adjusted rates ranged from 2.6% to 12.4%, and the relative ranking of hospitals was affected by case-mix adjustment in 8 cases (72.8%). Nosocomial infection was significantly and independently associated with mortality (adjusted odds ratio, 3.6 [95% confidence interval, 2.1-6.1]). The first attempt to rank the risk of nosocomial infection in these regions demonstrated the importance of accounting for heterogeneous case mix before attempting interhospital comparisons.
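
    The case-mix adjustment described here, a patient-level logistic risk model combined with indirect standardization, can be sketched as follows. This is an illustration rather than the authors' code: X, infected and hospital_id are hypothetical numpy arrays, scikit-learn's logistic regression stands in for the paper's multivariate model, and each hospital's adjusted rate is its standardized infection ratio (observed over expected) multiplied by the overall prevalence.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def case_mix_adjusted_rates(X, infected, hospital_id):
            """Indirectly standardized nosocomial infection rates per hospital."""
            risk_model = LogisticRegression(max_iter=1000).fit(X, infected)
            p = risk_model.predict_proba(X)[:, 1]  # predicted risk per patient
            overall = infected.mean()
            rates = {}
            for h in np.unique(hospital_id):
                mask = hospital_id == h
                observed, expected = infected[mask].sum(), p[mask].sum()
                rates[h] = (observed / expected) * overall  # SIR x overall rate
            return rates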

  14. Comparison of the analysis result between two laboratories using different methods

    International Nuclear Information System (INIS)

    Sri Murniasih; Agus Taftazani

    2017-01-01

    Comparison of the analysis results for a volcanic ash sample between two laboratories using different analysis methods. The research aims to improve testing laboratory quality and to foster cooperation with testing laboratories from other countries. Samples were tested at the Center for Accelerator Science and Technology (CAST)-NAA laboratory using NAA, and at the University of Texas (UT), USA, using the ICP-MS and ENAA methods. Of the 12 target elements, the CAST-NAA laboratory was able to report analysis data for 11 elements. The comparison shows that the results for the K, Mn, Ti and Fe elements from the two laboratories agree very well with each other, as indicated by the RSD values and correlation coefficients of the two laboratories' results. Examination of the differences shows that the results for the Al, Na, K, Fe, V, Mn, Ti, Cr and As elements do not differ significantly between the laboratories. Of the 11 elements reported, only Zn showed significantly different values between the two laboratories. (author)

  15. A system and method for adjusting and presenting stereoscopic content

    DEFF Research Database (Denmark)

    2013-01-01

    on the basis of one or more vision-specific parameters (θM, θmax, θmin, Δθ) indicating abnormal vision for the user. In this way, presenting stereoscopic content is enabled that is adjusted specifically to the given person. This may e.g. be used for training purposes or for improved...

  16. Comparison of methods for the analysis of relatively simple mediation models.

    Science.gov (United States)

    Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W

    2017-09-01

    Statistical mediation analysis is an often used method in trials, to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least square (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. Secondary data analysis of a randomized controlled trial that included 546 schoolchildren. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct and indirect effects, proportion mediated, and 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.
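
    For a continuous mediator and outcome, the OLS product-of-coefficients estimator referred to above reduces to two regressions. The sketch below (illustrative only, with hypothetical variable names) computes the direct effect c', the indirect effect a*b, their sum as the total effect, and a percentile bootstrap confidence interval for the indirect effect, one common choice among the CI methods compared in such analyses.

        import numpy as np

        def ols_coef(X, y):
            """Least-squares coefficients with an intercept column prepended."""
            X1 = np.column_stack([np.ones(len(y)), X])
            return np.linalg.lstsq(X1, y, rcond=None)[0]

        def mediation(x, m, y, n_boot=2000, seed=0):
            """Crude mediation model: a = x -> m, b = m -> y given x,
            indirect = a*b, direct = c', total = direct + indirect."""
            a = ols_coef(x[:, None], m)[1]
            coefs = ols_coef(np.column_stack([x, m]), y)
            direct, b = coefs[1], coefs[2]
            rng = np.random.default_rng(seed)
            boots = []
            for _ in range(n_boot):  # percentile bootstrap for the indirect effect
                i = rng.integers(0, len(x), len(x))
                xi, mi, yi = x[i], m[i], y[i]
                boots.append(ols_coef(xi[:, None], mi)[1]
                             * ols_coef(np.column_stack([xi, mi]), yi)[2])
            lo, hi = np.percentile(boots, [2.5, 97.5])
            return {"indirect": a * b, "direct": direct,
                    "total": direct + a * b, "ci_indirect": (lo, hi)}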

  17. Are comparisons of patient experiences across hospitals fair? A study in Veterans Health Administration hospitals.

    Science.gov (United States)

    Cleary, Paul D; Meterko, Mark; Wright, Steven M; Zaslavsky, Alan M

    2014-07-01

    Surveys are increasingly used to assess patient experiences with health care. Comparisons of hospital scores based on patient experience surveys should be adjusted for patient characteristics that might affect survey results. Such characteristics are commonly drawn from patient surveys that collect little, if any, clinical information. Consequently some hospitals, especially those treating particularly complex patients, have been concerned that standard adjustment methods do not adequately reflect the challenges of treating their patients. To compare scores for different types of hospitals after making adjustments using only survey-reported patient characteristics and using more complete clinical and hospital information. We used clinical and survey data from a national sample of 1858 veterans hospitalized for an initial acute myocardial infarction (AMI) in a Department of Veterans Affairs (VA) medical center during fiscal years 2003 and 2004. We used VA administrative data to characterize hospitals. The survey asked patients about their experiences with hospital care. The clinical data included 14 measures abstracted from medical records that are predictive of survival after an AMI. Comparisons of scores across hospitals adjusted only for patient-reported health status and sociodemographic characteristics were similar to those that also adjusted for patient clinical characteristics; the Spearman rank-order correlations between the 2 sets of adjusted scores were >0.97 across 9 dimensions of inpatient experience. This study did not support concerns that measures of patient care experiences are unfair because commonly used models do not adjust adequately for potentially confounding patient clinical characteristics.

  18. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Science.gov (United States)

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
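
    One standard way to scale paired-comparison data under Thurstone's Case V (Law of Comparative Judgment) is to convert choice proportions to probits and average them. The sketch below illustrates that textbook computation under the equal-variance assumption; it is not the paper's own program.

        import numpy as np
        from scipy.stats import norm

        def thurstone_case_v(wins):
            """Case V scale values from a count matrix where wins[i, j] is the
            number of times item i was chosen over item j."""
            n = wins + wins.T                   # comparisons per pair
            p = np.where(n > 0, wins / np.maximum(n, 1), 0.5)
            p = np.clip(p, 0.01, 0.99)          # keep probits finite
            z = norm.ppf(p)                     # probit of choice proportions
            scale = z.mean(axis=1)              # row means give the scale values
            return scale - scale.min()          # anchor the least preferred at 0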

  19. Method for optimum determination of adjustable parameters in the boiling water reactor core simulator using operating data on flux distribution

    International Nuclear Information System (INIS)

    Kiguchi, T.; Kawai, T.

    1975-01-01

    A method has been developed to optimally and automatically determine the adjustable parameters of the boiling water reactor three-dimensional core simulator FLARE. The steepest gradient method is adopted for the optimization. The parameters are adjusted to best fit the operating data on power distribution measured by traversing in-core probes (TIP). The average error in the calculated TIP readings normalized by the core average is 0.053 at the rated power. The k-infinity correction term has also been derived theoretically to reduce the relatively large error in the calculated TIP readings near the tips of control rods, which is induced by the coarseness of mesh points. By introducing this correction, the average error decreases to 0.047. The void-quality relation is recognized as a function of coolant flow rate. The relation is estimated to fit the measured distributions of TIP reading at the partial power states
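
    The optimization loop described, steepest-gradient adjustment of simulator parameters against measured TIP distributions, can be sketched generically as below. Here simulate is a hypothetical stand-in for a FLARE run returning calculated TIP readings, the forward-difference gradient and fixed step size are illustrative choices, and the error is normalized by the core-average reading as in the record.

        import numpy as np

        def fit_parameters(simulate, params, measured_tip, lr=0.05, steps=200):
            """Steepest-descent fit of adjustable simulator parameters."""
            p = np.asarray(params, float)

            def loss(q):
                r = simulate(q) - measured_tip
                return np.mean((r / measured_tip.mean()) ** 2)

            for _ in range(steps):
                grad = np.zeros_like(p)
                for k in range(len(p)):  # forward-difference gradient estimate
                    dp = np.zeros_like(p)
                    dp[k] = 1e-4 * max(abs(p[k]), 1.0)
                    grad[k] = (loss(p + dp) - loss(p)) / dp[k]
                p -= lr * grad           # step along the steepest-descent direction
            return p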

  20. Comparison of non-parametric methods for ungrouping coarsely aggregated data

    DEFF Research Database (Denmark)

    Rizzi, Silvia; Thinggaard, Mikael; Engholm, Gerda

    2016-01-01

    group at the highest ages. When histogram intervals are too coarse, information is lost and comparison between histograms with different boundaries is arduous. In these cases it is useful to estimate detailed distributions from grouped data. Methods: From an extensive literature search we identify five

  1. Kinematic adjustments to seismic recordings

    Energy Technology Data Exchange (ETDEWEB)

    Telegin, A.N.; Levii, N.V.; Volovik, U.M.

    1981-01-01

    The introduction of kinematic adjustments by addition of displaced blocks is studied theoretically and on test seismograms. The advantage of this method, which results from the weight variation along the trace, is demonstrated, together with its kinematic drawback. A variant of the displaced-block addition method that does not require realignment of the travel-time curves and that has improved amplitude characteristics is proposed.

  2. Method, system and apparatus for monitoring and adjusting the quality of indoor air

    Science.gov (United States)

    Hartenstein, Steven D.; Tremblay, Paul L.; Fryer, Michael O.; Hohorst, Frederick A.

    2004-03-23

    A system, method and apparatus is provided for monitoring and adjusting the quality of indoor air. A sensor array senses an air sample from the indoor air and analyzes the air sample to obtain signatures representative of contaminants in the air sample. When the level or type of contaminant poses a threat or hazard to the occupants, the present invention takes corrective actions which may include introducing additional fresh air. The corrective actions taken are intended to promote overall health of personnel, prevent personnel from being overexposed to hazardous contaminants and minimize the cost of operating the HVAC system. The identification of the contaminants is performed by comparing the signatures provided by the sensor array with a database of known signatures. Upon identification, the system takes corrective actions based on the level of contaminant present. The present invention is capable of learning the identity of previously unknown contaminants, which increases its ability to identify contaminants in the future. Indoor air quality is assured by monitoring the contaminants not only in the indoor air, but also in the outdoor air and the air which is to be recirculated. The present invention is easily adaptable to new and existing HVAC systems. In sum, the present invention is able to monitor and adjust the quality of indoor air in real time by sensing the level and type of contaminants present in indoor air, outdoor and recirculated air, providing an intelligent decision about the quality of the air, and minimizing the cost of operating an HVAC system.

  3. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  4. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Full Text Available Experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the requirements for stability of the setup. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions, which are recorded as the recording matrix is moved along the optical axis. The obtained data are used consistently for wavefront reconstruction using an iterative procedure. In the course of this procedure, numerical propagation of the wavefront between the planes is performed. Thus, phase information of the wavefront is retained in every plane, and the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is shown that the holographic method is the best among the considered ones for reconstructing the complex amplitude of an amplitude object.
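
    The angular spectrum method used for numerical wavefront propagation in the second iterative approach multiplies the field's spatial-frequency spectrum by the free-space transfer function. A minimal FFT-based sketch follows, assuming a square sampling grid with pitch dx and suppressing evanescent components; the names are illustrative.

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate a sampled complex field u0 over a distance z."""
            ny, nx = u0.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
            kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)   # evanescent waves dropped
            return np.fft.ifft2(np.fft.fft2(u0) * H)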

  5. Contact angle adjustment in equation-of-state-based pseudopotential model.

    Science.gov (United States)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  6. Comparison between the KBS-3 method and the deep borehole for final disposal of spent nuclear fuel

    International Nuclear Information System (INIS)

    Grundfelt, Bertil

    2010-09-01

    In this report a comparison is made between disposal of spent nuclear fuel according to the KBS-3 method and disposal in very deep boreholes. The objective has been to make a broad comparison between the two methods and, by doing so, to pinpoint factors that distinguish them from each other. The ambition has been to make as fair a comparison as possible, even though the quality of the relevant data differs considerably between the two methods

  7. Convexity Adjustments

    DEFF Research Database (Denmark)

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...

  8. Speed of adjustment: Evidence from Borsa Istanbul

    Directory of Open Access Journals (Sweden)

    Emrah Arioglu

    2014-06-01

    Full Text Available In this study, we investigate the speed of adjustment for leverage ratios of firms listed on Borsa Istanbul, in order to test the prediction of the trade-off theory regarding capital structure rebalancing. For this purpose, we estimate the speed of adjustment by using the Generalized Method of Moments system estimation technique. The results of this estimation suggest a speed of adjustment of approximately 29%. This significant speed of adjustment is consistent with the prediction of the trade-off theory, which suggests that firms follow target capital structures and, when their leverage ratios deviate from these targets, they make financial decisions with the goal of closing the gap between the previous year's leverage and the target leverage of the current period.
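
    The speed of adjustment is the lambda in the partial-adjustment model lev(t) - lev(t-1) = lambda * (target(t) - lev(t-1)) + e(t). The study estimates it with a GMM system estimator to deal with dynamic-panel bias; the sketch below only illustrates the underlying model with a naive pooled, no-intercept least squares on hypothetical firm-by-year arrays.

        import numpy as np

        def speed_of_adjustment(lev, target):
            """Pooled OLS estimate of lambda from 2-D (firms x years) arrays of
            observed leverage and estimated target leverage."""
            dlev = (lev[:, 1:] - lev[:, :-1]).ravel()    # lev(t) - lev(t-1)
            gap = (target[:, 1:] - lev[:, :-1]).ravel()  # target(t) - lev(t-1)
            return (gap @ dlev) / (gap @ gap)            # no-intercept least squares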

  9. Sickness presence, sick leave and adjustment latitude

    Directory of Open Access Journals (Sweden)

    Joachim Gerich

    2014-10-01

    Full Text Available Objectives: Previous research on the association between adjustment latitude (defined as the opportunity to adjust work efforts in case of illness and sickness absence and sickness presence has produced inconsistent results. In particular, low adjustment latitude has been identified as both a risk factor and a deterrent of sick leave. The present study uses an alternative analytical strategy with the aim of joining these results together. Material and Methods: Using a cross-sectional design, a random sample of employees covered by the Upper Austrian Sickness Fund (N = 930 was analyzed. Logistic and ordinary least square (OLS regression models were used to examine the association between adjustment latitude and days of sickness absence, sickness presence, and an estimator for the individual sickness absence and sickness presence propensity. Results: A high level of adjustment latitude was found to be associated with a reduced number of days of sickness absence and sickness presence, but an elevated propensity for sickness absence. Conclusions: Employees with high adjustment latitude experience fewer days of health complaints associated with lower rates of sick leave and sickness presence compared to those with low adjustment latitude. In case of illness, however, high adjustment latitude is associated with a higher pro­bability of taking sick leave rather than sickness presence.

  10. Localization of an Underwater Control Network Based on Quasi-Stable Adjustment

    Science.gov (United States)

    Chen, Xinhua; Zhang, Hongmei; Feng, Jie

    2018-01-01

    A common problem in the localization of underwater control networks is that the precision of the absolute coordinates of known points obtained by marine absolute measurement is poor, which seriously affects the precision of the whole network in traditional constraint adjustment. Therefore, considering that the precision of underwater baselines is good, we use it to carry out quasi-stable adjustment to amend known points before constraint adjustment so that the points fit the network shape better. In addition, we add unconstrained adjustment for quality control of underwater baselines, the observations of quasi-stable adjustment and constrained adjustment, to eliminate the unqualified baselines and improve the results' accuracy of the two adjustments. Finally, the modified method is applied to a practical LBL (Long Baseline) experiment and obtains a mean point location precision of 0.08 m, an improvement of 38% compared with the traditional method. PMID:29570627

  11. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices.

  12. Comparison of the performance of the CMS Hierarchical Condition Category (CMS-HCC) risk adjuster with the Charlson and Elixhauser comorbidity measures in predicting mortality.

    Science.gov (United States)

    Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A

    2010-08-20

    The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had statistically significant higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods with the highest drops for the CMS-HCC method. However, the CMS-HCC still performed as well or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. The CMS-HCC model is preferred over the Charlson and Elixhauser methods

  13. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    Science.gov (United States)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  14. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, November 2013

    International Nuclear Information System (INIS)

    De Saint Jean, C.; Dupont, E.; Dyrda, J.; Hursin, M.; Pelloni, S.; Ishikawa, M.; Ivanov, E.; Ivanova, T.; Kim, D.H.; Ee, Y.O.; Kodeli, I.; Leal, L.; Leichtle, D.; Palmiotti, G.; Salvatores, M.; Pronyaev, V.; Simakov, S.

    2013-11-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the first formal Subgroup 39 meeting held at the NEA, Issy-les-Moulineaux, France, on 28-29 November 2013. It comprises a Summary Record of the meeting and all the available presentations (slides) given by the participants: A - Recent data adjustments performances and trends: 1 - Recommendations from ADJ2010 adjustment (M. Ishikawa); 2 - Feedback on CIELO isotopes from ENDF/B-VII.0 adjustment (G. Palmiotti); 3 - Sensitivity and uncertainty results on FLATTOP-Pu (I. Kodeli); 4 - SG33 benchmark: Comparative adjustment results (S. Pelloni) 5 - Integral benchmarks for data assimilation: selection of a consistent set and establishment of integral correlations (E. Ivanov); 6 - PROTEUS experimental data (M. Hursin); 7 - Additional information on High Conversion Light Water Reactor (HCLWR aka FDWR-II) experiments (14 January 2014); 8 - Data assimilation of benchmark experiments for homogenous thermal/epithermal uranium systems (J. Dyrda); B - Methodology issues: 1 - Adjustment methodology issues (G. Palmiotti); 2 - Marginalisation, methodology issues and nuclear data parameter adjustment (C. De Saint Jean); 3 - Nuclear data parameter adjustment (G. Palmiotti). A list of issues and actions conclude the document

  15. Critical comparison between equation of motion-Green's function methods and configuration interaction methods: analysis of methods and applications

    International Nuclear Information System (INIS)

    Freed, K.F.; Herman, M.F.; Yeager, D.L.

    1980-01-01

    A description is provided of the common conceptual origins of many-body equations of motion and Green's function methods in Liouville operator formulations of the quantum mechanics of atomic and molecular electronic structure. Numerical evidence is provided to show the inadequacies of the traditional strictly perturbative approaches to these methods. Nonperturbative methods are introduced by analogy with techniques developed for handling large configuration interaction calculations and by evaluating individual matrix elements to higher accuracy. The important role of higher excitations is exhibited by the numerical calculations, and explicit comparisons are made between converged equations of motion and configuration interaction calculations for systems where a fundamental theorem requires the equality of the energy differences produced by these different approaches. (Auth.)

  16. The application of the Ten Group classification system (TGCS) in caesarean delivery case mix adjustment. A multicenter prospective study.

    Directory of Open Access Journals (Sweden)

    Gianpaolo Maso

    Full Text Available BACKGROUND: Caesarean delivery (CD) rates are commonly used as an indicator of quality in obstetric care and risk adjustment evaluation is recommended to assess inter-institutional variations. The aim of this study was to evaluate whether the Ten Group classification system (TGCS) can be used in case-mix adjustment. METHODS: Standardized data on 15,255 deliveries from 11 different regional centers were prospectively collected. Crude Risk Ratios of CDs were calculated for each center. Two multiple logistic regression models were herein considered by using: Model 1, maternal variables (age, Body Mass Index), obstetric variables (gestational age, fetal presentation, single or multiple, previous scar, parity, neonatal birth weight) and presence of risk factors; Model 2, TGCS either with or without maternal characteristics and presence of risk factors. Receiver Operating Characteristic (ROC) curves of the multivariate logistic regression analyses were used to assess the diagnostic accuracy of each model. The null hypothesis that Areas under the ROC Curve (AUC) were not different from each other was verified with a Chi Square test and post hoc pairwise comparisons by using a Bonferroni correction. RESULTS: Crude evaluation of CD rates showed all centers had significantly higher Risk Ratios than the referent. Both multiple logistic regression models reduced these variations. However the two methods ranked institutions differently: model 1 and model 2 (adjusted for TGCS) identified respectively nine and eight centers with significantly higher CD rates than the referent, with slightly different AUCs (0.8758 and 0.8929, respectively). In the model adjusted for TGCS and maternal characteristics/presence of risk factors, three centers had CD rates similar to the referent, with the best AUC (0.9024). CONCLUSIONS: The TGCS might be considered as a reliable variable to adjust CD rates. The addition of maternal characteristics and risk factors to TGCS substantially increase the

  17. Culture, cross-role consistency, and adjustment: testing trait and cultural psychology perspectives.

    Science.gov (United States)

    Church, A Timothy; Anderson-Harumi, Cheryl A; del Prado, Alicia M; Curtis, Guy J; Tanaka-Matsumi, Junko; Valdez Medina, José L; Mastor, Khairul A; White, Fiona A; Miramontes, Lilia A; Katigbak, Marcia S

    2008-09-01

    Trait and cultural psychology perspectives on cross-role consistency and its relation to adjustment were examined in 2 individualistic cultures, the United States (N=231) and Australia (N=195), and 4 collectivistic cultures, Mexico (N=199), the Philippines (N=195), Malaysia (N=217), and Japan (N=180). Cross-role consistency in trait ratings was evident in all cultures, supporting trait perspectives. Cultural comparisons of mean consistency provided support for cultural psychology perspectives as applied to East Asian cultures (i.e., Japan) but not collectivistic cultures more generally. Some but not all of the hypothesized predictors of consistency were supported across cultures. Cross-role consistency predicted aspects of adjustment in all cultures, but prediction was most reliable in the U.S. sample and weakest in the Japanese sample. Alternative constructs proposed by cultural psychologists--personality coherence, social appraisal, and relationship harmony--predicted adjustment in all cultures but were not, as hypothesized, better predictors of adjustment in collectivistic cultures than in individualistic cultures.

  18. Comparison of breeding methods for forage yield in red clover

    Directory of Open Access Journals (Sweden)

    Libor Jalůvka

    2009-01-01

    Full Text Available Three methods of red clover (Trifolium pratense L.) breeding for forage yield were compared in two harvest years at locations in Bredelokke (Denmark), Hladké Životice (Czech Republic) and Les Alleuds (France). Three types of 46 candivars, developed by (A) recurrent selection in subsequent generations (37 candivars, divided into early and late groups), (B) polycross progenies (4 candivars) and (C) geno-phenotypic selection (5 candivars), were compared. The trials were sown in 2005 and cut three times in 2006 and 2007; their evaluation is based primarily on total yield of dry matter. The candivars developed by polycross and geno-phenotypic selection gave significantly higher yields than candivars from the recurrent selection. However, the candivars developed by methods B and C did not differ significantly. The candivars developed by these progressive methods were suited to the higher-yielding and drier environment in Hladké Životice (which had the highest yield level even though average annual precipitation was lower by 73 and 113 mm in comparison with the other locations, respectively); there, average yield was higher by 19 and 13% for methods B and C in comparison with method A. A highly significant interaction of the candivars with locations was found. It can be concluded that varieties specifically aimed at different locations should be bred by methods B and C; the parental entries should also be selected there.

  19. VerSi. A method for the quantitative comparison of repository systems

    Energy Technology Data Exchange (ETDEWEB)

    Kaempfer, T.U.; Ruebel, A.; Resele, G. [AF-Consult Switzerland Ltd, Baden (Switzerland); Moenig, J. [GRS Braunschweig (Germany)

    2015-07-01

    Decision making and design processes for radioactive waste repositories are guided by safety goals that need to be achieved. In this context, the comparison of different disposal concepts can provide relevant support to better understand the performance of the repository systems. Such a task requires a method for a traceable comparison that is as objective as possible. We present a versatile method that allows for the comparison of different disposal concepts in potentially different host rocks. The condition for the method to work is that the repository systems are defined to a comparable level, including designed repository structures, disposal concepts, and engineered and geological barriers, which are all based on site-specific safety requirements. The method is primarily based on quantitative analyses and probabilistic model calculations regarding the long-term safety of the repository systems under consideration. The crucial evaluation criteria for the comparison are statistical key figures of indicators that characterize the radiotoxicity flux out of the so-called containment-providing rock zone (einschlusswirksamer Gebirgsbereich). The key figures account for existing uncertainties with respect to the actual site properties, the safety relevant processes, and the potential future impact of external processes on the repository system, i.e., they include scenario-, process-, and parameter-uncertainties. The method (1) leads to an evaluation of the retention and containment capacity of the repository systems and its robustness with respect to existing uncertainties as well as to potential external influences; (2) specifies the procedures for the system analyses and the calculation of the statistical key figures as well as for the comparative interpretation of the key figures; and (3) also gives recommendations and sets benchmarks for the comparative assessment of the repository systems under consideration based on the key figures and additional qualitative

  20. Adjustment of pipe flow explicit friction factor equations for application to tube bundles

    International Nuclear Information System (INIS)

    Wiltz, Christopher L.; Bowen, Mike D.; Von Olnhausen, Wayne A.

    2005-01-01

    Full text of publication follows: The accurate determination of single phase friction losses or friction pressure drop in tube bundles is essential in the thermal-hydraulic analyses of components such as nuclear fuel assemblies, heat exchangers and steam generators. Such friction losses are normally calculated using a friction factor, f, along with the experimental observation that the friction pressure drop in a pipe is proportional to the dynamic pressure (½ρV²) of the flow: ΔP = ½ρV²(fL/D). In this equation L is the pipe or tube bundle length and D is the hydraulic diameter of the pipe or tube bundle. The friction factor is normally calculated using one of a number of explicit friction factor equations. A significant amount of work has been accomplished in developing explicit friction factor equations. These explicit equations range from approximations, which were developed for ease of numerical evaluation, to those which are mathematically complex but yield very good fits to the test data. These explicit friction factor equations are based on a large experimental data base, nearly all of which comes from pipe flow geometry information, and have been historically applied to tube bundles. This paper presents an adjustment method which may be applied to various explicit friction factor equations developed for pipe flow to accurately predict the friction factor for tube bundles. The characteristic of the adjustment is based on experimental friction pressure loss data obtained by Framatome ANP through flow testing of a nuclear fuel assembly (tube bundle) at its Richland Test Facility (RTF). Through adjustment of previously developed explicit friction factor equations for pipe flow, the vast amount of historical development and experimentation in the area of single phase pipe flow friction loss may be incorporated into the evaluation of single phase friction losses within tube bundles. Comparisons of the application of one or more of the previously
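
    The friction pressure drop formula quoted above is straightforward to evaluate once f is known. The sketch below pairs it with the Haaland explicit approximation to the Colebrook equation, a typical pipe-flow friction factor correlation of the kind the paper adjusts; the k_bundle multiplier is a hypothetical placeholder for the tube-bundle adjustment characteristic, which the record does not specify.

        import numpy as np

        def haaland_friction_factor(re, rel_roughness):
            """Haaland explicit approximation to the Colebrook equation
            for the Darcy friction factor in turbulent pipe flow."""
            return (-1.8 * np.log10((rel_roughness / 3.7) ** 1.11 + 6.9 / re)) ** -2

        def friction_pressure_drop(rho, v, length, d_hyd, re, rel_roughness,
                                   k_bundle=1.0):
            """dP = (1/2) rho V^2 (f L / D); k_bundle is a hypothetical
            multiplier standing in for the tube-bundle correction."""
            f = k_bundle * haaland_friction_factor(re, rel_roughness)
            return 0.5 * rho * v ** 2 * f * length / d_hyd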

  1. Removal of phenol from water : a comparison of energization methods

    NARCIS (Netherlands)

    Grabowski, L.R.; Veldhuizen, van E.M.; Rutgers, W.R.

    2005-01-01

    Direct electrical energization methods for removal of persistent substances from water are under investigation in the framework of the ytriD-project. The emphasis of the first stage of the project is the energy efficiency. A comparison is made between a batch reactor with a thin layer of water and

  2. Comparison of multivariate methods for studying the G×E interaction

    Directory of Open Access Journals (Sweden)

    Deoclécio Domingos Garbuglio

    2015-12-01

    Full Text Available The objective of this work was to evaluate three statistical multivariate methods for analyzing adaptability and environmental stratification simultaneously, using data from maize cultivars indicated for planting in the State of Paraná, Brazil. Under the FGGE and GGE methods, the genotypic effect adjusts the G×E interactions across environments, resulting in a high percentage of explanation associated with a smaller number of axes. Environmental stratification via the FGGE and GGE methods showed similar responses, while the AMMI method did not ensure grouping of environments. The adaptability analysis revealed low divergence patterns of the responses obtained through the three methods. Genotypes P30F35, P30F53, P30R50, P30K64 and AS 1570 showed high yields associated with general adaptability. The FGGE method allowed differences in yield responses in specific regions and the impact in locations belonging to the same environmental group (through rE) to be associated with the level of the simple portion of the G×E interaction.

  3. National comparison on volume sample activity measurement methods

    International Nuclear Information System (INIS)

    Sahagia, M.; Grigorescu, E.L.; Popescu, C.; Razdolescu, C.

    1992-01-01

    A national comparison on volume sample activity measurement methods may be regarded as a step toward accomplishing the traceability of environmental and food chain activity measurements to national standards. For this purpose, the Radionuclide Metrology Laboratory has distributed ¹³⁷Cs and ¹³⁴Cs water-equivalent solid standard sources to 24 laboratories having responsibilities in this matter. Every laboratory has to measure the activity of the received source(s) by using its own standards, equipment and methods and report the obtained results to the organizer. The 'measured activities' will be compared with the 'true activities'. A final report will be issued, which plans to evaluate the national level of precision of such measurements and give some suggestions for improvement. (Author)

  4. 77 FR 69442 - Federal Acquisition Regulation; Information Collection; Economic Price Adjustment

    Science.gov (United States)

    2012-11-19

    ...; Information Collection; Economic Price Adjustment AGENCIES: Department of Defense (DOD), General Services... economic price adjustment. Public comments are particularly invited on: Whether this collection of..., Economic Price Adjustment by any of the following methods: Regulations.gov : http://www.regulations.gov...

  5. A novel suture method to place and adjust peripheral nerve catheters

    DEFF Research Database (Denmark)

    Rothe, C.; Steen-Hansen, C.; Madsen, M. H.

    2015-01-01

    We have developed a peripheral nerve catheter, attached to a needle, which works like an adjustable suture. We used in-plane ultrasound guidance to place 45 catheters close to the femoral, saphenous, sciatic and distal tibial nerves in cadaver legs. We displaced catheters after their initial...

  6. Steroid hormones in environmental matrices: extraction method comparison.

    Science.gov (United States)

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids, using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability of the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE broadens the choice of methods available for environmental sample analysis.

  7. Comparison of potential method in analytic hierarchy process for multi-attribute of catering service companies

    Science.gov (United States)

    Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah

    2017-08-01

    Analytic Hierarchy Process (AHP) is a method used in structuring, measuring and synthesizing criteria, in particular for ranking multiple criteria in decision-making problems. The Potential Method, on the other hand, is a ranking procedure that utilizes a preference graph G(V, A): two nodes are adjacent if they are compared in a pairwise comparison, with the assigned arc oriented towards the more preferred node. In this paper the Potential Method is used to solve a catering service selection problem, and its results are compared with those of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
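
    For reference, the conventional AHP prioritization against which such ranking procedures are compared takes the principal eigenvector of the reciprocal pairwise-comparison matrix. The sketch below implements that standard eigenvector method together with Saaty's consistency ratio; it does not implement the Potential Method or Extent Analysis themselves.

        import numpy as np

        def ahp_priorities(A):
            """Priority weights and consistency ratio (for n <= 10) from a
            reciprocal pairwise-comparison matrix A."""
            vals, vecs = np.linalg.eig(A)
            k = np.argmax(vals.real)              # principal eigenvalue
            w = np.abs(vecs[:, k].real)
            w /= w.sum()
            n = A.shape[0]
            ci = (vals[k].real - n) / (n - 1)     # consistency index
            ri = [0, 0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49][n - 1]
            return w, (ci / ri if ri else 0.0)

    For example, the 3x3 matrix np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]) yields weights of roughly (0.65, 0.23, 0.12) with a consistency ratio near zero.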

  8. A comparison of surveillance methods for small incidence rates

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Woodall, William H.; Reynolds, Marion R.

    2008-05-15

A number of methods have been proposed to detect an increasing shift in the incidence rate of a rare health event, such as a congenital malformation. Among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the performance of the Sets method and its modifications to the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criterion. We used the steady-state average run length to measure chart performance instead of the average run length, which was used in nearly all previous comparisons involving the Sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state average run length performance than the Sets method and its modifications for the extensive number of cases considered.
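
    As a concrete illustration of the recommended chart, here is a minimal Bernoulli CUSUM sketch: each trial adds the log-likelihood ratio of the out-of-control rate p1 against the in-control rate p0, clipped at zero, and the chart signals when the statistic crosses a threshold h. The rates, threshold, and simulated shift are assumptions for illustration, not the paper's design values.

```python
# Schematic Bernoulli CUSUM for surveillance of a rare health event.
import math
import random

def bernoulli_cusum(outcomes, p0=0.001, p1=0.002, h=4.0):
    lr_event = math.log(p1 / p0)              # increment for an event
    lr_none = math.log((1 - p1) / (1 - p0))   # increment for a non-event
    s = 0.0
    for t, x in enumerate(outcomes, start=1):
        s = max(0.0, s + (lr_event if x else lr_none))
        if s >= h:
            return t          # first trial at which the chart signals
    return None               # no signal within the observed sequence

random.seed(1)
seq = [random.random() < 0.001 for _ in range(5000)] + \
      [random.random() < 0.003 for _ in range(5000)]   # rate shift halfway
print("signal at trial:", bernoulli_cusum(seq))
```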

  9. Workplace sitting and height-adjustable workstations: a randomized controlled trial.

    Science.gov (United States)

    Neuhaus, Maike; Healy, Genevieve N; Dunstan, David W; Owen, Neville; Eakin, Elizabeth G

    2014-01-01

Desk-based office employees sit for most of their working day. To address excessive sitting as a newly identified health risk, best practice frameworks suggest a multi-component approach. However, these approaches are resource intensive and knowledge about their impact is limited. To compare the efficacy of a multi-component intervention to reduce workplace sitting time, to a height-adjustable workstations-only intervention, and to a comparison group (usual practice). Three-arm quasi-randomized controlled trial in three separate administrative units of the University of Queensland, Brisbane, Australia. Data were collected between January and June 2012 and analyzed the same year. Desk-based office workers aged 20-65 (multi-component intervention, n=16; workstations-only, n=14; comparison, n=14). The multi-component intervention comprised installation of height-adjustable workstations and organizational-level (management consultation, staff education, manager e-mails to staff) and individual-level (face-to-face coaching, telephone support) elements. Workplace sitting time (minutes/8-hour workday) was assessed objectively via activPAL3 devices worn for 7 days at baseline and 3 months (end-of-intervention). At baseline, the mean proportion of workplace sitting time was approximately 77% across all groups (multi-component group 366 minutes/8 hours [SD=49]; workstations-only group 373 minutes/8 hours [SD=36], comparison 365 minutes/8 hours [SD=54]). Following intervention and relative to the comparison group, workplace sitting time in the multi-component group was reduced by 89 minutes/8-hour workday (95% CI=-130, -47 minutes). These findings may have important practical and financial implications for workplaces targeting reductions in sitting time. Australian New Zealand Clinical Trials Registry 00363297. © 2013 American Journal of Preventive Medicine. All rights reserved.

  10. Bayesian approach to data adjustment and its applications in reactor physics

    International Nuclear Information System (INIS)

    Nir, I.

    1980-01-01

A recently proposed method, dealing with the treatment of uncertainties and discrepancies among different sets of data measuring the same variables, is considered. The method is based on Bayesian principles and the concept of information entropy. It extracts from the data an estimation of negligence errors, which are then removed to adjust the original data. The adjusted data, now mutually consistent, are unified to yield an enlarged ensemble, from which improved estimates of the variables can then be made. The adjustment procedure is developed for the most general case of any number of data sets and variables, with or without available prior information on the negligence errors. A measure of the adjustment quality is devised. It is used to assess the likelihood of suspected negligence error sources and the reliability of the prior. The validity of the method is demonstrated in two ways: first, it is shown that in the appropriate limits, the proposed formalism reduces to well established results. Secondly, the method is applied to specific examples of adjusting fast reactor group cross-sections and integral measurements. It is shown that the method can reproduce, to a large degree, the results of detailed re-evaluation and revision of inconsistent data without using this detailed information, thus proving the usefulness of the method in estimating negligence errors and identifying their sources
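
    The entropy-based negligence-error machinery of the paper is not reproduced here. As a minimal baseline, the sketch below shows the classical inverse-variance combination of discrepant measurements of the same variable, together with the chi-square consistency check that motivates this kind of adjustment (all values are illustrative).

```python
# Baseline inverse-variance combination with a consistency check.
import numpy as np

x = np.array([1.02, 0.97, 1.15])      # three measurements (illustrative)
s = np.array([0.03, 0.04, 0.03])      # their reported standard deviations

w = 1.0 / s**2
xbar = np.sum(w * x) / np.sum(w)      # combined estimate
sbar = np.sqrt(1.0 / np.sum(w))       # its standard deviation

# chi-square per degree of freedom >> 1 signals mutual inconsistency,
# i.e. unaccounted (negligence) errors in at least one data set
chi2_dof = np.sum(((x - xbar) / s) ** 2) / (len(x) - 1)
print(f"combined: {xbar:.3f} +/- {sbar:.3f}, chi2/dof = {chi2_dof:.2f}")
```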

  11. A comparison of in vivo and in vitro methods for determining availability of iron from meals

    International Nuclear Information System (INIS)

    Schricker, B.R.; Miller, D.D.; Rasmussen, R.R.; Van Campen, D.

    1981-01-01

A comparison is made between in vitro and human and rat in vivo methods for estimating food iron availability. Complex meals formulated to replicate meals used by Cook and Monsen (Am J Clin Nutr 1976;29:859) in human iron availability trials were used in the comparison. The meals were prepared by substituting pork, fish, cheese, egg, liver, or chicken for beef in two basic test meals and were evaluated for iron availability using in vitro and rat in vivo methods. When the criterion for comparison was the ability to show statistically significant differences between iron availability in the various meals, there was substantial agreement between the in vitro and human in vivo methods. There was less agreement between the human in vivo and the rat in vivo methods, and between the in vitro and the rat in vivo methods. Correlation analysis indicated significant agreement between the in vitro and human in vivo methods. Correlations between the rat in vivo and human in vivo methods were also significant, but correlations between the in vitro and rat in vivo methods were weaker and, in some cases, not significant. The comparison supports the contention that the in vitro method allows a rapid, inexpensive, and accurate estimation of nonheme iron availability in complex meals

  12. Comparison between PET template-based method and MRI-based method for cortical quantification of florbetapir (AV-45) uptake in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Saint-Aubert, L.; Nemmi, F.; Peran, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Barbeau, E.J. [Universite de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France, CNRS, CerCo, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Payoux, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Medecine Nucleaire, Pole Imagerie, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Chollet, F.; Pariente, J. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France)

    2014-05-15

Florbetapir (AV-45) has been shown to be a reliable tool for assessing in vivo amyloid load in patients with Alzheimer's disease from the early stages. However, nonspecific white matter binding has been reported in healthy subjects as well as in patients with Alzheimer's disease. To avoid this issue, cortical quantification might increase the reliability of AV-45 PET analyses. In this study, we compared two quantification methods for AV-45 binding, a classical method relying on PET template registration (route 1), and an MRI-based method (route 2) for cortical quantification. We recruited 22 patients at the prodromal stage of Alzheimer's disease and 17 matched controls. AV-45 binding was assessed using both methods, and target-to-cerebellum mean global standard uptake values (SUVr) were obtained for each of them, together with SUVr in specific regions of interest. Quantification using the two routes was compared between the clinical groups (intragroup comparison), and between groups for each route (intergroup comparison). Discriminant analysis was performed. In the intragroup comparison, differences in uptake values were observed between route 1 and route 2 in both groups. In the intergroup comparison, AV-45 uptake was higher in patients than controls in all regions of interest using both methods, but the effect size of this difference was larger using route 2. In the discriminant analysis, route 2 showed a higher specificity (94.1 % versus 70.6 %), despite a lower sensitivity (77.3 % versus 86.4 %), and D-prime values were higher for route 2. These findings suggest that, although both quantification methods enabled patients at early stages of Alzheimer's disease to be well discriminated from controls, PET template-based quantification seems adequate for clinical use, while the MRI-based cortical quantification method led to greater intergroup differences and may be more suitable for use in current clinical research. (orig.)
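
    The part that differs between the two routes is how the regions of interest are obtained (PET template registration versus MRI-based cortical segmentation); the final quantification step is shared. A minimal sketch of that shared step, computing a target-to-cerebellum SUVr from ROI masks on a toy volume, follows (all data are synthetic).

```python
# Minimal sketch of target-to-cerebellum SUVr computation from ROI masks;
# the ROI extraction/registration that distinguishes routes 1 and 2 is
# not shown here.
import numpy as np

def suvr(pet, target_mask, cerebellum_mask):
    """Mean uptake in the target ROI divided by mean uptake in cerebellum."""
    return pet[target_mask].mean() / pet[cerebellum_mask].mean()

rng = np.random.default_rng(0)
pet = rng.gamma(shape=2.0, scale=0.5, size=(64, 64, 64))   # toy PET volume
target = np.zeros(pet.shape, dtype=bool); target[20:30, 20:30, 20:30] = True
cereb = np.zeros(pet.shape, dtype=bool); cereb[40:50, 40:50, 10:20] = True
print("SUVr:", round(suvr(pet, target, cereb), 3))
```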

  13. Adjoint Methods for Adjusting Three-Dimensional Atmosphere and Surface Properties to Fit Multi-Angle Multi-Pixel Polarimetric Measurements

    Science.gov (United States)

    Martin, William G.; Cairns, Brian; Bal, Guillaume

    2014-01-01

This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
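
    The scaling argument can be made concrete on a toy linear model: for a misfit J(p) = 0.5 * ||F p - d||^2, the gradient F^T (F p - d) costs one forward and one adjoint application of F, independent of the number of parameters. The sketch below illustrates only this property; the paper's forward model is the nonlinear 3D VRTE, not a matrix.

```python
# Toy illustration of the adjoint idea on a linear forward model F.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_param = 50, 2000
F = rng.normal(size=(n_data, n_param))     # stands in for the forward solver
d = rng.normal(size=n_data)                # measurements
p = np.zeros(n_param)

def misfit_and_grad(p):
    r = F @ p - d                          # one "forward solve"
    return 0.5 * r @ r, F.T @ r            # one "adjoint solve"

# a few steepest-descent steps guided by the adjoint gradient
for _ in range(100):
    J, g = misfit_and_grad(p)
    p -= 1e-4 * g
print("final misfit:", round(J, 3))
```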

  14. [Risk stratification of patients with diabetes mellitus undergoing coronary artery bypass grafting--a comparison of statistical methods].

    Science.gov (United States)

    Arnrich, B; Albert, A; Walter, J

    2006-01-01

Among the coronary bypass patients from our Datamart database, we found a prevalence of 29.6% of diagnosed diabetics. 5.2% of the patients without a diagnosis of diabetes mellitus and with a fasting plasma glucose level > 125 mg/dl were defined as undiagnosed diabetics. The objective of this paper was to compare univariate methods and techniques for risk stratification to determine whether undiagnosed diabetes is per se a risk factor for increased ventilation time and length of ICU stay, and for increased prevalence of resuscitation, reintubation and 30-day mortality for diabetics in heart surgery. Univariate comparisons reveal that undiagnosed diabetics needed resuscitation significantly more often and had an increased ventilation time, while their length of ICU stay was significantly reduced. The significantly different distribution between the diabetic groups of 11 of the 32 attributes examined demands the use of methods for risk stratification. Both risk-adjusted methods, regression and matching, confirm that undiagnosed diabetics had an increased ventilation time and an increased prevalence of resuscitation, while the length of ICU stay was not significantly reduced. A homogeneous distribution of patient characteristics across the two diabetic groups could be achieved through a statistical matching method using the propensity score. In contrast to the regression analysis, a significantly increased prevalence of reintubation in undiagnosed diabetics was found. Based on the example of undiagnosed diabetics in heart surgery, the presented study reveals the necessity and the possibilities of techniques for risk stratification in retrospective analyses and shows how the potential of data collected in daily clinical practice can be used in an effective way.
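
    A minimal sketch of the propensity-score matching step described above, on simulated data: fit a logistic model for group membership, then greedily match each undiagnosed diabetic to the nearest control on the score, without replacement. The covariates, prevalence and matching rule are illustrative assumptions, not the study's protocol.

```python
# Propensity-score estimation plus greedy 1:1 nearest-neighbour matching.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                     # patient characteristics
p_group = 1 / (1 + np.exp(-(X[:, 0] - 1.5)))      # ~18% prevalence
g = rng.binomial(1, p_group)                      # 1 = undiagnosed diabetic

ps = LogisticRegression().fit(X, g).predict_proba(X)[:, 1]

treated = np.flatnonzero(g == 1)
controls = list(np.flatnonzero(g == 0))
pairs = []
for i in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))  # nearest neighbour
    pairs.append((i, j))
    controls.remove(j)                                   # without replacement

gap = np.mean([abs(ps[i] - ps[j]) for i, j in pairs])
print(f"{len(pairs)} matched pairs, mean |score gap| = {gap:.4f}")
```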

  15. 77 FR 29982 - Federal Acquisition Regulation; Submission for OMB Review; Davis Bacon Act-Price Adjustment...

    Science.gov (United States)

    2012-05-21

    ...; Submission for OMB Review; Davis Bacon Act-Price Adjustment (Actual Method) AGENCY: Department of Defense... previously approved information collection requirement concerning the Davis-Bacon Act price adjustment... Bacon Act-Price Adjustment (Actual Method), by any of the following methods: Regulations.gov : http...

  16. Development and adjustment of programs for solving systems of linear equations

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1978-03-01

Programs for solving systems of linear equations have been adjusted and developed to expand the scientific subroutine library SSL. The principal programs adjusted are based on the congruent method, the method of the product form of the inverse, the orthogonal method, Crout's method for sparse systems, and acceleration of iterative methods. The programs developed are based on the escalator method, the direct parallel residue method, and the block tridiagonal method for band systems. The usage of the developed programs and their planned improvements are described. FORTRAN listings with simple examples from tests of the programs are also given. (auth.)
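
    As an illustration of one of the listed methods, here is a minimal, unpivoted Crout LU decomposition with forward and back substitution. This is a textbook sketch, not the SSL FORTRAN routines themselves.

```python
# Crout LU (unit diagonal on U), no pivoting, for well-conditioned systems.
import numpy as np

def crout_solve(A, b):
    n = len(A)
    L = np.zeros((n, n)); U = np.eye(n)
    for j in range(n):
        for i in range(j, n):                     # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for i in range(j + 1, n):                 # row j of U (unit diagonal)
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    y = np.zeros(n)                               # forward solve L y = b
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)                               # back solve U x = y
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - U[i, i+1:] @ x[i+1:]
    return x

A = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])  # tridiagonal test
b = np.array([1., 2., 3.])
print(crout_solve(A, b), np.linalg.solve(A, b))           # should agree
```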

  17. National Comparison of Hospital Performances in Lung Cancer Surgery: The Role Of Casemix Adjustment.

    Science.gov (United States)

    Beck, Naomi; Hoeijmakers, Fieke; van der Willik, Esmee M; Heineman, David J; Braun, Jerry; Tollenaar, Rob A E M; Schreurs, Wilhelmina H; Wouters, Michel W J M

    2018-04-03

When comparing hospitals on outcome indicators, proper adjustment for casemix (a combination of patient and disease characteristics) is indispensable. This study examines the need for casemix adjustment in evaluating hospital outcomes for Non-Small Cell Lung Cancer (NSCLC) surgery. Data from the Dutch Lung Cancer Audit for Surgery were used to validate factors associated with postoperative 30-day mortality and complicated course, using multivariable logistic regression models. Between-hospital variation in casemix was studied by calculating medians and interquartile ranges for separate factors at hospital level and the 'expected' outcomes per hospital as a composite measure. 8040 patients, distributed over 51 Dutch hospitals, were included in the analysis. Mean observed postoperative mortality and complicated course were 2.2% and 13.6%, respectively. Age, ASA classification, ECOG performance score, lung function, extent of resection, tumor stage and postoperative histopathology were individually significant predictors for both postoperative mortality and complicated course. A considerable variation of these casemix factors between hospital populations was observed, with the expected mortality and complicated course per hospital ranging from 1.4 to 3.2% and 11.5 to 17.1%, respectively. The between-hospital variation in casemix of patients undergoing surgery for NSCLC emphasizes the importance of proper adjustment when comparing hospitals on outcome indicators. Copyright © 2018. Published by Elsevier Inc.
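
    A minimal sketch of the 'expected outcome' construction on simulated data: fit a patient-level logistic model for mortality on casemix factors, sum the predicted risks within each hospital to obtain the expected count, and compare with the observed count. The covariates and effect sizes are placeholders, not the audit's fitted model.

```python
# Observed/expected (O/E) mortality per hospital from a casemix model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 8000
X = rng.normal(size=(n, 4))                  # stand-ins for casemix factors
hospital = rng.integers(0, 51, size=n)
logit = -3.8 + 0.5 * X[:, 0] + 0.3 * X[:, 1]
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

risk = LogisticRegression().fit(X, died).predict_proba(X)[:, 1]

for h in range(3):                           # first three hospitals
    idx = hospital == h
    observed, expected = died[idx].sum(), risk[idx].sum()
    print(f"hospital {h}: O={observed}, E={expected:.1f}, "
          f"O/E={observed / expected:.2f}")
```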

  18. A method for statistical comparison of data sets and its uses in analysis of nuclear physics data

    International Nuclear Information System (INIS)

    Bityukov, S.I.; Smirnova, V.V.; Krasnikov, N.V.; Maksimushkina, A.V.; Nikitenko, A.N.

    2014-01-01

The authors propose a method for the statistical comparison of two data sets. The method is based on the statistical comparison of histograms. As an estimator of the quality of the decision made, it is proposed to use a value that can be interpreted as the probability that the decision (that the data sets differ) is correct

  19. Use of surveillance data for prevention of healthcare-associated infection: risk adjustment and reporting dilemmas.

    LENUS (Irish Health Repository)

    O'Neill, Eoghan

    2009-08-01

    Healthcare-associated or nosocomial infection (HCAI) is of increasing importance to healthcare providers and the public. Surveillance is crucial but must be adjusted for risk, especially when used for interhospital comparisons or for public reporting.

  20. Peer- and Self-Rated Correlates of a Teacher-Rated Typology of Child Adjustment

    Science.gov (United States)

    Lindstrom, William A., Jr.; Lease, A. Michele; Kamphaus, Randy W.

    2007-01-01

    External correlates of a teacher-rated typology of child adjustment developed using the Behavior Assessment System for Children were examined. Participants included 377 elementary school children recruited from 26 classrooms in the southeastern United States. Multivariate analyses of variance and planned comparisons were used to determine whether…

  1. Description of comparison method for evaluating spatial or temporal homogeneity of environmental monitoring data

    International Nuclear Information System (INIS)

    Mecozzi, M.; Cicero, A.M.

    1995-01-01

In this paper a comparison method to verify the homogeneity or inhomogeneity of environmental monitoring data is described. The comparison method is based on the simultaneous application of three statistical tests: one-way ANOVA, Kruskal-Wallis and one-way IANOVA. Robust tests such as IANOVA and Kruskal-Wallis can be more efficient than the usual ANOVA methods because they are resistant to outliers and to deviations from the normal distribution of the data. The study shows that a result on the presence or absence of homogeneity in the data set is validated when it is confirmed by at least two of the tests
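
    A minimal sketch of the agreement rule on simulated monitoring data, using the two of the three tests that are available in SciPy (one-way ANOVA and Kruskal-Wallis; a robust IANOVA implementation is not part of SciPy and is omitted). With only two tests, "at least two" means both must reject.

```python
# Two-test agreement rule for homogeneity of monitoring data.
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(0)
site_a = rng.normal(10.0, 1.0, 30)        # monitoring data from three sites
site_b = rng.normal(10.2, 1.0, 30)
site_c = rng.normal(12.0, 1.0, 30)        # divergent site

p_anova = f_oneway(site_a, site_b, site_c).pvalue
p_kw = kruskal(site_a, site_b, site_c).pvalue

n_reject = sum(p < 0.05 for p in (p_anova, p_kw))
verdict = "inhomogeneous" if n_reject >= 2 else "homogeneous"
print(f"ANOVA p={p_anova:.4f}, Kruskal-Wallis p={p_kw:.4f} -> {verdict}")
```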

  2. Comparison of methods and instruments for 222Rn/220Rn progeny measurement

    International Nuclear Information System (INIS)

    Liu Yanyang; Shang Bing; Wu Yunyun; Zhou Qingzhi

    2012-01-01

In this paper, comparisons were made among three methods of measurement (grab measurement, continuous measurement and integrating measurement), and also among different instruments, in a radon/thoron mixed chamber. Taking the optimized five-segment method as the comparison criterion, for the equilibrium-equivalent concentration of 222Rn the results of the BWLM and the 24 h integrating detectors are 31% and 29% higher than the criterion, while the result of the WLx is 20% lower; for 220Rn progeny, the results of the Fiji-142, Kf-602D, BWLM and 24 h integrating detector are 86%, 18%, 28% and 36% higher than the criterion, respectively, except that of the WLx, which is 5% lower. The differences found require further research. (authors)

  3. Drive Beam Quadrupoles for the CLIC Project: a Novel Method of Fiducialisation and a New Micrometric Adjustment System

    CERN Document Server

    AUTHOR|(SzGeCERN)411678; Duquenne, Mathieu; Sandomierski, Jacek; Sosin, Mateusz; Rude, Vivien

    2014-01-01

This paper presents a new method of fiducialisation, applied to determine the magnetic axis of the Drive Beam quadrupole of the CLIC project with respect to external alignment fiducials, with micrometric accuracy and precision. It also introduces a new micrometric adjustment system with 5 degrees of freedom, developed for the same Drive Beam quadrupole. The combination of both developments opens interesting perspectives for a simpler and more accurate alignment of the quadrupoles.

  4. Comparison of hardenability calculation methods of the heat-treatable constructional steels

    Energy Technology Data Exchange (ETDEWEB)

    Dobrzanski, L.A.; Sitek, W. [Division of Tool Materials and Computer Techniques in Metal Science, Silesian Technical University, Gliwice (Poland)

    1995-12-31

An evaluation has been made of the consistency of the calculated hardenability curves of selected heat-treatable alloyed constructional steels with experimental data. The study was conducted based on an analysis of the present state of knowledge on hardenability calculation employing neural network methods. Several calculation examples and a comparison of the consistency of the calculation methods employed are included. (author). 35 refs, 2 figs, 3 tabs.

  5. Comparison of three retail data communication and transmission methods

    Directory of Open Access Journals (Sweden)

    MA Yue

    2016-04-01

Full Text Available With the rapid development of retail trade, the type and complexity of data keep increasing, and individual data files differ greatly in size. How to realize accurate, real-time and efficient data transmission at a fixed cost is an important problem. Regarding the problem of effective transmission of business data files, this article analyzes and compares three existing data transmission methods. Considering requirements such as functionality in an enterprise data communication system, it concludes which method can better meet the enterprise's daily business development requirements while offering good extensibility.

  6. The density, the refractive index and the adjustment of the excess thermodynamic properties by means of the multiple linear regression method for the ternary system ethylbenzene–octane–propylbenzene

    International Nuclear Information System (INIS)

    Lisa, C.; Ungureanu, M.; Cosmaţchi, P.C.; Bolat, G.

    2015-01-01

Graphical abstract. - Highlights: • Thermodynamic properties of the ethylbenzene–octane–propylbenzene system. • Equations with much lower standard deviations in comparison with other models. • The prediction of V^E based on the refractive index by means of the MLR method. - Abstract: The density (ρ) and the refractive index (n) have been experimentally determined for the ethylbenzene (1)–octane (2)–propylbenzene (3) ternary system over the entire composition range, at three temperatures (298.15, 308.15 and 318.15 K) and a pressure of 0.1 MPa. The excess thermodynamic properties calculated from the experimental determinations have been used to build empirical models which, despite the disadvantage of having a greater number of coefficients, give much lower standard deviations in comparison with the Redlich–Kister type models. The statistical processing of the experimental data by means of the multiple linear regression (MLR) method was used to model the excess thermodynamic properties; here too, lower standard deviations than with the Redlich–Kister type models were obtained. The adjustment of the excess molar volume (V^E) based on the refractive index, using the multiple linear regression routine of the SigmaPlot 11.2 program, was made for the ethylbenzene (1)–octane (2)–propylbenzene (3) ternary system, yielding a simple mathematical model which correlates the excess molar volume with the refractive index, the normalized temperature and the composition of the ternary mixture: V^E = A0 + A1·X1 + A2·X2 + A3·(T/298.15) + A4·n, for which the standard deviation is 0.03.
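
    A minimal sketch of fitting the reported model by ordinary least squares on synthetic data. The paper fitted measured densities and refractive indices using SigmaPlot 11.2; the coefficients and noise below are invented for illustration.

```python
# OLS fit of V^E = A0 + A1*X1 + A2*X2 + A3*(T/298.15) + A4*n.
import numpy as np

rng = np.random.default_rng(0)
m = 60
X1, X2 = rng.uniform(0, 1, m), rng.uniform(0, 1, m)      # mole fractions
T = rng.choice([298.15, 308.15, 318.15], m)              # temperatures, K
n_D = 1.40 + 0.05 * X1 + 0.04 * X2 + rng.normal(0, 1e-3, m)   # refr. index
VE = (-0.2 + 0.3 * X1 + 0.25 * X2 + 0.1 * (T / 298.15)
      + 0.5 * n_D + rng.normal(0, 0.03, m))              # synthetic V^E

M = np.column_stack([np.ones(m), X1, X2, T / 298.15, n_D])
coef, res, *_ = np.linalg.lstsq(M, VE, rcond=None)
sigma = np.sqrt(res[0] / (m - M.shape[1]))               # standard deviation
print("A0..A4 =", coef.round(3), " s =", round(float(sigma), 3))
```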

  7. Comparison of continuum and atomistic methods for the analysis of InAs/GaAs quantum dots

    DEFF Research Database (Denmark)

    Barettin, D.; Pecchia, A.; Penazzi, G.

    2011-01-01

We present a comparison of continuum k · p and atomistic empirical Tight Binding methods for the analysis of the optoelectronic properties of InAs/GaAs quantum dots.

  8. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Science.gov (United States)

    Casoli, Pierre; Grégoire, Gilles; Rousseau, Guillaume; Jacquet, Xavier; Authier, Nicolas

    2016-02-01

CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and to study the effect of neutrons on various materials. Therefore CALIBAN's irradiation characteristics, and especially its central cavity neutron spectrum, have to be very accurately evaluated. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied for a few years in the laboratory. Firstly, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article discusses and compares the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  9. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Directory of Open Access Journals (Sweden)

    Casoli Pierre

    2016-01-01

Full Text Available CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and to study the effect of neutrons on various materials. Therefore CALIBAN's irradiation characteristics, and especially its central cavity neutron spectrum, have to be very accurately evaluated. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied for a few years in the laboratory. Firstly, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article discusses and compares the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.
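
    The UMG and CALMAR algorithms are not reproduced in these abstracts. As a schematic of the family of few-channel adjustment methods they implement, the sketch below applies a SAND-II/GRAVEL-style multiplicative update that iteratively rescales a guess spectrum until the computed foil activities match the measured ones. The response matrix and spectrum are toys, and the exact weighting used by GRAVEL differs in detail.

```python
# Schematic SAND-II/GRAVEL-style spectrum adjustment from activation foils.
import numpy as np

def sandii_adjust(R, y, phi0, n_iter=200):
    """R: response matrix (foils x energy groups), y: measured activities,
    phi0: guess spectrum. Returns the adjusted spectrum."""
    phi = phi0.copy()
    for _ in range(n_iter):
        yhat = R @ phi                               # computed activities
        W = R * phi / yhat[:, None]                  # contribution weights
        phi *= np.exp((W * np.log(y / yhat)[:, None]).sum(0) / W.sum(0))
    return phi

rng = np.random.default_rng(0)
groups, foils = 30, 6
R = rng.uniform(0.0, 1.0, size=(foils, groups))      # toy responses
phi_true = np.exp(-np.linspace(0, 3, groups))        # toy spectrum
y = R @ phi_true * rng.normal(1.0, 0.02, foils)      # noisy activities
phi = sandii_adjust(R, y, phi0=np.ones(groups))
print("activity residuals:", np.round(R @ phi / y - 1, 3))
```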

  10. A comparison of visual and quantitative methods to identify interstitial lung abnormalities

    OpenAIRE

Kliment, Corrine R.; Araki, Tetsuro; Doyle, Tracy J.; Gao, Wei; Dupuis, Josée; Latourelle, Jeanne C.; Zazueta, Oscar E.; Fernandez, Isis E.; Nishino, Mizuki; Okajima, Yuka; Ross, James C.; Estépar, Raúl San José; Diaz, Alejandro A.; Lederer, David J.; Schwartz, David A.

    2015-01-01

Background: Evidence suggests that individuals with interstitial lung abnormalities (ILA) on a chest computed tomogram (CT) may have an increased risk to develop a clinically significant interstitial lung disease (ILD). Although methods used to identify individuals with ILA on chest CT have included both automated quantitative and qualitative visual inspection methods, there has been no direct comparison between these two methods. To investigate this relationship, we created lung density met...

  11. Comparison of three methods of restoration of cosmic radio source profiles

    International Nuclear Information System (INIS)

    Malov, I.F.; Frolov, V.A.

    1986-01-01

The effectiveness of three methods for restoration of the radio brightness distribution over a source, namely the main solution, fitting, and the minimal-phase method (MPM), was compared on the basis of data on the modulus and phase of the luminosity function (LF) of 15 cosmic radio sources. It is concluded that the MPM can successfully compete with the other known methods. Its obvious advantage in comparison with the fitting method is that it gives an unambiguous and direct restoration; its main advantage over the main solution is the feasibility of restoration in the absence of data on the LF phase, which reduces restoration errors

  12. Frequency adjustable MEMS vibration energy harvester

    Science.gov (United States)

    Podder, P.; Constantinou, P.; Amann, A.; Roy, S.

    2016-10-01

Ambient mechanical vibrations offer an attractive solution for powering the wireless sensor nodes of the emerging “Internet-of-Things”. However, the wide-ranging variability of the ambient vibration frequencies poses a significant challenge to the efficient transduction of vibration into usable electrical energy. This work reports the development of a MEMS electromagnetic vibration energy harvester where the resonance frequency of the oscillator can be adjusted or tuned to adapt to the ambient vibrational frequency. Micro-fabricated silicon springs and double-layer planar micro-coils along with sintered NdFeB micro-magnets are used to construct the electromagnetic transduction mechanism. Furthermore, another NdFeB magnet is adjustably assembled to induce a variable magnetic interaction with the transducing magnet, leading to a significant change in the spring stiffness and resonance frequency. Finite element analysis and numerical simulations exhibit a substantial frequency tuning range (25% of the natural resonance frequency) upon appropriate adjustment of the repulsive magnetic interaction between the tuning and transducing magnet pair. This demonstrated method of frequency adjustment or tuning has potential applications in other MEMS vibration energy harvesters and micromechanical oscillators.

  13. Frequency adjustable MEMS vibration energy harvester

    International Nuclear Information System (INIS)

    Podder, P; Constantinou, P; Roy, S; Amann, A

    2016-01-01

Ambient mechanical vibrations offer an attractive solution for powering the wireless sensor nodes of the emerging “Internet-of-Things”. However, the wide-ranging variability of the ambient vibration frequencies poses a significant challenge to the efficient transduction of vibration into usable electrical energy. This work reports the development of a MEMS electromagnetic vibration energy harvester where the resonance frequency of the oscillator can be adjusted or tuned to adapt to the ambient vibrational frequency. Micro-fabricated silicon springs and double-layer planar micro-coils along with sintered NdFeB micro-magnets are used to construct the electromagnetic transduction mechanism. Furthermore, another NdFeB magnet is adjustably assembled to induce a variable magnetic interaction with the transducing magnet, leading to a significant change in the spring stiffness and resonance frequency. Finite element analysis and numerical simulations exhibit a substantial frequency tuning range (25% of the natural resonance frequency) upon appropriate adjustment of the repulsive magnetic interaction between the tuning and transducing magnet pair. This demonstrated method of frequency adjustment or tuning has potential applications in other MEMS vibration energy harvesters and micromechanical oscillators. (paper)
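
    The tuning mechanism described in the two records above can be illustrated with a lumped model: the resonance is f = sqrt(k_eff/m)/(2*pi), and the repulsive tuning magnet adds a gap-dependent stiffness. The inverse-power gap law and all numerical values below are assumptions for illustration; the paper's stiffness curve comes from finite element analysis.

```python
# Lumped-parameter illustration of magnetic stiffness tuning.
import math

m = 2.0e-5          # proof mass, kg (assumed)
k0 = 20.0           # mechanical spring stiffness, N/m (assumed)
c = 1.0e-10         # magnetic interaction constant (assumed)

def freq_hz(gap_m, p=4):
    k_mag = c / gap_m**p        # repulsion stiffens the suspension
    return math.sqrt((k0 + k_mag) / m) / (2 * math.pi)

for gap_mm in (2.0, 1.5, 1.0, 0.7):
    print(f"gap {gap_mm} mm -> f = {freq_hz(gap_mm * 1e-3):.1f} Hz")
```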

  14. Size-Adjustable Microdroplets Generation Based on Microinjection

    Directory of Open Access Journals (Sweden)

    Shibao Li

    2017-03-01

Full Text Available Microinjection is a promising tool for microdroplet generation, but microdroplet generation by microinjection remains challenging due to the Laplace pressure at the micropipette opening. Here, we apply a simple and robust substrate-contacting microinjection method to microdroplet generation, presenting a size-adjustable microdroplet generation method based on a critical injection (CI) model. Firstly, the micropipette is set to a preset injection pressure. Secondly, the micropipette is moved down to contact the substrate; the Laplace pressure in the droplet is then no longer relevant and the liquid flows out. The liquid flows out continuously until the micropipette is lifted, ending the substrate contact, which restores the Laplace pressure at the micropipette opening and terminates the liquid injection. We carried out five groups of experiments, capturing 1600 images in each group and measuring the microdroplet radius in each image. We then determined the relationship among microdroplet radius, radius of the micropipette opening, time, and pressure, and two more experiments were conducted to verify this relationship. To verify the effectiveness of the substrate-contacting method and the relationship, we conducted two further experiments, each with six target microdroplet radii, by adjusting the injection time at a given pressure and by adjusting the injection pressure for a given time. Six arrays of microdroplets were obtained in each experiment. The results show that the standard errors of the microdroplet radii are less than 2% and the experimental errors fall within ±5%. The average operating speed is 20 microdroplets/min and the minimum radius of the microdroplets is 25 μm. This method has a simple experimental setup that enables easy manipulation and lower cost.
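
    The pressure barrier that motivates the substrate-contacting trick can be estimated from the Young-Laplace relation, dP = 2*gamma/r, at the micropipette opening. A back-of-the-envelope sketch with an assumed water-air surface tension and illustrative opening radii:

```python
# Young-Laplace pressure at a micropipette opening of radius r.
gamma = 0.072                          # N/m, water-air at ~25 C (assumed)
for r_um in (5, 10, 25):
    dP = 2 * gamma / (r_um * 1e-6)     # Pa
    print(f"opening radius {r_um} um -> Laplace pressure {dP / 1000:.1f} kPa")
```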

  15. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2008-01-01

In general there are two ways to calculate effective doses. The first is the use of deterministic methods, such as the point kernel method implemented in Visiplan or Microshield. These calculations are very fast, but in terms of result precision they are not well suited to complex geometries with shielding composed of more than one material. Nevertheless, such programs are sufficient for ALARA optimisation calculations. On the other side there are Monte Carlo methods, which can also be used for the calculations. This way of calculation is quite precise in comparison with reality, but the calculation time is usually very large. Point-kernel programs have one disadvantage: usually there is an option to choose the buildup factor (BUF) for only one material in multilayer stratified slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas have been proposed for multilayer BUF approximation. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. For the buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it shows lower deviations in comparison with Taylor fitting. (authors)

  16. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2009-01-01

In general there are two ways to calculate effective doses. The first is the use of deterministic methods, such as the point kernel method implemented in Visiplan or Microshield. These calculations are very fast, but in terms of result precision they are not well suited to complex geometries with shielding composed of more than one material. Nevertheless, such programs are sufficient for ALARA optimisation calculations. On the other side there are Monte Carlo methods, which can also be used for the calculations. This way of calculation is quite precise in comparison with reality, but the calculation time is usually very large. Point-kernel programs have one disadvantage: usually there is an option to choose the buildup factor (BUF) for only one material in multilayer stratified slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas have been proposed for multilayer BUF approximation. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. For the buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it shows lower deviations in comparison with Taylor fitting. (authors)
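
    A minimal sketch of the point-kernel estimate discussed in the two records above: uncollided flux attenuated through the slab layers, multiplied by a buildup factor. The multilayer recipe shown (evaluate the outermost layer's buildup at the total number of mean free paths) is one of the simple approximations such comparisons examine, and the Taylor-form coefficients are placeholders, not evaluated library data.

```python
# Point-kernel flux behind stratified slabs with a simple multilayer buildup.
import math

def taylor_buildup(mfp, A=10.0, a1=-0.10, a2=0.03):
    """Taylor form B = A*exp(-a1*x) + (1-A)*exp(-a2*x); placeholder coefficients."""
    return A * math.exp(-a1 * mfp) + (1 - A) * math.exp(-a2 * mfp)

def point_kernel_flux(S, r, layers):
    """S: source strength (1/s); r: source-detector distance (cm);
    layers: list of (mu_per_cm, thickness_cm), last entry = outermost layer."""
    total_mfp = sum(mu * t for mu, t in layers)
    B = taylor_buildup(total_mfp)      # outermost-layer buildup at total mfp
    return S * B * math.exp(-total_mfp) / (4 * math.pi * r**2)

# 1 MBq point source behind 2 cm steel + 5 cm concrete, detector at 50 cm
print(f"flux ~ {point_kernel_flux(1e6, 50, [(0.45, 2.0), (0.15, 5.0)]):.3e} /cm2/s")
```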

  17. Assessment and comparison of methods for solar ultraviolet radiation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, K

    1995-06-01

    In the study, the different methods to measure the solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison of the different methods is based on a literature review and assessments of optical characteristics of the spectroradiometer Optronic 742 of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements, to methods for radiometric characterization of UV radiometers together with methods for error reduction are presented. Reviews on experiences from world-wide UV monitoring efforts and instrumentation as well as on the results from international UV radiometer intercomparisons are also presented. (62 refs.).

  18. Assessment and comparison of methods for solar ultraviolet radiation measurements

    International Nuclear Information System (INIS)

    Leszczynski, K.

    1995-06-01

    In the study, the different methods to measure the solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison of the different methods is based on a literature review and assessments of optical characteristics of the spectroradiometer Optronic 742 of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements, to methods for radiometric characterization of UV radiometers together with methods for error reduction are presented. Reviews on experiences from world-wide UV monitoring efforts and instrumentation as well as on the results from international UV radiometer intercomparisons are also presented. (62 refs.)

  19. Procedures, analysis, and comparison of groundwater velocity measurement methods for unconfined aquifers

    International Nuclear Information System (INIS)

    Kearl, P.M.; Dexter, J.J.; Price, J.E.

    1988-09-01

Six methods for determining the average linear velocity of ground-water were tested at two separate field sites. The methods tested include bail tests, pumping tests, wave propagation, tracer tests, the Geoflo Meter®, and borehole dilution. This report presents procedures for performing the field tests and compares the results of each method on the basis of application, cost, and accuracy. Comparisons of methods to determine the ground-water velocity at two field sites show certain methods yield similar results while other methods measure significantly different values. The literature clearly supports the reliability of pumping tests for determining hydraulic conductivity. Results of this investigation support this finding. Pumping tests, however, are limited because they measure an average hydraulic conductivity which is only representative of the aquifer within the radius of influence. Bail tests are easy and inexpensive to perform. If the tests are conducted on the majority of wells at a hazardous waste site, then the heterogeneity of the site aquifer can be assessed. However, comparisons of bail-test results with pumping-test and tracer-test results indicate that the accuracy of the method is questionable. Consequently, the principal recommendation of this investigation, based on cost and reliability of the ground-water velocity measurement methods, is that bail tests should be performed on all or a majority of monitoring wells at a site to determine the "relative" hydraulic conductivities.

  20. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  1. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  2. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
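
    A minimal sketch of the comparison setting of the three records above, on simulated data: for p close to n, contrast the log-determinant of the plain sample covariance with that of a shrinkage estimator (Ledoit-Wolf, one member of the class of regularized estimators such studies consider), against a truth known in closed form for an AR(1) covariance.

```python
# Log-determinant estimation: sample covariance vs Ledoit-Wolf shrinkage.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p, rho = 120, 100, 0.5
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1)
true_logdet = (p - 1) * np.log(1 - rho**2)       # closed form for AR(1)

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

ld_sample = np.linalg.slogdet(np.cov(X, rowvar=False))[1]
ld_lw = np.linalg.slogdet(LedoitWolf().fit(X).covariance_)[1]

print(f"true {true_logdet:.1f} | sample {ld_sample:.1f} | "
      f"Ledoit-Wolf {ld_lw:.1f}")
```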

  3. Adjustment and mental health problem in prisoners

    Directory of Open Access Journals (Sweden)

    Sudhinta Sinha

    2010-01-01

Full Text Available Background: "Crime" is increasing day by day in our society, not only in India but all over the world. In turn, the number of prisoners is also increasing at the same rate. They remain imprisoned for a long duration, or in some cases for their whole life. Living in a prison for a long time becomes difficult for all inmates, so they often face adjustment and mental health problems. Recent findings suggest that the mental illness rate in prison is three times higher than in the general population. Objective: The aim of the present study was to investigate adjustment and mental health problems, and their relation, in prisoners. Materials and Methods: In the present study, 37 male prisoners of the district jail of Dhanbad District, Jharkhand, were selected on a purposive sampling basis. Each prisoner was given a specially designed proforma (Personal Data Sheet), the General Health Questionnaire-12 and the Bell Adjustment Inventory. Appropriate statistical tools were used to analyze the data. Results: The results showed poor adjustment in the social and emotional areas of the adjustment scale. The study also revealed a significant association between adjustment and mental health problems in the prisoners. Conclusion: The prisoners were found to have poor social and emotional adjustment, which has a strong association with their mental health.

  4. A Machine Learning Framework for Plan Payment Risk Adjustment.

    Science.gov (United States)

    Rose, Sherri

    2016-12-01

To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
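
    A minimal sketch of the cross-validated comparison framework on simulated claims-like data: score several candidate risk-adjustment regressions by cross-validated R². The data generator, features and candidate models are placeholders; the study itself used MarketScan claims and a super learner ensemble, neither of which is reproduced here.

```python
# Cross-validated R^2 comparison of candidate risk-adjustment regressions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.binomial(1, 0.1, size=(n, 30)).astype(float)   # diagnosis indicators
X[:, 0] = rng.integers(18, 65, n) / 65                 # age (scaled)
y = 500 + 4000 * X[:, 1] + 9000 * X[:, 2] + rng.gamma(2, 400, n)  # spending

for name, model in [("OLS", LinearRegression()),
                    ("lasso", LassoCV(cv=3)),
                    ("tree", DecisionTreeRegressor(max_depth=4))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:6s} CV R^2 = {r2.mean():.3f} +/- {r2.std():.3f}")
```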

  5. Lipid Adjustment for Chemical Exposures: Accounting for Concomitant Variables

    Science.gov (United States)

    Li, Daniel; Longnecker, Matthew P.; Dunson, David B.

    2013-01-01

    Background Some environmental chemical exposures are lipophilic and need to be adjusted by serum lipid levels before data analyses. There are currently various strategies that attempt to account for this problem, but all have their drawbacks. To address such concerns, we propose a new method that uses Box-Cox transformations and a simple Bayesian hierarchical model to adjust for lipophilic chemical exposures. Methods We compared our Box-Cox method to existing methods. We ran simulation studies in which increasing levels of lipid-adjusted chemical exposure did and did not increase the odds of having a disease, and we looked at both single-exposure and multiple-exposures cases. We also analyzed an epidemiology dataset that examined the effects of various chemical exposures on the risk of birth defects. Results Compared with existing methods, our Box-Cox method produced unbiased estimates, good coverage, similar power, and lower type-I error rates. This was the case in both single- and multiple-exposure simulation studies. Results from analysis of the birth-defect data differed from results using existing methods. Conclusion Our Box-Cox method is a novel and intuitive way to account for the lipophilic nature of certain chemical exposures. It addresses some of the problems with existing methods, is easily extendable to multiple exposures, and can be used in any analyses that involve concomitant variables. PMID:24051893
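
    The transformation step at the core of the proposal can be illustrated with SciPy's Box-Cox routine, which also estimates the transformation parameter by maximum likelihood; the Bayesian hierarchical model the paper wraps around it is not shown.

```python
# Box-Cox transformation of skewed, lipid-like exposure values.
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.default_rng(0)
lipids = rng.lognormal(mean=0.5, sigma=0.4, size=500)   # right-skewed values

transformed, lam = boxcox(lipids)                       # ML estimate of lambda
print(f"lambda = {lam:.2f}, skew before = {skew(lipids):.2f}, "
      f"after = {skew(transformed):.2f}")
```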

  6. Study and comparison of different methods control in light water critical facility

    International Nuclear Information System (INIS)

    Michaiel, M.L.; Mahmoud, M.S.

    1980-01-01

The control of nuclear reactors may be studied using several control methods, such as control by rod absorbers, by inserting or removing fuel rods (moderator cavities), or by changing the reflector thickness. Each method has its advantages; the purpose of this work is to compare these different methods and their effect on the reactivity of a reactor. A computer program was written by the authors to calculate the critical radius and worth for each case of the three aforementioned control methods

  7. Adjustment of High School Dropouts in Closed Religious Communities

    Science.gov (United States)

    Itzhaki, Yael; Itzhaky, Haya; Yablon, Yaacov B.

    2018-01-01

    Background: While extensive research has been done on high-school dropouts' adjustment, there is little data on dropouts from closed religious communities. Objective: This study examines the contribution of personal and social resources to the adjustment of high school dropouts in Ultraorthodox Jewish communities in Israel. Method: Using a…

  8. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup

    2017-11-15

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)

  9. Parental adjustment and attitudes to parenting after in vitro fertilization.

    Science.gov (United States)

    Gibson, F L; Ungerer, J A; Tennant, C C; Saunders, D M

    2000-03-01

To examine the psychosocial and parenthood-specific adjustment and attitudes to parenting at 1 year postpartum of IVF parents. Prospective, controlled study. Volunteers in a teaching hospital environment. Sixty-five primiparous women with singleton IVF pregnancies and their partners, and a control group of 61 similarly aged primiparous women with no history of infertility and their partners. Completion of questionnaires and interviews. Parent reports of general and parenthood-specific adjustment and attitudes to parenting. The IVF mothers tended to report lower self-esteem and less parenting competence than control mothers. Although there were no group differences in protectiveness, IVF mothers saw their children as significantly more vulnerable and "special" compared with controls. The IVF fathers reported significantly lower self-esteem and marital satisfaction, although not less competence in parenting. Both IVF mothers and fathers did not differ from control parents on other measures of general adjustment (mood) or those more specific to parenthood (e.g., attachment to the child and attitudes to child rearing). The IVF parents' adjustment to parenthood is similar to that of naturally conceiving comparison families. Nonetheless, there are minor differences that reflect heightened child-focused concern and less confidence in parenting for IVF mothers, less satisfaction with the marriage for IVF fathers, and vulnerable self-esteem for both parents.

  10. Force-controlled adjustment of car body fixtures

    OpenAIRE

    Keller, Carsten

    2014-01-01

    Production technology in modern car body assembly is characterised by highly automated and complex facilities. However, when mounting car body assemblies, adjustments are always necessary to react to quality instabilities of the input parts. Today these adjustments are made according to experience and with a great deal of manual operation. This paper describes an innovative method that detects part deformations in a force-sensitive way following the works of Dr. Muck, who developed a force sensit...

  11. Detection technology research on the one-way clutch of automatic brake adjuster

    Science.gov (United States)

    Jiang, Wensong; Luo, Zai; Lu, Yi

    2013-10-01

    In this article, we provide a new testing method to evaluate the acceptable quality of the one-way clutch of an automatic brake adjuster. To analyze the adjusting brake moment that keeps the automatic brake adjuster from failing, we build a mechanical model of the one-way clutch according to its structure and working principle. The ranges of the adjusting brake moment, both clockwise and anti-clockwise, can be calculated from this mechanical model. Its critical moments are taken as the ideal values of the adjusting brake moment for evaluating the acceptable quality of the one-way clutch. We calculate the ideal values of the critical moment for the different one-way clutch structures, based on the mechanical model, before the adjusting brake moment test begins. In addition, an experimental apparatus with a measurement uncertainty of ±0.1 N·m is specially designed to test the adjusting brake moment both clockwise and anti-clockwise. We can then judge the acceptable quality of the one-way clutch of the automatic brake adjuster by comparing the test results with the ideal values instead of with the empirical values (EXP). In fact, the evaluation standard for the adjusting brake moment currently applied in practice in China still uses the EXP provided by the manufacturer, but this becomes unavailable when the material of the one-way clutch changes. Five kinds of automatic brake adjusters were used in a verification experiment to check the accuracy of the test method. The experimental results show that the measured values of the adjusting brake moment, both clockwise and anti-clockwise, fall within the ranges of the theoretical results. The testing method provided in this article thus meets the requirements of the manufacturer's standard.
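
    The acceptance test described above reduces to a range check: a measured adjusting brake moment passes if it lies within the model-derived range, allowing for the apparatus uncertainty. A minimal sketch with purely illustrative numbers (the real limits come from the clutch's mechanical model):

        # Hypothetical acceptance check for a one-way clutch; moments in N.m.
        def acceptable(measured, theoretical_range, uncertainty=0.1):
            lo, hi = theoretical_range
            return (lo - uncertainty) <= measured <= (hi + uncertainty)

        # Illustrative values only; a real range is derived from the mechanical model.
        print(acceptable(measured=12.4, theoretical_range=(11.8, 13.5)))   # True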

  12. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2010-01-01

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods

  13. Case-mix adjustment for diabetes indicators: a systematic review.

    Science.gov (United States)

    Calsbeek, Hiske; Markhorst, Joekle G M; Voerman, Gerlienke E; Braspenning, Jozé C C

    2016-02-01

    Case-mix adjustment is generally considered indispensable for fair comparison of healthcare performance. Inaccurate results are also unfair to patients, as they are ineffective for improving quality. However, little is known about which factors should be adjusted for. We reviewed case-mix factors included in adjustment models for key diabetes indicators, the rationale for their inclusion, and their impact on performance. Systematic review. This systematic review included studies published up to June 2013 addressing case-mix factors for 6 key diabetes indicators: outcome and process indicators for glycated hemoglobin (A1C), low-density lipoprotein cholesterol, and blood pressure. Factors were categorized as demographic, diabetes-related, comorbidity, generic health, geographic, or care-seeking, and were evaluated on the rationale for their inclusion in the adjustment models as well as their impact on indicator scores and ranking. Thirteen studies were included, mainly addressing A1C value and measurement. Twenty-three different case-mix factors, mostly demographic and diabetes-related, were identified; the number per adjustment model varied from 1 to 14. Six studies provided selection motives for the inclusion of case-mix factors. Marital status and body mass index showed a significant impact on A1C value. For the other factors, either no or conflicting associations were reported, or too few studies (n ≤ 2) investigated the association. Scientific knowledge about the relative importance of case-mix factors for diabetes indicators is emerging, especially for demographic and diabetes-related factors and for indicators on A1C, but is still limited. Because arbitrary adjustment potentially results in inaccurate quality information, meaningful stratification that demonstrates inequity in care might be a better guide, as it can be a driver for quality improvement.

  14. Dwell time adjustment for focused ion beam machining

    International Nuclear Information System (INIS)

    Taniguchi, Jun; Satake, Shin-ichi; Oosumi, Takaki; Fukushige, Akihisa; Kogo, Yasuo

    2013-01-01

    Focused ion beam (FIB) machining is potentially useful for micro/nano fabrication of hard brittle materials, because the removal method involves physical sputtering. Micro/nano-scale patterning of hard brittle materials is usually very difficult to achieve by mechanical polishing or dry etching, and in most reported examples FIB machining has been applied to silicon substrates in a limited range of shapes. A versatile method for FIB machining is therefore required. We previously established the dwell time adjustment for mechanical polishing. The dwell time adjustment is calculated by using a convolution model derived from Preston's hypothesis: the target removal shape is a convolution of the unit removal shape, and the dwell time is calculated by means of one of four algorithms. We investigated these algorithms for dwell time adjustment in FIB machining and found that a combination of a fast Fourier transform calculation technique and a constraint-type calculation is suitable. By applying this algorithm, we succeeded in machining a spherical lens shape with a diameter of 2.93 μm and a depth of 203 nm in a glassy carbon substrate by means of FIB with dwell time adjustment.
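
    The convolution model can be made concrete: if the target removal depth equals the unit removal footprint convolved with the dwell-time map, the dwell time follows from a deconvolution. Below is a hedged sketch using regularised FFT division with a non-negativity clip as a crude stand-in for the constraint-type calculation named in the abstract; it is not the authors' actual algorithm, and all shapes are synthetic.

        # Solve target = footprint (*) dwell for dwell, via Wiener-style FFT division.
        import numpy as np

        def dwell_time(target, footprint, eps=1e-3):
            T = np.fft.fft2(target)
            F = np.fft.fft2(footprint, s=target.shape)
            dwell = np.real(np.fft.ifft2(T * np.conj(F) / (np.abs(F)**2 + eps)))
            return np.clip(dwell, 0.0, None)   # dwell times cannot be negative

        x = np.linspace(-1, 1, 64)
        target = np.maximum(0, 0.2 - (x[:, None]**2 + x[None, :]**2))   # lens-like depth map
        beam = np.exp(-(x[:, None]**2 + x[None, :]**2) / 0.01)          # Gaussian unit footprint
        t = dwell_time(target, beam)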

  15. Comparison of several methods of sires evaluation for total milk ...

    African Journals Online (AJOL)

    2015-01-24

    Comparison of several methods of sires evaluation for total milk yield in a herd of Holstein cows in Yemen. F.R. Al-Samarai, Y.K. Abdulrahman, F.A. Mohammed, F.H. Al-Zaidi and N.N. Al-Anbari. Department of Veterinary Public Health, College of Veterinary Medicine, University of Baghdad, Iraq.

  16. A comparison of two instructional methods for drawing Lewis Structures

    Science.gov (United States)

    Terhune, Kari

    Two instructional methods for teaching Lewis structures were compared -- the Direct Octet Rule Method (DORM) and the Commonly Accepted Method (CAM). The DORM gives the number of bonds and the number of nonbonding electrons immediately, while the CAM involves moving electron pairs from nonbonding to bonding positions, if necessary. The research question was as follows: Will high school chemistry students draw more accurate Lewis structures using the DORM or the CAM? Students in Regular Chemistry 1 (N = 23), Honors Chemistry 1 (N = 51) and Chemistry 2 (N = 15) at an urban high school were the study participants. An identical pretest and posttest was given before and after instruction. Students were given instruction with either the DORM (N = 45), the treatment method, or the CAM (N = 44), the control, for two days. After the posttest, 15 students were interviewed using a semistructured interview process. The pretest/posttest consisted of 23 numerical response questions and 2 to 6 free response questions that were graded using a rubric. A two-way ANOVA showed a significant interaction effect between the groups and the methods, F (1, 70) = 10.960, p = 0.001. Post hoc comparisons using the Bonferroni pairwise comparison showed that Regular Chemistry 1 students demonstrated larger gain scores when they had been taught the CAM (mean difference = 3.275, SE = 1.324). Honors Chemistry 1 students performed better with the DORM, perhaps due to better math skills, enhanced working memory, and better metacognitive skills. Regular Chemistry 1 students performed better with the CAM, perhaps because it is more visual. Teachers may want to use the CAM or a direct-pairing method to introduce the topic and use the DORM in advanced classes when a correct structure is needed quickly.

  17. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  18. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
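
    The key computational step named above, approximating the extended observability matrix, can be sketched in a few lines. The following output-only version (SVD of a block Hankel matrix of outputs) only illustrates the idea; the actual ERA/OM, N4SID and MOESP algorithms use input/output data and additional projection steps.

        # Approximate the extended observability subspace from output data alone.
        import numpy as np

        def extended_observability(y, rows, order):
            # y: (samples, outputs) output data; rows: number of block rows
            n, m = y.shape
            cols = n - rows + 1
            H = np.vstack([y[i:i + cols].T for i in range(rows)])  # (rows*m, cols) block Hankel
            U, s, _ = np.linalg.svd(H, full_matrices=False)
            # leading left singular vectors, weighted by singular values
            return U[:, :order] * np.sqrt(s[:order])

        t = np.arange(500) * 0.01
        y = np.column_stack([np.sin(2 * np.pi * 1.3 * t), np.cos(2 * np.pi * 0.7 * t)])
        Ob = extended_observability(y, rows=20, order=4)   # shape (40, 4)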

  19. [Autogenic training as a therapy for adjustment disorder in adults].

    Science.gov (United States)

    Jojić, Boris R; Leposavić, Ljubica M

    2005-01-01

    Autogenic training is a widely recognised psychotherapy technique. The British School of Autogenic Training cites a long list of disorders, states, and changes where autogenic training may prove to be of help. We wanted to explore the application of autogenic training as a therapy for adjustment disorder in adults. Our sample consisted of a homogeneous group of 35 individuals, with an average age of 39.3 +/- 1.6 years, who were diagnosed with adjustment disorder, F 43.2, in accordance with ICD-10 research criteria. The aim of our study was to investigate the effectiveness of autogenic training as a therapy for adjustment disorder in adults by checking the influence of autogenic training on the biophysical and biochemical indicators of the disorder. We measured the indicators of adjustment disorder and their changes in three phases: before the beginning, immediately after the beginning, and six months after the completion, of a practical course in autogenic training. We measured systolic and diastolic arterial blood pressure and brachial pulse rate, as well as the levels of cortisol in plasma, cholesterol in blood, and glucose. During that period, autogenic training functioned as the sole therapy. The study confirmed our preliminary assumptions. The measurements we performed demonstrated that arterial blood pressure, pulse rate, and the concentrations of cholesterol and cortisol were lower after the application of autogenic training among the subjects suffering from adjustment disorder, in comparison with the initial values. These values remained lower even six months after the completion of the practical course in autogenic training. Autogenic training significantly decreases the values of physiological indicators of adjustment disorder, diminishes the effects of stress in an individual, and helps adults to cope with stress, facilitating their recuperation.

  20. Is the distinction between adjustment disorder with depressed mood and adjustment disorder with mixed anxious and depressed mood valid?

    Science.gov (United States)

    Zimmerman, Mark; Martinez, Jennifer H; Dalrymple, Kristy; Chelminski, Iwona; Young, Diane

    2013-11-01

    In the DSM-IV, adjustment disorder is subtyped according to the predominant presenting feature. The different diagnostic code numbers assigned to each subtype suggest their significance in DSM-IV. However, little research has examined the validity of these subtypes. In the present report from the Rhode Island Methods to Improve Diagnostic Assessment and Services (MIDAS) project, we compared the demographic and clinical profiles of patients diagnosed with adjustment disorder subtypes to determine whether there was enough empirical evidence supporting the retention of multiple adjustment disorder subtypes in future versions of the DSM. A total of 3,400 psychiatric patients presenting to the Rhode Island Hospital outpatient practice were evaluated with semistructured diagnostic interviews for DSM-IV Axis I and Axis II disorders and measures of psychosocial morbidity. Approximately 7% (224 of 3,400) of patients were diagnosed with current adjustment disorder. Adjustment disorder with depressed mood and with mixed anxious and depressed mood were the most common subtypes, accounting for 80% of the patients diagnosed with adjustment disorder. There was no significant difference between these 2 groups with regard to demographic variables, current comorbid Axis I or Axis II disorders, lifetime history of major depressive disorder or anxiety disorders, psychosocial morbidity, or family history of psychiatric disorders. The only difference between the groups was lifetime history of drug use, which was significantly higher in the patients diagnosed with adjustment disorder with depressed mood. There is no evidence supporting the retention of both of these adjustment disorder subtypes, and DSM-IV previously set a precedent for eliminating adjustment disorder subtypes in the absence of any data. Therefore, in the spirit of nosologic parsimony, consideration should be given to collapsing the 2 disorders into 1: adjustment disorder with depressed mood.

  1. Comparison between two methods of methanol production from carbon dioxide

    International Nuclear Information System (INIS)

    Anicic, B.; Trop, P.; Goricanec, D.

    2014-01-01

    Over recent years there has been a significant increase in the amount of technology contributing to lower emissions of carbon dioxide. The aim of this paper is to provide a comparison between two technologies for methanol production, both of which use carbon dioxide and hydrogen as initial raw materials. The first methanol production technology involves direct synthesis of methanol from CO2; the second has two steps: during the first step CO2 is converted into CO via the RWGS (reverse water gas shift) reaction, and methanol is produced during the second step. The two methods were compared on economic and energy-efficiency grounds. The price of electricity had the greatest economic impact, as hydrogen is produced via the electrolysis of water. Furthermore, both the cost of CO2 capture and the amounts of carbon taxes were taken into consideration. The energy-efficiency comparison is based on cold gas efficiency, while economic feasibility is compared using net present value. Even though the two processes are similar, it was shown that direct methanol synthesis has higher energy and economic efficiency. - Highlights: • We compared two methods for methanol production. • Process schemes for both direct synthesis and two-step synthesis are described. • Direct synthesis has higher economic and energy efficiency
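
    Net present value, the economic yardstick used in the comparison, is straightforward to compute; the discount rate and cash flows below are purely illustrative, not figures from the study.

        # NPV of a cash-flow stream; index 0 is the initial (negative) investment.
        def npv(rate, cashflows):
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

        print(npv(0.08, [-1000.0, 150.0, 180.0, 210.0, 240.0, 270.0]))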

  2. Sampling for ants in different-aged spruce forests: A comparison of methods

    Czech Academy of Sciences Publication Activity Database

    Véle, A.; Holuša, J.; Frouz, Jan

    2009-01-01

    Roč. 45, č. 4 (2009), s. 301-305 ISSN 1164-5563 Institutional research plan: CEZ:AV0Z60660521 Keywords : ants * baits * methods comparison Subject RIV: EH - Ecology, Behaviour Impact factor: 1.247, year: 2009

  3. Comparison of Electronic Data Capture (EDC) with the Standard Data Capture Method for Clinical Trial Data

    Science.gov (United States)

    Walther, Brigitte; Hossin, Safayet; Townend, John; Abernethy, Neil; Parker, David; Jeffries, David

    2011-01-01

    Background Traditionally, clinical research studies rely on collecting data with case report forms, which are subsequently entered into a database to create electronic records. Although well established, this method is time-consuming and error-prone. This study compares four electronic data capture (EDC) methods with the conventional approach with respect to duration of data capture and accuracy. It was performed in a West African setting, where clinical trials involve data collection from urban, rural and often remote locations. Methodology/Principal Findings Three types of commonly available EDC tools were assessed in face-to-face interviews; netbook, PDA, and tablet PC. EDC performance during telephone interviews via mobile phone was evaluated as a fourth method. The Graeco Latin square study design allowed comparison of all four methods to standard paper-based recording followed by data double entry while controlling simultaneously for possible confounding factors such as interview order, interviewer and interviewee. Over a study period of three weeks the error rates decreased considerably for all EDC methods. In the last week of the study the data accuracy for the netbook (5.1%, CI95%: 3.5–7.2%) and the tablet PC (5.2%, CI95%: 3.7–7.4%) was not significantly different from the accuracy of the conventional paper-based method (3.6%, CI95%: 2.2–5.5%), but error rates for the PDA (7.9%, CI95%: 6.0–10.5%) and telephone (6.3%, CI95% 4.6–8.6%) remained significantly higher. While EDC-interviews take slightly longer, data become readily available after download, making EDC more time effective. Free text and date fields were associated with higher error rates than numerical, single select and skip fields. Conclusions EDC solutions have the potential to produce similar data accuracy compared to paper-based methods. Given the considerable reduction in the time from data collection to database lock, EDC holds the promise to reduce research-associated costs

  5. Feasibility of CBCT-based dose calculation: Comparative analysis of HU adjustment techniques

    International Nuclear Information System (INIS)

    Fotina, Irina; Hopfgartner, Johannes; Stock, Markus; Steininger, Thomas; Lütgendorf-Caucig, Carola; Georg, Dietmar

    2012-01-01

    Background and purpose: The aim of this work was to compare the accuracy of different HU adjustments for CBCT-based dose calculation. Methods and materials: Dose calculation was performed on CBCT images of 30 patients. In the first two approaches, phantom-based (Pha-CC) and population-based (Pop-CC) conversion curves were used. The third method (WAB) represents override of the structures with standard densities for water, air and bone. In the ROI mapping approach, all structures were overridden with average HUs from the planning CT. All techniques were benchmarked against the Pop-CC and CT-based plans by DVH comparison and γ-index analysis. Results: For prostate plans, WAB and ROI mapping compared to Pop-CC showed differences in PTV median dose below 2%. The WAB and Pha-CC methods underestimated the bladder dose in IMRT plans. In lung cases, PTV coverage was underestimated by the Pha-CC method by 2.3% and slightly overestimated by the WAB and ROI techniques. The use of the Pha-CC method for head-neck IMRT plans resulted in differences in PTV coverage of up to 5%. Dose calculation with the WAB and ROI techniques showed better agreement with the planning CT than the conversion curve-based approaches. Conclusions: Density override techniques provide an accurate alternative to the conversion curve-based methods for dose calculation on CBCT images.
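
    The WAB idea, overriding structures with standard densities for water, air and bone, can be sketched as a simple voxel override. The masks are assumed to come from contouring, and the bone HU below is an illustrative value rather than the one used in the study.

        # Replace CBCT voxel values with nominal HUs inside given masks.
        import numpy as np

        def wab_override(cbct, air_mask, bone_mask):
            hu = np.zeros_like(cbct, dtype=float)   # water everywhere: 0 HU
            hu[air_mask] = -1000.0                  # nominal air
            hu[bone_mask] = 700.0                   # illustrative bone HU; clinics vary
            return hu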

  6. Parenting and the Adjustment of Children Born to Gay Fathers Through Surrogacy.

    Science.gov (United States)

    Golombok, Susan; Blake, Lucy; Slutsky, Jenna; Raffanello, Elizabeth; Roman, Gabriela D; Ehrhardt, Anke

    2017-01-23

    Findings are presented on a study of 40 gay father families created through surrogacy and a comparison group of 55 lesbian mother families created through donor insemination with a child aged 3-9 years. Standardized interview, observational and questionnaire measures of stigmatization, quality of parent-child relationships, and children's adjustment were administered to parents, children, and teachers. Children in both family types showed high levels of adjustment with lower levels of children's internalizing problems reported by gay fathers. Irrespective of family type, children whose parents perceived greater stigmatization and children who experienced higher levels of negative parenting showed higher levels of parent-reported externalizing problems. The findings contribute to theoretical understanding of the role of family structure and family processes in child adjustment.

  7. AUTOMATIC ADJUSTMENT OF WIDE-BASE GOOGLE STREET VIEW PANORAMAS

    Directory of Open Access Journals (Sweden)

    E. Boussias-Alexakis

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images, such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  8. Adjustment guidance for cyclotron by real-time display of feasible setting regions

    International Nuclear Information System (INIS)

    Okamura, Tetsuya; Murakami, Tohru

    1990-01-01

    A computer-aided operation system for the start-up of a cyclotron is being developed in order to support operators who, through their experience and intuition, adjust dozens of components to maximize the extracted beam current. This paper describes a guidance method using a real-time display of feasible setting regions of the adjustment parameters; it is a function of the beam adjustment support system. The key points are as follows. (1) It is proposed that a cyclotron can be modeled as a series of mappings of the beam condition. In this model, the adjustment is considered to be a search for a mapping which maps the beam condition into the acceptance of the cyclotron. (2) The search process is formulated as a nonlinear minimization problem. To solve it, a fast search algorithm composed of a line search method (golden section search) and an image processing method (border following) is developed. The solutions are the feasible setting regions. (3) A human interface which displays the feasible setting regions and a search history is realized for the beam adjustment support system. It enables the operators and the computers to cooperate in the beam adjustment. (author)
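
    The golden section search named above is a standard derivative-free line search; a textbook sketch with a placeholder objective follows.

        # Golden-section search: minimize a 1-D unimodal function on [a, b].
        import math

        def golden_section(f, a, b, tol=1e-6):
            invphi = (math.sqrt(5) - 1) / 2            # 1/phi ~ 0.618
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            while abs(b - a) > tol:
                if f(c) < f(d):
                    b, d = d, c
                    c = b - invphi * (b - a)
                else:
                    a, c = c, d
                    d = a + invphi * (b - a)
            return (a + b) / 2

        print(golden_section(lambda x: (x - 1.7)**2, 0.0, 5.0))  # ~1.7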

  9. The impact of calcium assay change on a local adjusted calcium equation.

    Science.gov (United States)

    Davies, Sarah L; Hill, Charlotte; Bailey, Lisa M; Davison, Andrew S; Milan, Anna M

    2016-03-01

    Deriving and validating local adjusted calcium equations is important for ensuring appropriate classification of calcium status. We investigated the impact on our local adjusted calcium equation of a change in calcium method by the manufacturer from cresolphthalein complexone to NM-BAPTA. Calcium and albumin results from general practice requests were extracted from the Laboratory Information Management system for a three-month period. Results for which there was evidence of disturbance in calcium homeostasis were excluded, leaving 13,482 sets of results for analysis. The adjusted calcium equation was derived following least squares regression analysis of total calcium on albumin and normalized to the mean calcium concentration of the dataset. The revised equation (NM-BAPTA calcium method) was compared with the previous equation (cresolphthalein complexone calcium method). The switch in calcium assay resulted in a small change in the adjusted calcium equation that was not considered to be clinically significant. The calcium reference interval differed from that proposed by Pathology Harmony in the UK. Local adjusted calcium equations should be re-assessed following changes in the calcium method. A locally derived reference interval may differ from the consensus harmonized reference interval.
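
    The derivation described above can be sketched as follows: regress total calcium on albumin, then normalise to the mean calcium of the dataset, giving adjusted Ca = measured Ca + slope × (mean albumin − albumin). The data below are synthetic; a real equation must be derived from local patient results.

        # Derive a local adjusted-calcium equation by least-squares regression.
        import numpy as np

        albumin = np.array([38.0, 41.0, 44.0, 36.0, 47.0])   # g/L (illustrative)
        calcium = np.array([2.28, 2.35, 2.42, 2.25, 2.47])   # mmol/L (illustrative)

        slope, intercept = np.polyfit(albumin, calcium, 1)   # slope first for deg=1
        mean_alb = albumin.mean()

        def adjusted_ca(total_ca, alb):
            # normalise the measured calcium to the dataset's mean albumin
            return total_ca + slope * (mean_alb - alb)

        print(adjusted_ca(2.10, 33.0))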

  10. NET SALARY ADJUSTMENT

    CERN Multimedia

    Finance Division

    2001-01-01

    On 15 June 2001 the Council approved the correction of the discrepancy identified in the net salary adjustment implemented on 1st January 2001 by retroactively increasing the scale of basic salaries to achieve the 2.8% average net salary adjustment approved in December 2000. We should like to inform you that the corresponding adjustment will be made to your July salary. Full details of the retroactive adjustments will consequently be shown on your pay slip.

  11. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    International Nuclear Information System (INIS)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I.; Rota Kops, Elena; Shah, N. Jon; Ribeiro, Andre; Yakushev, Igor

    2016-01-01

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-maps) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the performance of some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-fluorodeoxyglucose PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from the MRI data of 15 patients with Alzheimer's dementia (AD): two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision of diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20% to 10% were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5%. The precision at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9% and 79.5% for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3% on average for the four new methods, which exhibited similar performance. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are inferior.

  13. Method comparison of ultrasound and kilovoltage x-ray fiducial marker imaging for prostate radiotherapy targeting

    International Nuclear Information System (INIS)

    Fuller, Clifton David; Thomas Jr, Charles R; Schwartz, Scott; Golden, Nanalei; Ting, Joe; Wong, Adrian; Erdogmus, Deniz; Scarbrough, Todd J

    2006-01-01

    Several measurement techniques have been developed to address the capability for target volume reduction via target localization in image-guided radiotherapy; among these have been ultrasound (US) and fiducial marker (FM) software-assisted localization. In order to assess interchangeability between the methods, US and FM localization were compared using established techniques for determining agreement between measurement methods when a 'gold-standard' comparator does not exist, after performing both techniques daily on a sequential series of patients. At least 3 days prior to CT simulation, four gold seeds were placed within the prostate. FM software-assisted localization utilized the ExacTrac X-Ray 6D (BrainLab AG, Germany) kVp x-ray image acquisition system to determine prostate position; US prostate targeting was performed on each patient using the SonArray (Varian, Palo Alto, CA). Patients were aligned daily using laser alignment of skin marks. Directional shifts were then calculated by each respective system in the X, Y and Z dimensions before each daily treatment fraction, prior to any treatment or couch adjustment, as well as a composite vector of displacement. Directional shift agreement in each axis was compared using Bland-Altman limits of agreement, Lin's concordance coefficient with Partik's grading schema, and Deming orthogonal bias-weighted correlation methodology. 1019 software-assisted shifts were suggested by US and FM in 39 patients. The 95% limits of agreement in the X, Y and Z axes were ±9.4 mm, ±11.3 mm and ±13.4 mm, respectively. Three-dimensionally, measurements agreed within 13.4 mm in 95% of all paired measures. In all axes, concordance was graded as 'poor' or 'unacceptable'. Deming regression detected proportional bias in both directional axes and three-dimensional vectors. Our data suggest substantial differences between US and FM image-guided measures and subsequent suggested directional shifts. Analysis reveals that the vast majority of
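
    The limits-of-agreement analysis used above can be computed directly from the paired shifts in one axis; a minimal sketch with placeholder data:

        # Bland-Altman 95% limits of agreement for paired measurements.
        import numpy as np

        def limits_of_agreement(x, y):
            d = np.asarray(x) - np.asarray(y)
            bias = d.mean()
            loa = 1.96 * d.std(ddof=1)
            return bias - loa, bias, bias + loa

        us = [1.2, -0.4, 3.1, 0.8, -2.0]   # US-suggested shifts (mm), placeholders
        fm = [0.9,  0.2, 2.5, 1.4, -1.1]   # FM-suggested shifts (mm), placeholders
        print(limits_of_agreement(us, fm))  # (lower LoA, bias, upper LoA)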

  14. Adjustable valves in normal-pressure hydrocephalus: a retrospective study of 218 patients

    DEFF Research Database (Denmark)

    Zemack, G.; Rommer, Bertil Roland

    2008-01-01

    OBJECTIVE: We sought to assess the value of adjusting shunt valve opening pressure, complications, and outcomes with the use of an adjustable shunt valve in the treatment of patients with normal-pressure hydrocephalus (NPH). METHODS: In a single-center retrospective study, 231 adjustable valves...

  15. An investigation of methods for free-field comparison calibration of measurement microphones

    DEFF Research Database (Denmark)

    Barrera-Figueroa, Salvador; Moreno Pescador, Guillermo; Jacobsen, Finn

    2010-01-01

    Free-field comparison calibration of measurement microphones requires that a calibrated reference microphone and a test microphone are exposed to the same sound pressure in a free field. The output voltages of the microphones can be measured either sequentially or simultaneously. The sequential method requires the sound field to have good temporal stability. The simultaneous method requires instead that the sound pressure is the same at the positions where the microphones are placed. In this paper the results of applying the two methods are compared. A third, combined method...
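
    The core arithmetic of comparison calibration is simple: with both microphones exposed to the same sound pressure, the test microphone's sensitivity follows from the reference sensitivity and the ratio of output voltages. The numbers below are illustrative.

        # Comparison calibration: M_test = M_ref * (V_test / V_ref),
        # assuming identical sound pressure at both microphones.
        def test_sensitivity(M_ref, V_test, V_ref):
            return M_ref * (V_test / V_ref)

        M = test_sensitivity(M_ref=12.5e-3, V_test=0.84, V_ref=0.80)   # V/Pa
        print(M)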

  16. Proposal for a national inventory adjustment for trade in the presence of border carbon adjustment: Assessing carbon tax policy in Japan

    International Nuclear Information System (INIS)

    Zhou, Xin; Yano, Takashi; Kojima, Satoshi

    2013-01-01

    In this paper we point out a hidden inequality in accounting for trade-related emissions in the presence of border carbon adjustment. Under a domestic carbon pricing policy, producers pay the carbon costs in exchange for the right to emit. Under border carbon adjustment, however, the exporting country pays the carbon costs of its exports to the importing country but is not given any emission credits. As a result, export-related emissions remain in the national inventory of the exporting country under the UNFCCC inventory approach. This hidden inequality is important to climate policy but has not yet been pointed out. To address this issue we propose a method of National Inventory Adjustment for Trade, by which export-related emissions are deducted from the national inventory of the exporting country and added to the national inventory of the importing country which implements border carbon adjustment. To assess the policy impacts, we simulated a carbon tax policy with border tax adjustment for Japan using a multi-region computable general equilibrium model. The results indicate that with the National Inventory Adjustment for Trade, both Japan's national inventory and the carbon leakage effects of Japan's climate policy would be greatly different. - Highlights: • The inequality in GHG accounting caused by border carbon adjustment is presented. • National inventory adjustment for trade under border carbon adjustment is proposed. • Policy impacts on international competitiveness and carbon leakage are assessed. • Practical issues related to the national inventory adjustment for trade are discussed
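
    Reduced to its arithmetic, the proposed National Inventory Adjustment for Trade moves export-related emissions from the exporter's inventory to the inventory of the importing country that applies border carbon adjustment; the figures below are illustrative, not the paper's results.

        # Move BCA-covered export emissions between national inventories (Mt CO2).
        def adjust_inventories(exporter_inv, importer_inv, export_emissions):
            return exporter_inv - export_emissions, importer_inv + export_emissions

        exporter, importer = adjust_inventories(exporter_inv=1200.0,
                                                importer_inv=900.0,
                                                export_emissions=35.0)
        print(exporter, importer)   # 1165.0 935.0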

  17. A Comparison between the Effect of Cooperative Learning Teaching Method and Lecture Teaching Method on Students' Learning and Satisfaction Level

    Science.gov (United States)

    Mohammadjani, Farzad; Tonkaboni, Forouzan

    2015-01-01

    The aim of the present research is to compare the effect of the cooperative learning teaching method and the lecture teaching method on students' learning and satisfaction level. The research population consisted of all the fourth grade elementary school students of educational district 4 in Shiraz. The statistical population…

  18. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    2011-03-01

    Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI: 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI: 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI: 0, 1.3] for Kenya (2007). Incidence trends were similar for all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR to guide whether incidence assays can produce reliable estimates of national HIV incidence.
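
    For the assay-derived method, a hedged sketch of an FRR-adjusted cross-sectional incidence estimator is shown below, in the spirit of the adjustment described above. The exact estimator and parameter values vary by study and assay; these are not the paper's formulas or numbers.

        # FRR-adjusted cross-sectional incidence (per person-year), broadly of the
        # form lam = (R - frr*P) / (N_neg * (MDRI - frr*T)), with times in years.
        def adjusted_incidence(n_recent, n_pos, n_neg, mdri_years, frr, cutoff_years=1.0):
            num = n_recent - frr * n_pos                       # recency count minus false-recents
            den = n_neg * (mdri_years - frr * cutoff_years)    # person-time at risk weighting
            return num / den

        # Illustrative survey counts: 50 recent among 900 positives, 9100 negatives.
        print(adjusted_incidence(50, 900, 9100, mdri_years=0.52, frr=0.015))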

  19. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and remain relevant to the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors' opinions on the potential for consistently implementing basic accounting principles when journaling adjusting entries for loan receivables under a dynamic model.

  20. Estimation of the specific activity of radioiodinated gonadotrophins: comparison of three methods

    Energy Technology Data Exchange (ETDEWEB)

    Englebienne, P [Centre for Research and Diagnosis in Endocrinology, Kain (Belgium)]; Slegers, G [Akademisch Ziekenhuis, Ghent (Belgium). Lab. voor Analytische Chemie]

    1983-01-14

    The authors compared 3 methods for estimating the specific activity of radioiodinated gonadotrophins. Two of the methods (column recovery and isotopic dilution) gave similar results, while the third (autodisplacement) gave significantly higher estimates. In the autodisplacement method, B/T ratios obtained when either labelled hormone alone, or labelled and unlabelled hormone, are added to the antibody were compared as estimates of the mass of hormone iodinated. It is likely that immunologically unreactive impurities present in the labelled hormone solution invalidate such a comparison.

  1. Adjustment of rainfall estimates from weather radars using in-situ stormwater drainage sensors

    DEFF Research Database (Denmark)

    Ahm, Malte

    When radar-rain gauge adjusted data are applied to urban drainage models, discrepancies between radar-estimated runoff and observed runoff still occur. The aim of urban drainage applications is to estimate flow and water levels in critical points in the system. The "true" rainfall at ground level is, therefore, of less importance as long as the estimated flow and water levels are correct. It makes sense to investigate the possibility of adjusting weather radar data to rainfall-runoff measurements instead of rain gauge measurements in order to obtain better predictions of flow and water levels. This Ph.D. study investigates how rainfall-runoff measurements can be utilised to adjust weather radars. Two traditional adjustment methods based on rain gauges were used as the basis for developing two radar-runoff adjustment methods. The first method is based on the ZR relationship describing the relation between radar...
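
    The ZR relationship mentioned above has the form Z = a·R^b; inverting it converts radar reflectivity to rain rate. The sketch below uses the common Marshall-Palmer parameters a = 200, b = 1.6, which are illustrative defaults rather than the values calibrated in the thesis.

        # Invert Z = a * R**b to get rain rate (mm/h) from reflectivity (dBZ).
        def rain_rate(dbz, a=200.0, b=1.6):
            z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity (mm^6/m^3)
            return (z / a) ** (1.0 / b)

        print(rain_rate(30.0))              # ~2.7 mm/h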

  2. Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data

    Science.gov (United States)

    White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.

    2017-12-01

    As part of its mission to research and measure the effects of the changing climate, the U.S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds. This often causes CMIP5 output to vary from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences in order to support decision-makers as they evaluate results at the watershed scale. Preliminary comparisons of observed and projected flow frequency curves over the US revealed a simple framework by which water resources decision makers can plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between the locally observed patterns and the global climate model structures for decision makers.
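
    One widely used form of statistical adjustment is empirical quantile mapping, which maps each model value to the observed value at the same empirical quantile; the sketch below illustrates the idea and is not necessarily the method USACE adopted.

        # Empirical quantile mapping: bias-correct model output against observations.
        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future):
            q = np.linspace(0.0, 1.0, 101)
            src = np.quantile(model_hist, q)    # model's historical quantiles
            dst = np.quantile(obs_hist, q)      # observed quantiles
            return np.interp(model_future, src, dst)  # values beyond range clamp to ends

        rng = np.random.default_rng(1)
        corrected = quantile_map(rng.gamma(2, 10, 1000),   # synthetic model climate
                                 rng.gamma(2, 12, 1000),   # synthetic observations
                                 np.array([5.0, 25.0, 80.0]))
        print(corrected)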

  3. Refractive accuracy with light-adjustable intraocular lenses.

    Science.gov (United States)

    Villegas, Eloy A; Alcon, Encarna; Rubio, Elena; Marín, José M; Artal, Pablo

    2014-07-01

    To evaluate efficacy, predictability, and stability of refractive treatments using light-adjustable intraocular lenses (IOLs). University Hospital Virgen de la Arrixaca, Murcia, Spain. Prospective nonrandomized clinical trial. Eyes with a light-adjustable IOL (LAL) were treated with spatial intensity profiles to correct refractive errors. The effective changes in refraction in the light-adjustable IOL after every treatment were estimated by subtracting those in the whole eye and the cornea, which were measured with a Hartmann-Shack sensor and a corneal topographer, respectively. The refractive changes in the whole eye and light-adjustable IOL, manifest refraction, and visual acuity were obtained after every light treatment and at the 3-, 6-, and 12-month follow-ups. The study enrolled 53 eyes (49 patients). Each tested light spatial pattern (5 spherical; 3 astigmatic) produced a significantly different refractive change. Light adjustments induced a maximum change in spherical power of the light-adjustable IOL of between -1.98 diopters (D) and +2.30 D, and in astigmatism of up to -2.68 D with axis errors below 9 degrees. Intersubject variability (standard deviation) ranged between 0.10 D and 0.40 D. The 2 required lock-in procedures induced a small myopic shift (range +0.01 to +0.57 D) that depended on previous adjustments. Light-adjustable IOL implantation achieved accurate refractive outcomes (around emmetropia) with good uncorrected distance visual acuity, which remained stable over time. Further refinements in nomograms and in the treatment protocol would improve the predictability of refractive and visual outcomes with these IOLs.

  4. Evaluation of two streamlined life cycle assessment methods

    International Nuclear Information System (INIS)

    Hochschomer, Elisabeth; Finnveden, Goeran; Johansson, Jessica

    2002-02-01

    Two different methods for streamlined life cycle assessment (LCA) are described: the MECO method and SLCA. Both methods are tested on an existing case study of cars fuelled with petrol or ethanol, and of electric cars with electricity produced from hydro power or coal. The report also contains some background information on LCA and streamlined LCA, and a description of the case study used. The evaluation of the MECO and SLCA methods is based on a comparison of the results from the case study as well as on practical aspects. One conclusion is that the SLCA method has some limitations: the whole life cycle is not covered, it requires quite a lot of information, and there is room for arbitrariness. It is not very flexible and is difficult to develop further. We therefore do not recommend the SLCA method. The MECO method, in comparison, shows several attractive features. It is also interesting to note that the MECO method produces information that is complementary to that of a more traditional quantitative LCA. We suggest that the MECO method needs some further development and adjustment to Swedish conditions.

  5. Comparison of DUPIC fuel composition heterogeneity control methods

    International Nuclear Information System (INIS)

    Choi, Hang Bok; Ko, Won Il

    1999-08-01

    A method to reduce the effect of fuel composition heterogeneity on the core performance parameters has been studied for DUPIC fuel, which is made from spent pressurized water reactor (PWR) fuel by a dry refabrication process. This study focuses on reactivity control methods that use slightly enriched, depleted, or natural uranium to minimize the cost rise effect on the manufacturing of DUPIC fuel when adjusting the excess reactivity: control by slightly enriched and depleted uranium, control by natural uranium for high-reactivity spent PWR fuels, and control by natural uranium for linear-reactivity spent PWR fuels. The results of this study show that, with reactivity control by slightly enriched and depleted uranium, all the spent PWR fuels can be utilized as DUPIC fuel and the fraction of fresh uranium feed is 3.4% on average. For reactivity control by natural uranium, about 88% of spent PWR fuel can be utilized as DUPIC fuel when the linear-reactivity spent PWR fuels are used, and the amount of natural uranium feed needed to control the DUPIC fuel reactivity is negligible. (author). 13 refs., 16 tabs., 6 figs

  6. Identifying the contents of a type 1 diabetes outpatient care program based on the self-adjustment of insulin using the Delphi method.

    Science.gov (United States)

    Kubota, Mutsuko; Shindo, Yukari; Kawaharada, Mariko

    2014-10-01

    The objective of this study was to identify the items necessary for an outpatient care program based on the self-adjustment of insulin for type 1 diabetes patients. Two surveys based on the Delphi method were conducted. The survey participants were 41 certified diabetes nurses in Japan. An outpatient care program based on the self-adjustment of insulin was developed from the pertinent published work and expert opinions. The questionnaire developed from this care program contained a total of 87 survey items, covering matters such as the establishment of prerequisites and a cooperative relationship, the basics of blood glucose pattern management, learning and practice sessions for the self-adjustment of insulin, the implementation of the self-adjustment of insulin, and feedback. Approval of an item was defined as agreement by 70% of participants. Participants agreed on all of the items in the first survey. Four new items were added, making a total of 91 items for the second survey, and participants agreed on the inclusion of 84 of them. The items necessary for a type 1 diabetes outpatient care program based on the self-adjustment of insulin were subsequently selected. It is believed that this care program received fairly strong approval from certified diabetes nurses; however, it will be necessary to evaluate the program further in conjunction with intervention studies in the future.

  7. Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.

    Science.gov (United States)

    Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M

    2015-08-01

    Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably.
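
    The model comparison described above can be sketched as fitting mortality models with and without complexity variables and comparing their c-statistics (AUC). The sketch assumes scikit-learn and pandas, and the data frame and coefficients below are synthetic stand-ins, not the study's registry data or models.

        # Compare discrimination of a base risk model vs. one with a complexity
        # variable (work RVUs), using AUC as the c-statistic.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "age": rng.normal(65, 10, n),
            "comorbidity_count": rng.poisson(2, n),
            "work_rvu": rng.normal(45, 8, n),
        })
        logit = -8 + 0.08 * df["age"] + 0.3 * df["comorbidity_count"] + 0.02 * df["work_rvu"]
        df["mortality"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic outcome

        def c_statistic(predictors):
            m = LogisticRegression(max_iter=1000).fit(df[predictors], df["mortality"])
            return roc_auc_score(df["mortality"], m.predict_proba(df[predictors])[:, 1])

        print(c_statistic(["age", "comorbidity_count"]))                # base model
        print(c_statistic(["age", "comorbidity_count", "work_rvu"]))    # + complexity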

  8. Optimum adjusting method of fuel in a reactor

    International Nuclear Information System (INIS)

    Otsuji, Niro; Shirakawa, Toshihisa; Toyoshi, Isamu; Tatemichi, Shin-ichiro; Mukai, Hideyuki.

    1976-01-01

    Object: To effectively select an intermediate pattern of control rods and thereby shorten the time required to adjust the fuel. Structure: The control rods are divided into several regions within the core, in a concentric, circular fashion or the like. A control rod position that satisfies the thermal limit value at the maximum power level is preset as a target position by a three-dimensional nuclear thermal-hydraulic calculation code or the like. Next, an intermediate pattern of the control rods in each region is determined on the basis of the target position, and, while judging the operational condition, a part of the fuel rods is maintained for a given period of time at a level above the power density of the target position, with the power increased within the range that does not produce pellet-cladding interaction (PCI), so that said power density may be learned. Thereafter, the power is rapidly decreased. Similar operation may be applied to the other fuel rods, after which the control rods may be set to the target position to obtain the maximum power level. (Ikeda, J.)

  9. Adolescents conceived by IVF: parenting and psychosocial adjustment.

    Science.gov (United States)

    Colpin, H; Bossaert, G

    2008-12-01

    A follow-up study was conducted in mid-adolescence on parenting and the child's psychosocial development after in vitro fertilization (IVF). The first phase of the study had compared 31 IVF families and 31 families with a naturally conceived child when the children were 2 years old (Colpin et al., 1995). Of these, 24 IVF families and 21 control families participated again when the children were 15-16 years old. Fathers, mothers and adolescents completed questionnaires assessing parenting style and stress, and adolescent psychosocial adjustment. No significant differences were found in self- or adolescent-reported parenting style, or in parenting stress between IVF mothers and mothers in the control group, nor between IVF fathers and fathers in the control group. Neither did we find significant differences in self- or parent-reported behavioural problems between adolescents conceived by IVF and those conceived naturally. Comparison of behavioural problems between IVF adolescents informed or not informed about the IVF conception did not reveal significant differences. Parenting and 15-16-year-old adolescents' psychosocial adjustment did not differ significantly between IVF families and control families. This study is, to the best of our knowledge, the first psychosocial follow-up in mid-adolescence, and adds to the evidence that IVF children and their parents are well-adjusted. Large-scale studies in adolescence are needed to support these findings.

  10. A particle method with adjustable transport properties - the generalized consistent Boltzmann algorithm

    International Nuclear Information System (INIS)

    Garcia, A.L.; Alexander, F.J.; Alder, B.J.

    1997-01-01

    The consistent Boltzmann algorithm (CBA) for dense, hard-sphere gases is generalized to obtain the van der Waals equation of state and the corresponding exact viscosity at all densities except at the highest temperatures. A general scheme for adjusting any of the transport coefficients to higher values is presented.

  11. Social and Emotional Adjustment of Siblings of Children with Autism

    Science.gov (United States)

    Pilowsky, Tammy; Yirmiya, Nurit; Doppelt, Osnat; Gross-Tsur, Varda; Shalev, Ruth S.

    2004-01-01

    Background: Social and emotional adjustment of siblings of children with autism was examined, to explore their risk or resilience to effects of genetic liability and environmental factors involved in having a sibling with autism. Method: Social-emotional adjustment, behavior problems, socialization skills, and siblings' relationships were compared…

  12. Perception of quality of life and social adjustment of patients with recurrent depression

    Directory of Open Access Journals (Sweden)

    Stanković Žana

    2006-01-01

    Introduction: Depression is the most commonly encountered psychiatric entity in clinical practice, accompanied by significant impairment of both social and professional functioning. In addition, depression frequently develops as a complication of other psychiatric disorders and various somatic diseases. Objective: To investigate the subjective perception of quality of life and social adjustment, the severity of depressive symptoms, and the degree of correlation between symptom severity and quality of life and social adjustment in patients with recurrent depression, in comparison with a group of patients with diabetes and a group of healthy subjects. Method: The study included 45 subjects of both sexes, aged 18 to 60 years, divided into three groups of 15 subjects each. The experimental group comprised patients diagnosed with recurrent depression in remission (DSM-IV); one control group consisted of patients diagnosed with type 2 diabetes mellitus and the other comprised healthy subjects. The assessment instruments were the Beck Depression Inventory (BDI), the Social Adaptation Self-evaluation Scale (SASS), and the Psychological General Well-Being Scale (WBQ). Results: A significant difference in both BDI and WBQ scores was found between the experimental group and the control group of healthy subjects (ANOVA, Mann-Whitney; p≤0.01), as well as between the two control groups (p≤0.02). The inverse correlation between mean BDI and SASS scores was significant in the control group of patients with diabetes, while significant inverse correlations between BDI and WBQ scores (Spearman correlation coefficient, p<0.01) were found in all groups of our study. Conclusion: In the group of patients with recurrent depression, a significant decline in quality of life and significantly more severe depressive symptoms were present in comparison with the group of healthy subjects, as well as a significant inverse correlation between the severity of depressive symptoms and quality of life.

  13. Effect of play therapy on behavioral problems of mal-adjusted pre-school children

    Directory of Open Access Journals (Sweden)

    Mehdi Khanbani

    2011-01-01

    Objective: The present research was conducted to study the effect of play therapy on reducing behavioral problems of maladjusted children (children with oppositional defiant disorder). Method: Using multistage cluster sampling, regions 6, 7, and 8 in Tehran were selected, and among the kindergartens of these areas, 3 kindergartens under the support of the welfare organization were randomly selected. From the pre-school children of these 3 kindergartens, 40 children who could have a behavioral disorder according to their teachers' and parents' complaints were carefully tested, and among them, based on the results obtained from the Child Symptom Inventory questionnaire (CSI-4, teacher's form) and a researcher-made self-control checklist, 16 children who showed severe symptoms of oppositional defiant disorder were selected and randomly divided into control and experimental groups. This research is quasi-experimental, with a pre-test, post-test, and control-group design. Results: The calculated F value for oppositional defiant disorder in the control and experimental groups is significant after fixing the effect of the pre-test (F(1,12) = 74.94, P < 0.001), so there is a significant difference between the means of the post-test scores of the two groups once the pre-test effect is held fixed. Comparison of the adjusted means of the two groups shows that the mean for attention-deficit hyperactivity disorder (ADHD) in the experimental group (M = 14.09) is lower than in the control group (M = 36.66). Therefore, applying play therapy in the experimental group, in comparison with the control group, which did not receive this instruction, reduced attention-deficit hyperactivity disorder (ADHD) in pre-school children. Conclusion: The results of this research show that children's disobedience is reduced by play therapy.

  14. Leg-adjustment strategies for stable running in three dimensions

    International Nuclear Information System (INIS)

    Peuker, Frank; Maufroy, Christophe; Seyfarth, André

    2012-01-01

    The dynamics of the center of mass (CoM) in the sagittal plane in humans and animals during running is well described by the spring-loaded inverted pendulum (SLIP). With appropriate parameters, SLIP running patterns are stable, and these models can recover from perturbations without the need for corrective strategies, such as the application of additional forces. Rather, it is sufficient to adjust the leg to a fixed angle relative to the ground. In this work, we consider the extension of the SLIP to three dimensions (3D SLIP) and investigate feed-forward strategies for leg adjustment during the flight phase. As in the SLIP model, the leg is placed at a fixed angle. We extend the scope of possible reference axes from only fixed horizontal and vertical axes to include the CoM velocity vector as a movement-related reference, resulting in six leg-adjustment strategies. Only leg-adjustment strategies that include the CoM velocity vector produced stable running and large parameter domains of stability. The ability of the model to recover from perturbations along the direction of motion (directional stability) depended on the strategy for lateral leg adjustment. Specifically, asymptotic and neutral directional stability was observed for strategies based on the global reference axis and the velocity vector, respectively. Additional features of velocity-based leg adjustment are running at arbitrarily low speeds (kinetic energies) and the emergence of large domains of stable 3D running that are smoothly transferred to 2D SLIP stability and even to 1D SLIP hopping. One of the additional leg-adjustment strategies represented a large convex region of parameters where stable and robust hopping and running patterns exist. Therefore, this strategy is a promising candidate for implementation in engineering applications, such as robots. In a preliminary comparison, the model predictions were in good agreement with the experimental data, suggesting that the 3D SLIP is an…

  15. The barriers to and enablers of providing reasonably adjusted health services to people with intellectual disabilities in acute hospitals: evidence from a mixed-methods study.

    Science.gov (United States)

    Tuffrey-Wijne, Irene; Goulding, Lucy; Giatras, Nikoletta; Abraham, Elisabeth; Gillard, Steve; White, Sarah; Edwards, Christine; Hollins, Sheila

    2014-04-16

    To identify the factors that promote and compromise the implementation of reasonably adjusted healthcare services for patients with intellectual disabilities in acute National Health Service (NHS) hospitals. A mixed-methods study involving interviews, questionnaires and participant observation (July 2011-March 2013). Six acute NHS hospital trusts in England. Reasonable adjustments for people with intellectual disabilities were identified through the literature. Data were collected on implementation and staff understanding of these adjustments. Data collected included staff questionnaires (n=990), staff interviews (n=68), interviews with adults with intellectual disabilities (n=33), questionnaires (n=88) and interviews (n=37) with carers of patients with intellectual disabilities, and expert panel discussions (n=42). Hospital strategies that supported implementation of reasonable adjustments did not reliably translate into consistent provision of such adjustments. Good practice often depended on the knowledge, understanding and flexibility of individual staff and teams, leading to the delivery of reasonable adjustments being haphazard throughout the organisation. Major barriers included: lack of effective systems for identifying and flagging patients with intellectual disabilities, lack of staff understanding of the reasonable adjustments that may be needed, lack of clear lines of responsibility and accountability for implementing reasonable adjustments, and lack of allocation of additional funding and resources. Key enablers were the Intellectual Disability Liaison Nurse and the ward manager. The evidence suggests that ward culture, staff attitudes and staff knowledge are crucial in ensuring that hospital services are accessible to vulnerable patients. The authors suggest that flagging the need for specific reasonable adjustments, rather than the vulnerable condition itself, may address some of the barriers. Further research is recommended that describes and

  16. Qualitative Analysis of Chang'e-1 γ-ray Spectrometer Spectra Using Noise Adjusted Singular Value Decomposition Method

    International Nuclear Information System (INIS)

    Yang Jia; Ge Liangquan; Xiong Shengqing

    2010-01-01

    Owing to the shape features of the spectra in Chang'e-1 γ-ray spectrometer (CE1-GRS) data, it is difficult to determine elemental compositions on the lunar surface. To address this problem, this paper proposes using the noise-adjusted singular value decomposition (NASVD) method to extract orthogonal spectral components from CE1-GRS data. The peak signals in the spectra of the lower-order layers corresponding to the observed spectrum of each lunar region are then analyzed. Elemental compositions of each lunar region can be determined based upon whether the energy of each peak signal equals the energy of the characteristic gamma-ray line emissions of specific elements. The result shows that a number of elements such as U, Th, K, Fe, Ti, Si, O, Al, Mg, Ca and Na are qualitatively determined by this method. (authors)
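
    A minimal sketch of the NASVD idea, assuming the spectra are stacked as a rows-by-channels count matrix; the function name and component count are illustrative:

    ```python
    import numpy as np

    def nasvd(spectra, n_components=4):
        """Noise-adjusted SVD smoothing of gamma-ray spectra (a sketch).

        spectra: (n_spectra, n_channels) array of counts. Channels are
        rescaled so Poisson noise has roughly unit variance, an SVD is
        taken, and only the leading (signal) components are kept."""
        mean_spectrum = spectra.mean(axis=0)
        scale = np.sqrt(np.maximum(mean_spectrum, 1e-12))  # Poisson noise level
        U, s, Vt = np.linalg.svd(spectra / scale, full_matrices=False)
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        return recon * scale  # smoothed spectra for peak analysis
    ```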

  17. Structured Feedback Training for Time-Out: Efficacy and Efficiency in Comparison to a Didactic Method.

    Science.gov (United States)

    Jensen, Scott A; Blumberg, Sean; Browning, Megan

    2017-09-01

    Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. Findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.

  18. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data

    DEFF Research Database (Denmark)

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-01-01

    …There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold LCA and SNOB LCA). METHODS: The performance of these three methods was compared: (i) quantitatively, using the number of subgroups detected, the classification probability of individuals into subgroups, and the reproducibility of results, and (ii) qualitatively, using subjective judgments about each program… classify individuals into those subgroups. CONCLUSIONS: Our subjective judgement was that Latent Gold offered the best balance of sensitivity to subgroups, ease of use and presentation of results with these datasets, but we recognise that different clustering methods may suit other types of data…

  19. Interconnection: A qualitative analysis of adjusting to living with renal cell carcinoma.

    Science.gov (United States)

    Leal, Isabel; Milbury, Kathrin; Engebretson, Joan; Matin, Surena; Jonasch, Eric; Tannir, Nizar; Wood, Christopher G; Cohen, Lorenzo

    2018-04-01

    Objective: Adjusting to cancer is an ongoing process, yet few studies explore this adjustment from a qualitative perspective. The aim of our qualitative study was to understand how patients construct their experience of adjusting to living with cancer. Qualitative analysis was conducted of written narratives collected from four separate writing sessions as part of a larger expressive writing clinical trial with renal cell carcinoma patients. Thematic analysis and constant comparison were employed to code the primary patterns in the data into themes until thematic saturation was reached at 37 participants. A social constructivist perspective informed data interpretation. Interconnection described the overarching theme underlying the process of adjusting to cancer and involved four interrelated themes: (1) discontinuity: feelings of disconnection and loss following diagnosis; (2) reorientation: to the reality of cancer, psychologically and physically; (3) rebuilding: struggling through existential distress to reconnect; and (4) expansion: finding meaning in interconnections with others. Participants related a dialectical movement in which disruption and loss catalyzed an ongoing process of finding meaning. Our findings suggest that adjusting to living with cancer is an ongoing, iterative, nonlinear process. The dynamic interactions between the different themes in this process describe the transformation of meaning as participants move through and revisit prior themes in response to fluctuating symptoms and medical news. It is important that clinicians recognize the dynamic and ongoing process of adjusting to cancer to support patients in addressing their unmet psychosocial needs throughout the changing illness trajectory.

  20. Comparison of three noninvasive methods for hemoglobin screening of blood donors.

    Science.gov (United States)

    Ardin, Sergey; Störmer, Melanie; Radojska, Stela; Oustianskaia, Larissa; Hahn, Moritz; Gathof, Birgit S

    2015-02-01

    To prevent phlebotomy of anemic individuals and to ensure the hemoglobin (Hb) content of the blood units, Hb screening of blood donors before donation is essential. Hb values are mostly evaluated by measurement of capillary blood obtained from fingerstick. Rapid noninvasive methods have recently become available and may be preferred by donors and staff. The aim of this study was to evaluate for the first time all the different noninvasive methods for Hb screening. Blood donors were screened for Hb levels in three different trials using three different noninvasive methods (Haemospect [MBR Optical Systems GmbH & Co. KG], NBM 200 [LMB Technology GmbH], Pronto-7 [Masimo Europe Ltd]) in comparison to the established fingerstick method (CompoLab Hb [Fresenius Kabi GmbH]) and to levels obtained from venous samples on a cell counter (Sysmex [Sysmex Europe GmbH]) as reference. The usability of the noninvasive methods was assessed with a specially developed survey. Technical failures occurred with the Pronto-7 due to nail polish, skin color, or ambient light. The NBM 200 also showed a high sensitivity to ambient light and noticeably lower Hb levels for women than obtained from the Sysmex. The statistical analysis showed the following bias and standard deviation of differences for all methods in comparison to the venous results: Haemospect, -0.22 ± 1.24; NBM 200, -0.12 ± 1.14; Pronto-7, -0.50 ± 0.99; and CompoLab Hb, -0.53 ± 0.81. Noninvasive Hb tests represent an attractive alternative by eliminating pain and reducing risks of blood contamination. The main problem for generating reliable results seems to be preanalytical variability in sampling. Despite the sensitivity to environmental stress, all methods are suitable for Hb measurement. © 2014 AABB.
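
    The reported bias ± standard deviation of differences is the core of a Bland-Altman style comparison, which could be computed as in this sketch (the paired readings are hypothetical, not the study's data):

    ```python
    import numpy as np

    def bias_and_limits(method, reference):
        """Bias ± SD of differences and 95% limits of agreement."""
        d = np.asarray(method, float) - np.asarray(reference, float)
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical paired Hb readings in g/dL: device vs venous reference
    device = [13.1, 14.7, 12.2, 15.0, 13.8]
    venous = [13.5, 14.9, 12.8, 15.2, 14.1]
    print(bias_and_limits(device, venous))
    ```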

  1. Comparison of the dose evaluation methods for criticality accident

    International Nuclear Information System (INIS)

    Shimizu, Yoshio; Oka, Tsutomu

    2004-01-01

    The improvement of dose evaluation methods for criticality accidents is important for rationalizing the design of nuclear fuel cycle facilities. The source spectra of neutrons and gamma rays from a criticality accident depend on the condition of the source: its materials, moderation, density and so on. A comparison of dose evaluation methods for a criticality accident is made, and several methods combining a criticality calculation with a shielding calculation are proposed. Prompt neutron and gamma-ray doses from nuclear criticality of some uranium systems have been evaluated as the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by several calculation methods, which are combinations of code and cross-section library, as follows: (a) SAS1X (ENDF/B-IV), (b) MCNP4C (ENDF/B-VI)-ANISN (DLC23E or JSD120), (c) MCNP4C-MCNP4C (ENDF/B-VI). Each consists of a criticality calculation followed by a shielding calculation. These calculation methods are compared with respect to the tissue absorbed dose and the spectra at 2 m from the source. (author)

  2. Case mix adjusted variation in cesarean section rate in Sweden.

    Science.gov (United States)

    Mesterton, Johan; Ladfors, Lars; Ekenberg Abreu, Anna; Lindgren, Peter; Saltvedt, Sissel; Weichselbraun, Marianne; Amer-Wåhlin, Isis

    2017-05-01

    Cesarean section (CS) rate is a well-established indicator of performance in maternity care and is also related to resource use. Case mix adjustment of CS rates when performing comparisons between hospitals is important. The objective of this study was to estimate case mix adjusted variation in CS rate between hospitals in Sweden. In total, 139 756 deliveries in 2011 and 2012 were identified in administrative systems in seven regions covering 67% of all deliveries in Sweden. Data were linked to the Medical birth register and population data. Twenty-three different sociodemographic and clinical characteristics were used for adjustment. Analyses were performed for the entire study population as well as for two subgroups. Logistic regression was used to analyze differences between hospitals. The overall CS rate was 16.9% (hospital minimum-maximum 12.1-22.6%). Significant variations in CS rate between hospitals were observed after case mix adjustment: hospital odds ratios for CS varied from 0.62 (95% CI 0.53-0.73) to 1.45 (95% CI 1.37-1.52). In nulliparous, cephalic, full-term, singletons the overall CS rate was 14.3% (hospital minimum-maximum: 9.0-19.0%), whereas it was 4.7% for multiparous, cephalic, full-term, singletons with no previous CS (hospital minimum-maximum: 3.2-6.7%). In both subgroups significant variations were observed in case mix adjusted CS rates. Significant differences in CS rate between Swedish hospitals were found after adjusting for differences in case mix. This indicates a potential for fewer interventions and lower resource use in Swedish childbirth care. Best practice sharing and continuous monitoring are important tools for improving childbirth care. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.
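
    A generic sketch of this kind of case-mix adjustment (indirect standardization via logistic regression), assuming a pandas DataFrame with one row per delivery; the column names and the two covariates stand in for the study's 23 characteristics:

    ```python
    import statsmodels.formula.api as smf

    def adjusted_cs_ratio(df):
        """Observed-to-expected CS ratio per hospital (indirect standardization).

        df: one row per delivery, with 'cs' (0/1), 'hospital', and case-mix
        covariates; 'age' and 'parity' are illustrative names."""
        model = smf.logit("cs ~ age + parity", data=df).fit(disp=False)
        df = df.assign(expected=model.predict(df))
        g = df.groupby("hospital")
        return g["cs"].sum() / g["expected"].sum()  # >1: more CS than case mix predicts
    ```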

  3. The big five and identification-contrast processes in social comparison in adjustment to cancer treatment

    NARCIS (Netherlands)

    van der Zee, KI; Buunk, BP; Sanderman, R; Botke, G; van den Bergh, F

    1999-01-01

    The present study examined the relationship between social comparison processes and the Big Five personality factors. In a sample of 112 patients with various forms of cancer it was found that Neuroticism was associated with a tendency to focus on the negative interpretation of social comparison

  4. Systematic review of general burden of disease studies using disability-adjusted life years

    Directory of Open Access Journals (Sweden)

    Polinder Suzanne

    2012-11-01

    Objective: To systematically review the methodology of general burden of disease studies. Three key questions were addressed: (1) what was the quality of the data, (2) which methodological choices were made to calculate disability-adjusted life years (DALYs), and (3) were uncertainty and risk factor analyses performed? Furthermore, the DALY outcomes of the included studies were compared. Methods: Burden of disease studies (1990 to 2011) in international peer-reviewed journals and in grey literature were identified, with the main inclusion criteria being multiple-cause studies that quantified the burden of disease as the sum of the burden of all distinct diseases expressed in DALYs. Electronic database searches included Medline (PubMed), EMBASE, and Web of Science. Studies were collated by study population, design, methods used to measure mortality and morbidity, risk factor analyses, and evaluation of results. Results: Thirty-one studies met the inclusion criteria of our review. Overall, studies followed the Global Burden of Disease (GBD) approach. However, considerable variation existed in disability weights, discounting, age-weighting, and adjustments for uncertainty. Few studies reported whether mortality data were corrected for missing data or underreporting. Comparison with the GBD DALY outcomes by country revealed that for some studies DALY estimates were of similar magnitude; others reported DALY estimates that were two times higher or lower. Conclusions: Overcoming "error" variation due to the use of different methodologies and low-quality data is a critical priority for advancing burden of disease studies. This would improve the detection of true variation in DALY outcomes between populations or over time.
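
    For orientation, the basic DALY arithmetic these studies share, in its simplest form, i.e. without the discounting and age-weighting whose variants the review highlights (the numbers are invented):

    ```python
    def daly(deaths, life_expectancy, prevalent_cases, disability_weight, duration=1.0):
        """DALY = YLL + YLD in its simplest form.

        YLL = deaths x standard life expectancy at the age of death;
        YLD = prevalent cases x disability weight x average duration."""
        yll = deaths * life_expectancy
        yld = prevalent_cases * disability_weight * duration
        return yll + yld

    print(daly(deaths=120, life_expectancy=30.0, prevalent_cases=5000,
               disability_weight=0.2, duration=1.0))  # 3600 + 1000 = 4600 DALYs
    ```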

  5. Adjusts of control rod cross sections and its utilization in power distribution calculations for Angra-1 reactor

    International Nuclear Information System (INIS)

    Pina, C.M. de

    1981-01-01

    One of the most important parts of neutronics calculations is the study of core behavior with inserted control rods. The first stage of these calculations consists of generating equivalent microscopic cross sections for the basic cells containing fuel or absorber material; the cross sections are then adjusted. The parameters that guide these adjustments were chosen by comparing data from control rod supercell calculations performed with the Hammer and Citation computer codes. The effect of these adjustments on core integral parameters was evaluated; in this work, only the two-dimensional core power distribution with the D bank completely inserted is studied. (E.G.)

  6. An Enzymatic Clinical Chemistry Laboratory Experiment Incorporating an Introduction to Mathematical Method Comparison Techniques

    Science.gov (United States)

    Duxbury, Mark

    2004-01-01

    An enzymatic laboratory experiment based on the analysis of serum is described that is suitable for students of clinical chemistry. The experiment incorporates an introduction to mathematical method-comparison techniques in which three different clinical glucose analysis methods are compared using linear regression and Bland-Altman difference…

  7. Validation and comparison of two sampling methods to assess dermal exposure to drilling fluids and crude oil.

    Science.gov (United States)

    Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez

    2014-06-01

    Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  8. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  9. Simple rules for design of exhaust mufflers and a comparison with four-pole and FEM calculations

    DEFF Research Database (Denmark)

    Jensen, Morten Skaarup; Ødegaard, John

    1999-01-01

    For good muffler design it is advisable to use an advanced computational method such as four-pole theory, FEM or BEM. To get a starting point for these methods and to suggest adjustments to the geometry and materials, it is useful to have some simple rules of thumb. This paper presents a number of such "rules", and illustrates their reliability and limitations by comparing with results using some of the advanced computational methods. At the same time, this investigation also gives a comparison between four-pole theory and BEM.
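
    As an example of the quick estimates such rules of thumb enable before any four-pole or BEM run, the classic plane-wave result for a simple expansion chamber; a sketch under textbook assumptions, not taken from the paper:

    ```python
    import numpy as np

    def expansion_chamber_tl(freq_hz, S1, S2, L, c=343.0):
        """Transmission loss (dB) of a simple expansion chamber from
        plane-wave theory: TL = 10 log10(1 + 0.25 (m - 1/m)^2 sin^2(kL))."""
        m = S2 / S1                       # area expansion ratio
        k = 2.0 * np.pi * freq_hz / c     # wavenumber
        return 10.0 * np.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2 * np.sin(k * L) ** 2)

    # 0.005 m^2 pipe into a 0.05 m^2 chamber, 0.3 m long, at 250 Hz
    print(expansion_chamber_tl(250.0, S1=0.005, S2=0.05, L=0.3))
    ```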

  10. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    A Wireless Integrated Information Network (WMN) consists of nodes that gather integrated information, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  11. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, May 2016

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Cabellos De Francisco, Oscar; Beck, Bret; Ignatyuk, Anatoly V.; Palmiotti, Giuseppe; Grudzevich, Oleg T.; Salvatores, Massimo; Chadwick, Mark; Pelloni, Sandro; Diez De La Obra, Carlos Javier; Wu, Haicheng; Sobes, Vladimir; Rearden, Bradley T.; Yokoyama, Kenji; Hursin, Mathieu; Penttila, Heikki; Kodeli, Ivan-Alexander; Plevnik, Lucijan; Plompen, Arjan; Gabrielli, Fabrizio; Leal, Luiz Carlos; Aufiero, Manuele; Fiorito, Luca; Hummel, Andrew; Siefman, Daniel; Leconte, Pierre

    2016-05-01

    The aim of WPEC Subgroup 39, 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files', is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. WPEC Subgroup 40-CIELO (Collaborative International Evaluated Library Organization) provides a new working paradigm to facilitate evaluated nuclear reaction data advances. It brings together experts from across the international nuclear reaction data community to identify and document discrepancies among existing evaluated data libraries, measured data, and model calculation interpretations, and aims to make progress in reconciling these discrepancies to create more accurate ENDF-formatted files. SG40-CIELO focusses on 6 important isotopes: 1H, 16O, 56Fe, 235,238U, 239Pu. This document is the proceedings of the seventh formal Subgroup 39 meeting and of the joint SG39+SG40 session held at the NEA, OECD Conference Center, Paris, France, on 10-11 May 2016. It comprises a summary record of the meeting and all the available presentations (slides) given by the participants: A - Welcome and actions review (Oscar CABELLOS); B - Methods: - XGPT: uncertainty propagation and data assimilation from continuous energy covariance matrix and resonance parameters covariances (Manuele AUFIERO); - Optimal experiment utilization (REWINDing PIA) (G. PALMIOTTI); C - Experiment analysis, sensitivity calculations and benchmarks: - Tripoli-4 analysis of SEG experiments (Andrew HUMMEL); - Tripoli-4 analysis of BERENICE experiments (P. DUFAY, Cyrille DE SAINT JEAN); - Preparation of sensitivities of k-eff, beta-eff and shielding benchmarks for adjustment exercise (Ivo KODELI); - SA and…
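
    The adjustments discussed by SG39 are commonly formulated as a generalized linear least-squares (GLLS) update of prior cross-section data and covariances with integral-experiment evidence. A schematic sketch (the notation is generic, not from the proceedings):

    ```python
    import numpy as np

    def glls_adjust(sigma, M, S, d, V):
        """One GLLS adjustment step.

        sigma: prior nuclear-data parameters (n,)
        M: prior covariance of sigma (n, n)
        S: sensitivities of integral responses to sigma (m, n)
        d: measured-minus-calculated benchmark discrepancies (m,)
        V: covariance of the integral experiments (m, m)"""
        G = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)  # gain matrix
        return sigma + G @ d, M - G @ S @ M           # posterior mean, covariance
    ```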

  12. Effect of coping with stress training on the social adjustment of students with learning disability

    Directory of Open Access Journals (Sweden)

    Saifolah Khodadadi

    2017-06-01

    Learning disability covers a wide range of educational problems, and treating these problems requires attention to the child's social, emotional and behavioral treatment. Given the prevalence of learning disabilities among children and the difficulties these children face, the purpose of this study was to investigate the effect of coping-with-stress training on the social adjustment of students with learning disabilities. The statistical population consisted of all male students with learning disabilities at a learning disabilities center, from which 34 students were selected by convenience sampling. The social adjustment questionnaire was used. The experimental group received coping strategies training in 9 weekly sessions of 90 minutes. Covariance analysis was used to compare the scores. The results showed that there was a significant difference between the pre-test and post-test of the experimental group. The findings also indicated that coping strategies training increased the social, affective and educational adjustment of the experimental group in comparison with the control group. Appropriate strategies can be used for dealing with stress in students with learning disabilities. Coping training can be used as a supplemental program in schools and centers for learning disabilities to improve the adjustment problems of these students.

  13. The relationship between effectiveness and costs measured by a risk-adjusted case-mix system: multicentre study of Catalonian population data bases

    Directory of Open Access Journals (Sweden)

    Flor-Serra Ferran

    2009-06-01

    Background: The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors determining inadequate correlations and the opinion of health professionals on these instruments. Methods/Design: We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population databases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics], and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The model of cost/patient/year will differentiate fixed/semi-fixed (visits) costs from the variable costs of each patient attended per year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. The effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: percentile 50). The correlation between the efficiency (relative weights) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. Statistical analysis: multiple regression…
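
    A compact sketch of the planned observed/expected efficiency index (indirect standardization against the totality of centres/patients); the DataFrame and column names are illustrative assumptions:

    ```python
    import pandas as pd

    def efficiency_index(df, value="visits", centre="centre", case_mix="acg"):
        """Observed/expected use per centre, with expected rates taken from
        the whole population within each ACG-like morbidity category."""
        expected = df.groupby(case_mix)[value].transform("mean")
        g = df.assign(expected=expected).groupby(centre)
        return g[value].sum() / g["expected"].sum()  # <1: fewer resources than expected
    ```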

  14. Comparison of urine analysis using manual and sedimentation methods.

    Science.gov (United States)

    Kurup, R; Leich, M

    2012-06-01

    Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine. However, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To ensure a high level of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study was to determine whether the centrifuged method is more clinically informative than the uncentrifuged method. In this study, the results obtained from the centrifuged and uncentrifuged methods were compared. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results obtained from both methods were recorded in a log book. These results were then entered into a database created in Microsoft Excel and analysed for differences and similarities using this application. Analysis was further done in SPSS software to compare the results using Pearson's correlation. When compared using Pearson's correlation coefficient analysis, both methods showed a good correlation between urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods. However, the uncentrifuged method provides a rapid turnaround time.

  15. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
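
    To make the conceptual comparison concrete, a minimal nearest-neighbour propensity-score matcher of the kind weighed against the alternative estimators here (an illustrative sketch, not the paper's implementation):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ps_match(X, treated, caliper=0.05):
        """1:1 nearest-neighbour matching on the propensity score,
        without replacement, within a caliper."""
        treated = np.asarray(treated)
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        t_idx = np.where(treated == 1)[0]
        c_idx = np.where(treated == 0)[0]
        pairs, used = [], set()
        for i in t_idx:
            order = np.argsort(np.abs(ps[c_idx] - ps[i]))
            for j in order:
                if abs(ps[c_idx[j]] - ps[i]) > caliper:
                    break  # no acceptable control left for this case
                if c_idx[j] not in used:
                    used.add(c_idx[j])
                    pairs.append((i, c_idx[j]))
                    break
        return pairs, ps
    ```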

  16. Parent-Child Communication and Adjustment Among Children With Advanced and Non-Advanced Cancer in the First Year Following Diagnosis or Relapse.

    Science.gov (United States)

    Keim, Madelaine C; Lehmann, Vicky; Shultz, Emily L; Winning, Adrien M; Rausch, Joseph R; Barrera, Maru; Gilmer, Mary Jo; Murphy, Lexa K; Vannatta, Kathryn A; Compas, Bruce E; Gerhardt, Cynthia A

    2017-09-01

    To examine parent-child communication (i.e., openness, problems) and child adjustment among youth with advanced or non-advanced cancer and comparison children. Families (n = 125) were recruited after a child's diagnosis/relapse and stratified by advanced (n = 55) or non-advanced (n = 70) disease. Comparison children (n = 60) were recruited from local schools. Children (ages 10-17) reported on communication (Parent-Adolescent Communication Scale) with both parents, while mothers reported on child adjustment (Child Behavior Checklist) at enrollment (T1) and one year (T2). Openness/problems in communication did not differ across groups at T1, but problems with fathers were higher among children with non-advanced cancer versus comparisons at T2. Openness declined for all fathers, while changes in problems varied by group for both parents. T1 communication predicted later adjustment only for children with advanced cancer. Communication plays an important role, particularly for children with advanced cancer. Additional research with families affected by life-limiting conditions is needed. © The Author 2017. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  17. Adjustment of positional geodetic networks by unconventional estimations

    Directory of Open Access Journals (Sweden)

    Silvia Gašincová

    2010-06-01

    The content of this paper is the adjustment of positional geodetic networks by robust estimation. The techniques (based on unconventional estimations using the repeated least-squares method) that have turned out to be suitable and applicable in practice are demonstrated on the example of a local geodetic network, which was established for this thesis. The following techniques, many of them published in the foreign literature, were chosen for comparison with the method of least squares: the M-estimation of Biweight, the M-estimation of Welsch, and the Danish method. All presented methods are based on the principle of the repeated least-squares method with gradual changes to the weights of the individual measurements. In the first stage a standard least-squares adjustment is carried out; in the following steps (iterations) the individual weights are gradually changed according to the relevant weight function. The iteration process stops when no outlying measurements remain in the file of measured data. MatLab version 5.2 was used to implement the mathematical adjustment.
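
    All three robust estimators share the same skeleton: repeat least squares and gradually re-weight the measurements. A sketch using Tukey's biweight as the weight function (the Welsch and Danish variants differ only in that function); array names are illustrative:

    ```python
    import numpy as np

    def irls_biweight(A, y, c=4.685, tol=1e-8, max_iter=50):
        """Repeated least squares with Tukey biweight re-weighting.

        A: design matrix (n, m); y: observations (n,).
        Returns the robust estimate and the final weights
        (weight 0 marks measurements flagged as outliers)."""
        x = np.zeros(A.shape[1])
        w = np.ones(len(y))
        for _ in range(max_iter):
            W = np.diag(w)
            x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
            r = y - A @ x_new
            s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
            s = s if s > 0 else 1.0
            u = r / (c * s)
            w = np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)  # biweight
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x, w
    ```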

  18. 39 CFR 3010.25 - Limitation on unused rate adjustment authority rate adjustments.

    Science.gov (United States)

    2010-07-01

    ... only be applied together with inflation-based limitation rate adjustments or when inflation-based... used in lieu of an inflation-based limitation rate adjustment. ...

  19. Recent amendments of the KTA 2101.2 fire barrier resistance rating method for German NPP and comparison to the Eurocode t-equivalent method

    Energy Technology Data Exchange (ETDEWEB)

    Forell, Burkhard [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany)

    2015-12-15

    The German nuclear standard KTA 2101 on "Fire Protection in Nuclear Power Plants", Part 2: "Fire Protection of Structural Plant Components", includes a simplified method for the fire resistance rating of fire barrier elements based on the t-equivalent approach. The method covers the specific features of compartments in nuclear power plant buildings in terms of the boundary conditions that have to be expected in the event of fire. The method has proven to be relatively simple and straightforward to apply. The paper gives an overview of the amendments to the rating method made within the regular review of KTA 2101.2. A comparison with the method of the non-nuclear Eurocode 1 is also provided. The Eurocode method is closely connected to the German standard DIN 18230 on structural fire protection in industrial buildings. Special emphasis in the comparison is given to the ventilation factor, which has a large impact on the required fire resistance.

  20. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Directory of Open Access Journals (Sweden)

    Sandvik Leiv

    2011-04-01

    Background: The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Methods: Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results: The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions: The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
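
    A short sketch of the recommended analysis, i.e. Welch's unequal-variance t test with a confidence interval for the difference between means, on made-up event counts:

    ```python
    import numpy as np
    from scipy import stats

    a = np.array([0, 1, 2, 2, 3, 1, 0, 2])   # events per individual, group A
    b = np.array([1, 2, 3, 3, 4, 2, 1, 3])   # events per individual, group B

    t, p = stats.ttest_ind(a, b, equal_var=False)   # Welch U test

    # Welch confidence interval for the difference between means
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    diff, se = a.mean() - b.mean(), np.sqrt(va + vb)
    dof = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, dof) * se
    print(t, p, ci)
    ```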

  1. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In both the simulated and the real datasets, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, and subgroup-adjusted normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
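
    A sketch of the SAN idea under the age/gender adjustment used in the study: within each subgroup, each site's values are rescaled so their mean and SD match the pooled reference. The DataFrame and column names are assumptions:

    ```python
    import pandas as pd

    def san(df, value="result", site="site", keys=("age_band", "sex")):
        """Rescale each site's results so that, within every subgroup,
        their mean and SD match the pooled, structure-adjusted ones."""
        keys = list(keys)
        ref_mean = df.groupby(keys)[value].transform("mean")
        ref_std = df.groupby(keys)[value].transform("std")
        loc_mean = df.groupby([site] + keys)[value].transform("mean")
        loc_std = df.groupby([site] + keys)[value].transform("std")
        return (df[value] - loc_mean) / loc_std * ref_std + ref_mean
    ```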

  2. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    Science.gov (United States)

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect for a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes that are usable with little new parameterization and robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to the performance of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits the dotted surface of cells. Copyright © 2011 International Society for Advancement of Cytometry.

  3. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)

    2015-11-01

    Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.

  4. Assessing Major Adjustment Problems of Freshman Students in ...

    African Journals Online (AJOL)

    Ethiopian Journal of Education and Sciences ... The data was analyzed by using both descriptive and inferential statistical methods. ... in Jimma University experience social adjustment problems more than educational and personal-psychological ones, ...

  5. Culture, Cross-Role Consistency, and Adjustment: Testing Trait and Cultural Psychology Perspectives

    OpenAIRE

    Church, A. Timothy; Anderson-Harumi, Cheryl A.; del Prado, Alicia M.; Curtis, Guy J.; Tanaka-Matsumi, Junko; Valdez Medina, José L.; Mastor, Khairul A.; White, Fiona A.; Miramontes, Lilia A.; Katigbak, Marcia S.

    2008-01-01

    Trait and cultural psychology perspectives on cross-role consistency and its relation to adjustment were examined in two individualistic cultures, the United States (N = 231) and Australia (N = 195), and four collectivistic cultures, Mexico (N = 199), Philippines (N = 195), Malaysia (N = 217), and Japan (N = 180). Cross-role consistency in trait ratings was evident in all cultures, supporting trait perspectives. Cultural comparisons of mean consistency provided support for cultural psychology...

  6. A Simplified Version of the Fuzzy Decision Method and its Comparison with the Paraconsistent Decision Method

    Science.gov (United States)

    de Carvalho, Fábio Romeu; Abe, Jair Minoro

    2010-11-01

    Two recent non-classical logics have been used for decision making: fuzzy logic and the paraconsistent annotated evidential logic Et. In this paper we present a simplified version of the fuzzy decision method and its comparison with the paraconsistent one. Paraconsistent annotated evidential logic Et, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications such as information technology, robotics, artificial intelligence, production engineering, decision making, etc. Intuitively, an Et logic formula has the form p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) found in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian unit square (CUS). This set, equipped with an order relation similar to that of the real numbers, forms a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It tries to systematize the study of knowledge, seeking mainly to study fuzzy knowledge (you do not know what it means) and to distinguish it from imprecise knowledge (you know what it means, but you do not know its exact value). This logic is similar to the paraconsistent annotated one, except that it attributes a single numeric value (not two) to each proposition, so we can say that it is a one-valued logic. This number expresses the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X, characterized by a function f(x). For each element x∈X, one has y = f(x)∈[0, 1]. The number y is called the degree of pertinence of x in A. Decision-making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study for a simplified…
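
    As a hedged illustration of how an Et annotation (a, b) can drive a decision, in the usual 'para-analyzer' style; the thresholds and function name are assumptions, not the paper's:

    ```python
    def et_decision(a, b, h_min=0.5, g_max=0.5):
        """Read an Et annotation (a, b): H = a - b is the degree of certainty,
        G = a + b - 1 the degree of uncertainty/contradiction."""
        H, G = a - b, a + b - 1.0
        if abs(G) > g_max:
            return "inconsistent" if G > 0 else "indeterminate"
        if H >= h_min:
            return "accept"
        if H <= -h_min:
            return "reject"
        return "insufficient evidence"

    print(et_decision(0.8, 0.1))  # strong belief, little disbelief -> accept
    ```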

  7. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
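
    One of the regression models offered by the site, Deming regression, fits in a few lines; a sketch assuming the error-variance ratio is known (or taken as 1):

    ```python
    import numpy as np

    def deming(x, y, lam=1.0):
        """Deming regression slope/intercept; lam is the ratio of the
        error variances of y and x (1.0 = equal imprecision)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx = ((x - x.mean()) ** 2).mean()
        syy = ((y - y.mean()) ** 2).mean()
        sxy = ((x - x.mean()) * (y - y.mean())).mean()
        slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                 + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
        return slope, y.mean() - slope * x.mean()
    ```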

  8. Method comparison of ultrasound and kilovoltage x-ray fiducial marker imaging for prostate radiotherapy targeting

    Science.gov (United States)

    Fuller, Clifton David; Thomas, Charles R., Jr.; Schwartz, Scott; Golden, Nanalei; Ting, Joe; Wong, Adrian; Erdogmus, Deniz; Scarbrough, Todd J.

    2006-10-01

    Several measurement techniques have been developed to address the capability for target volume reduction via target localization in image-guided radiotherapy; among these have been ultrasound (US) and fiducial marker (FM) software-assisted localization. In order to assess interchangeability between methods, US and FM localization were compared using established techniques for determining agreement between measurement methods when a 'gold-standard' comparator does not exist, after performing both techniques daily on a sequential series of patients. At least 3 days prior to CT simulation, four gold seeds were placed within the prostate. FM software-assisted localization utilized the ExacTrac X-Ray 6D (BrainLab AG, Germany) kVp x-ray image acquisition system to determine prostate position; US prostate targeting was performed on each patient using the SonArray (Varian, Palo Alto, CA). Patients were aligned daily using laser alignment of skin marks. Directional shifts were then calculated by each respective system in the X, Y and Z dimensions before each daily treatment fraction, prior to any treatment or couch adjustment, as well as a composite vector of displacement. Directional shift agreement in each axis was compared using Altman-Bland limits of agreement, Lin's concordance coefficient with Partik's grading schema, and Deming orthogonal bias-weighted correlation methodology. 1019 software-assisted shifts were suggested by US and FM in 39 patients. The 95% limits of agreement in the X, Y and Z axes were ±9.4 mm, ±11.3 mm and ±13.4 mm, respectively. Three-dimensionally, measurements agreed within 13.4 mm in 95% of all paired measures. In all axes, concordance was graded as 'poor' or 'unacceptable'. Deming regression detected proportional bias in both directional axes and three-dimensional vectors. Our data suggest substantial differences between US and FM image-guided measures and subsequent suggested directional shifts. Analysis reveals that the vast majority of
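
    The Altman-Bland limits of agreement used above are simple to reproduce: for paired shifts from the two systems, the 95% limits are the mean difference plus or minus 1.96 times the standard deviation of the differences. A minimal sketch with made-up numbers:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical daily couch shifts (mm) suggested by US and FM localization
us = [2.0, -1.5, 4.0, 0.5, -3.0, 1.0]
fm = [0.5, -2.0, 6.5, -1.0, -0.5, 2.5]
bias, (lo, hi) = limits_of_agreement(us, fm)
print(f"bias = {bias:.2f} mm, 95% LoA = ({lo:.2f}, {hi:.2f}) mm")
```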

  9. Concentrations versus amounts of biomarkers in urine: a comparison of approaches to assess pyrethroid exposure

    Directory of Open Access Journals (Sweden)

    Bouchard Michèle

    2008-11-01

    Full Text Available Abstract Background Assessment of human exposure to non-persistent pesticides such as pyrethroids is often based on urinary biomarker measurements. Urinary metabolite levels of these pesticides are usually reported as volume-weighted concentrations or creatinine-adjusted concentrations measured in spot urine samples. It is known that these units are subject to intra- and inter-individual variations. This research aimed at studying the impact of these variations on the assessment of pyrethroid absorbed doses at the individual and population levels. Methods Using data obtained from various adult and infantile populations, the intra- and inter-individual variability in the urinary flow rate and creatinine excretion rate was first estimated. Individual absorbed doses were then calculated using volume-weighted or creatinine-adjusted concentrations according to published approaches and compared to those estimated from the amounts of biomarkers excreted in 15- or 24-h urine collections, the latter serving as the benchmark unit. The effect of the units of measurement (volume-weighted or creatinine-adjusted concentrations or 24-h amounts) on the comparison of pyrethroid biomarker levels between two populations was also evaluated. Results Estimation of daily absorbed doses of permethrin from volume-weighted or creatinine-adjusted concentrations of biomarkers was found to potentially lead to substantial under- or overestimation when compared to doses reconstructed directly from amounts excreted in urine during a given period of time (-70 to +573% and -83 to +167%, respectively). It was also shown that the variability in creatinine excretion rate and urinary flow rate may introduce a bias in the case of between-population comparisons. Conclusion The unit chosen to express biomonitoring data may influence the validity of estimated individual absorbed doses as well as the outcome of between-population comparisons.
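
    The three units compared in the study are related by simple conversions, sketched below for a hypothetical metabolite. All numbers are illustrative, and the paper's actual dose-reconstruction equations also involve toxicokinetic factors not reproduced here.

```python
# Relating the three common ways of expressing a urinary biomarker.
# All numbers are illustrative, not from the study.

conc_ug_per_L = 8.0          # volume-weighted concentration (ug/L, spot sample)
creatinine_g_per_L = 1.2     # creatinine concentration in the same sample
urine_volume_L_24h = 1.6     # total urine volume collected over 24 h

# Creatinine-adjusted concentration (ug/g creatinine)
creat_adjusted = conc_ug_per_L / creatinine_g_per_L

# Amount excreted in 24 h (ug/day), the benchmark unit in the study,
# approximated here by assuming the spot concentration holds for the
# whole collection, which is exactly the assumption the paper questions.
amount_24h = conc_ug_per_L * urine_volume_L_24h

print(f"{creat_adjusted:.2f} ug/g creatinine, {amount_24h:.2f} ug/day")
```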

  10. Comparison of DNA Quantification Methods for Next Generation Sequencing.

    Science.gov (United States)

    Robin, Jérôme D; Ludlow, Andrew T; LaRanger, Ryan; Wright, Woodring E; Shay, Jerry W

    2016-04-06

    Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C; 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library's heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR; ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. DdPCR-Tail is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality.

  11. A comparison of methods for teaching receptive language to toddlers with autism.

    Science.gov (United States)

    Vedora, Joseph; Grandelski, Katrina

    2015-01-01

    The use of a simple-conditional discrimination training procedure, in which stimuli are initially taught in isolation with no other comparison stimuli, is common in early intensive behavioral intervention programs. Researchers have suggested that this procedure may encourage the development of faulty stimulus control during training. The current study replicated previous work that compared the simple-conditional and the conditional-only methods to teach receptive labeling of pictures to young children with autism spectrum disorder. Both methods were effective, but the conditional-only method required fewer sessions to mastery. © Society for the Experimental Analysis of Behavior.

  12. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    Science.gov (United States)

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves on its enhancement effects. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
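
    CegaHE itself is not reproduced in the record, but the HE baseline it modifies is short enough to sketch: build the gray-level histogram, form the cumulative distribution, and remap intensities; gap adjustment then reshapes the spacing between the mapped gray values. The code below is only that classical baseline, not the paper's algorithm.

```python
import numpy as np

def histogram_equalization(img: np.ndarray) -> np.ndarray:
    """Plain histogram equalization for an 8-bit grayscale image.

    This is the classical HE step that CegaHE builds on; the gap
    adjustment described in the paper is not implemented here.
    Assumes the image is not constant.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Example: a synthetic low-contrast image
img = np.clip(np.random.normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
print(img.std(), histogram_equalization(img).std())  # contrast increases
```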

  13. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    Directory of Open Access Journals (Sweden)

    Chung-Cheng Chiu

    2016-06-01

    Full Text Available Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves on its enhancement effects. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods.

  14. Annual Adjustment Factors

    Data.gov (United States)

    Department of Housing and Urban Development — The Department of Housing and Urban Development establishes the rent adjustment factors - called Annual Adjustment Factors (AAFs) - on the basis of Consumer Price...

  15. Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows

    Science.gov (United States)

    Norris, Andrew T.; Hsu, Andrew T.

    1994-01-01

    In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods must model the mean reaction rate. The model commonly used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However, for flows with finite-rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2-air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions were employed to ensure a consistent comparison. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.

  16. Convexity Adjustments for ATS Models

    DEFF Research Database (Denmark)

    Murgoci, Agatha; Gaspar, Raquel M.

    As a result, we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact...

  17. Comparison of Memory Function and MMPI-2 Profile between Post-traumatic Stress Disorder and Adjustment Disorder after a Traffic Accident

    Science.gov (United States)

    Bae, Sung-Man; Hyun, Myoung-Ho

    2014-01-01

    Objective Differential diagnosis between post-traumatic stress disorder (PTSD) and adjustment disorder (AD) is rather difficult, but very important for the assignment of appropriate treatment and prognosis. This study investigated methods to differentiate PTSD and AD. Methods Twenty-five people with PTSD and 24 people with AD were recruited. Memory tests, the Minnesota Multiphasic Personality Inventory 2 (MMPI-2), and Beck's Depression Inventory were administered. Results There were significant decreases in immediate verbal recall and delayed verbal recognition in the participants with PTSD. The reduced memory functions of participants with PTSD were significantly influenced by depressive symptoms. The hypochondriasis, hysteria, psychopathic deviate, paranoia, schizophrenia and post-traumatic stress disorder scales of the MMPI-2 significantly discriminated between the PTSD and AD groups. Conclusion Our results suggest that verbal memory assessments and the MMPI-2 could be useful for discriminating between PTSD and AD. PMID:24851120

  18. Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources

    Directory of Open Access Journals (Sweden)

    Alireza Borhani Dariane

    2009-01-01

    Full Text Available Water resources optimization problems are usually complex and hard to solve using ordinary optimization methods, or at least cannot be solved by them economically. A great number of studies have been conducted in quest of suitable methods capable of handling such problems. In recent years, some new heuristic methods such as genetic and ant algorithms have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of such heuristic methods as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) has been studied for optimizing reservoir operation. The Dez Dam reservoir in Iran was chosen for a case study. The methods were applied and compared using short-term (one year) and long-term models. Comparison of the results showed that GA outperforms both dynamic programming (DP) and ACO in finding true global optimum solutions and operating rules.

  19. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    Science.gov (United States)

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.

  20. Comparison of two solution ways of district heating control: Using analysis methods, using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Balate, J.; Sysala, T. [Technical Univ., Zlin (Czech Republic). Dept. of Automation and Control Technology

    1997-12-31

    The District Heating Systems - DHS (Centralized Heat Supply Systems - CHSS) are being developed in large cities in accordance with their growth. The systems are expanded by enlarging the heat distribution networks to consumers while gradually interconnecting the newly built heat sources. The heat is distributed to the consumers through circular networks that are supplied by several cooperating heat sources, that is, by combined heat and power plants and heating plants. The complicated process of heat production technology and supply requires a systems approach when designing the concept of automated control. The paper deals with a comparison of the solution using analysis methods with that using artificial intelligence methods. (orig.)

  1. [Risk adjusted assessment of quality of perinatal centers - results of perinatal/neonatal quality surveillance in Saxonia].

    Science.gov (United States)

    Koch, R; Gmyrek, D; Vogtmann, Ch

    2005-12-01

    The weak point of country-wide perinatal/neonatal quality surveillance as a tool for evaluating the achievements of an individual clinic is that it ignores interhospital differences in the case mix of patients. Therefore, that approach cannot provide reliable benchmarking. The aim was to adjust the results of quality assessment of different hospitals according to the risk profile of their patients by multivariate analysis. The perinatal/neonatal database of 12,783 newborns of the Saxonian quality surveillance from 1998 to 2000 was analyzed. 4 relevant quality indicators of newborn outcome -- a) severe intraventricular hemorrhage in preterm infants [...] 2500 g and d) hypoxic-ischemic encephalopathy -- were targeted to find specific risk predictors, considering 26 risk factors. A logistic regression model was used to develop the risk predictors. Risk predictors for the 4 quality indicators could be described by 3 - 9 of the 26 analyzed risk factors. The AUC (ROC) values for these quality indicators were 82, 89, 89 and 89 %, which indicates their reliability. Using the new specific predictors to calculate risk-adjusted incidence rates of the quality indicators yielded some remarkable changes: the apparent differences in the outcome criteria of the analyzed hospitals were found to be much less pronounced. The application of the proposed method for risk adjustment of quality indicators makes it possible to perform a more objective comparison of neonatal outcome criteria between different hospitals or regions.

  2. Comparison of Potentiometric and Gravimetric Methods for Determination of O/U Ratio

    International Nuclear Information System (INIS)

    Farida; Windaryati, L; Putro Kasino, P

    1998-01-01

    A comparison of the determination of the O/U ratio using potentiometric and gravimetric methods has been carried out. Both methods are simple and economical and have high precision and accuracy. Determination of the O/U ratio for UO2 powder by potentiometry is carried out by adopting the Davies-Gray method. This technique is based on the redox reactions of uranium species such as U(IV) and U(VI). In the gravimetric method, the UO2 powder sample is calcined at a temperature of 900 °C, and the weight of the sample is measured after the calcination process. Student's t-test shows that there is no significant difference between the results of the two methods. However, for low concentrations in the sample the potentiometric method has higher precision and accuracy compared to the gravimetric method. The O/U ratios obtained are 2.00768 ± 0.00170 for the potentiometric method and 2.01089 ± 0.02395 for the gravimetric method.

  3. Comparison of seven optical clearing methods for mouse brain

    Science.gov (United States)

    Wan, Peng; Zhu, Jingtan; Yu, Tingting; Zhu, Dan

    2018-02-01

    Recently, a variety of tissue optical clearing techniques have been developed to reduce light scattering for deeper imaging and three-dimensional reconstruction of tissue structures. Combined with optical imaging techniques and diverse labeling methods, these clearing methods have significantly promoted the development of neuroscience. However, most of the protocols were proposed for a specific tissue type. Though some comparison results exist, the clearing methods covered are limited and the evaluation indices lack uniformity, which makes it difficult to select a best-fit protocol for clearing in practical applications. Hence, it is necessary to systematically assess and compare these clearing methods. In this work, we evaluated the performance of seven typical clearing methods, including 3DISCO, uDISCO, SeeDB, ScaleS, ClearT2, CUBIC and PACT, on mouse brain samples. First, we compared the clearing capability on both brain slices and whole brains by observing brain transparency. Further, we evaluated the fluorescence preservation and the increase of imaging depth. The results showed that 3DISCO, uDISCO and PACT exhibited excellent clearing capability on mouse brains, ScaleS and SeeDB rendered moderate transparency, while ClearT2 was the worst. Among these methods, ScaleS was the best at fluorescence preservation, and PACT achieved the highest increase of imaging depth. This study is expected to provide an important reference for users in choosing the most suitable brain optical clearing method.

  4. A comparison of Nodal methods in neutron diffusion calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tavron, Barak [Israel Electric Company, Haifa (Israel) Nuclear Engineering Dept. Research and Development Div.

    1996-12-01

    The nuclear engineering department at IEC uses three neutron diffusion codes based on nodal methods for reactor analysis. The codes, GNOMER, ADMARC and NOXER, solve the neutron diffusion equation to obtain flux and power distributions in the core. The resulting flux distributions are used for the fuel cycle analysis and for fuel reload optimization. This work presents a comparison of the various nodal methods employed in the above codes. Nodal methods (also called coarse-mesh methods) have been designed to solve problems that contain relatively coarse areas of homogeneous composition. In the nodal method, the parts of the equation that represent the state in the homogeneous area are solved analytically while, according to various assumptions and continuity requirements, a general solution is sought. Thus the efficiency of the method for this kind of problem is very high compared with the finite element and finite difference methods. On the other hand, using this method one can get only approximate information about the node vicinity (or coarse-mesh area, usually a fuel assembly of about 20 cm size). These characteristics of the nodal method make it suitable for fuel cycle analysis and reload optimization. This analysis requires many subsequent calculations of the flux and power distributions for the fuel assemblies, while there is no need for a detailed distribution within the assembly. For obtaining a detailed distribution within the assembly, methods of power reconstruction may be applied. However, the homogenization of fuel assembly properties required for the nodal method may cause difficulties when applied to fuel assemblies with many absorber rods, due to the strong heterogeneity of neutron properties within the assembly. (author).

  5. BASACF, Integral Neutron Spectra Adjustment and Dosimetry

    International Nuclear Information System (INIS)

    Tichy, Milos

    1996-01-01

    1 - Description of program or function: Adjustment of a neutron spectrum based on integral detector measurements and calculation of an integral dosimetric quantity (integral flux, d.p.a., dose equivalent) and its variance. The program requires measured data (activities and their covariance matrix) and a priori information (spectrum, dosimetry cross sections, integral quantity conversion factor and their covariance matrices). All a priori covariance matrices can be read in from a file prepared by some other code or can be generated by one of three different methods (by subroutines included in the program). A subroutine which can normalize the a priori flux to measured data is also included. The program also provides adjusted dosimetry cross sections (with covariance matrix), so it can be used for adjustment of cross sections (or response functions of, e.g., Bonner balls) by measurements in well-known neutron spectra. 2 - Method of solution: Bayes' theorem on conditional probability is applied to the linearized relation between activities, dosimetry cross sections and flux. All probability distributions are supposed to be normal, and this supposition leads to minimization of the same functional as in the least-squares method (STAY'SL). This task is solved by a covariance filter method which avoids any matrix inversion and is numerically robust and stable. 3 - Restrictions on the complexity of the problem: This version can use 45 energy groups and 5 detectors and occupies 310 kB of main memory. This restriction can be modified according to available memory. The covariance matrix of activities is supposed to be diagonal. A solution is produced for any set of input data, but in the case of inconsistent data, when measured activities do not match the a priori flux, the solution is not very meaningful
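
    The functional minimized here is the standard generalized-least-squares (STAY'SL-type) form, whose textbook solution is compact enough to sketch. Note two simplifications relative to the program described above: BASACF's covariance-filter implementation deliberately avoids the matrix inversion used below, and the cross-section uncertainties it propagates are folded into the activity covariance here for brevity.

```python
import numpy as np

def gls_adjust(phi0, M_phi, S, V, a):
    """STAY'SL-type spectrum adjustment (textbook GLS form).

    phi0  : a priori group fluxes, shape (n,)
    M_phi : a priori flux covariance, shape (n, n)
    S     : dosimetry cross sections, one row per detector, shape (m, n)
    V     : covariance of measured activities, shape (m, m)
    a     : measured activities, shape (m,)
    Returns the adjusted flux and its reduced covariance.
    """
    K = M_phi @ S.T @ np.linalg.inv(S @ M_phi @ S.T + V)  # gain matrix
    phi_adj = phi0 + K @ (a - S @ phi0)   # pull flux toward measurements
    M_adj = M_phi - K @ S @ M_phi         # posterior covariance shrinks
    return phi_adj, M_adj
```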

  6. Performance evaluation of inpatient service in Beijing: a horizontal comparison with risk adjustment based on Diagnosis Related Groups.

    Science.gov (United States)

    Jian, Weiyan; Huang, Yinmin; Hu, Mu; Zhang, Xiumei

    2009-04-30

    The medical performance evaluation, which provides a basis for rational decision-making, is an important part of medical service research. Current progress with health services reform in China is far from satisfactory, without sufficient regulation. To achieve better progress, an effective tool for evaluating medical performance needs to be established. In view of this, this study attempted to develop such a tool appropriate for the Chinese context. Data were collected from the front pages of medical records (FPMR) of all large general public hospitals (21 hospitals) in the third and fourth quarters of 2007. Locally developed Diagnosis Related Groups (DRGs) were introduced as a tool for risk adjustment, and performance evaluation indicators were established: the Charge Efficiency Index (CEI), Time Efficiency Index (TEI) and inpatient mortality of low-risk group cases (IMLRG), reflecting work efficiency and medical service quality, respectively. Using these indicators, the performance of inpatient services was compared horizontally among hospitals. The Case-mix Index (CMI) was used to adjust the efficiency indices and produce the adjusted CEI (aCEI) and adjusted TEI (aTEI). Poisson distribution analysis was used to test the statistical significance of the IMLRG differences between hospitals. Using the aCEI, aTEI and IMLRG scores for the 21 hospitals, Hospitals A and C had relatively good overall performance because their medical charges were lower, length of stay (LOS) shorter and IMLRG smaller. The performance of Hospitals P and Q was the worst due to their relatively high charge level, long LOS and high IMLRG. Various performance problems also existed in the other hospitals. It is possible to develop an accurate and easy to run performance evaluation system using Case-Mix as the tool for risk adjustment, choosing indicators close to consumers and managers, and utilizing routine report forms as the basic information source. To keep such a system running effectively, it is necessary to

  7. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    KRSTIVOJEVIC, J. P.

    2015-08-01

    Full Text Available The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were obtained by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. The generated CT secondary currents were used to test the algorithm for REF protection based on phase comparison in the time domain. On the basis of the obtained results, a method of adjusting the triggering threshold in order to ensure safe operation during transients, and thereby improve the algorithm's security, has been proposed. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed adjustment of the triggering threshold in the algorithm based on phase comparison in the time domain.

  8. Case-mix adjustment of consumer reports about managed behavioral health care and health plans.

    Science.gov (United States)

    Eselius, Laura L; Cleary, Paul D; Zaslavsky, Alan M; Huskamp, Haiden A; Busch, Susan H

    2008-12-01

    To develop a model for adjusting patients' reports of behavioral health care experiences on the Experience of Care and Health Outcomes (ECHO) survey to allow for fair comparisons across health plans. Survey responses from 4,068 individuals enrolled in 21 managed behavioral health plans who received behavioral health care within the previous year (response rate = 48 percent). Potential case-mix adjustors were evaluated by combining information about their predictive power and the amount of within- and between-plan variability. Changes in plan scores and rankings due to case-mix adjustment were quantified. The final case-mix adjustment model included self-reported mental health status, self-reported general health status, alcohol/drug treatment, age, education, and race/ethnicity. The impact of adjustment on plan report scores was modest, but large enough to change some plan rankings. Adjusting plan report scores on the ECHO survey for differences in patient characteristics had modest effects, but still may be important to maintain the credibility of patient reports as a quality metric. Differences between those with self-reported fair/poor health compared with those in excellent/very good health varied by plan, suggesting quality differences associated with health status and underscoring the importance of collecting quality information.

  9. Quantitative comparison of analysis methods for spectroscopic optical coherence tomography: reply to comment

    NARCIS (Netherlands)

    Bosschaart, Nienke; van Leeuwen, Ton; Aalders, Maurice C.G.; Faber, Dirk

    2014-01-01

    We reply to the comment by Kraszewski et al on “Quantitative comparison of analysis methods for spectroscopic optical coherence tomography.” We present additional simulations evaluating the proposed window function. We conclude that our simulations show good qualitative agreement with the results of

  10. Comparison of Flood Frequency Analysis Methods for Ungauged Catchments in France

    Directory of Open Access Journals (Sweden)

    Jean Odry

    2017-09-01

    Full Text Available The objective of flood frequency analysis (FFA is to associate flood intensity with a probability of exceedance. Many methods are currently employed for this, ranging from statistical distribution fitting to simulation approaches. In many cases the site of interest is actually ungauged, and a regionalisation scheme has to be associated with the FFA method, leading to a multiplication of the number of possible methods available. This paper presents the results of a wide-range comparison of FFA methods from statistical and simulation families associated with different regionalisation schemes based on regression, or spatial or physical proximity. The methods are applied to a set of 1535 French catchments, and a k-fold cross-validation procedure is used to consider the ungauged configuration. The results suggest that FFA from the statistical family largely relies on the regionalisation step, whereas the simulation-based method is more stable regarding regionalisation. This conclusion emphasises the difficulty of the regionalisation process. The results are also contrasted depending on the type of climate: the Mediterranean catchments tend to aggravate the differences between the methods.

  11. Gauge-adjusted rainfall estimates from commercial microwave links

    Directory of Open Access Journals (Sweden)

    M. Fencl

    2017-01-01

    experimental layouts of ground truth from rain gauges (RGs) with different spatial and temporal resolutions. The results suggest that CMLs adjusted by RGs with a temporal aggregation of up to 1 h (i) provide precise high-resolution QPEs (relative error < 7 %, Nash–Sutcliffe efficiency coefficient > 0.75) and (ii) that the combination of both sensor types clearly outperforms each individual monitoring system. Unfortunately, adjusting CML observations to RGs with longer aggregation intervals of up to 24 h has drawbacks. Although it substantially reduces bias, it unfavourably smoothes out rainfall peaks of high intensities, which is undesirable for stormwater management. A similar, but less severe, effect occurs due to spatial averaging when CMLs are adjusted to remote RGs. Nevertheless, even here, adjusted CMLs perform better than RGs alone. Furthermore, we provide first evidence that the joint use of multiple CMLs together with RGs also reduces bias in their QPEs. In summary, we believe that our adjustment method has great potential to improve the space–time resolution of current urban rainfall monitoring networks. Nevertheless, future work should aim to better understand the reason for the observed systematic error in QPEs from CMLs.

  12. Comparison of a Full Food-Frequency Questionnaire with the Three-Day Unweighted Food Records in Young Polish Adult Women: Implications for Dietary Assessment

    Science.gov (United States)

    Kowalkowska, Joanna; Slowinska, Malgorzata A.; Slowinski, Dariusz; Dlugosz, Anna; Niedzwiedzka, Ewa; Wadolowska, Lidia

    2013-01-01

    The food frequency questionnaire (FFQ) and the food record (FR) are among the most common methods used in dietary research. It is important to know whether it is possible to use both methods simultaneously in dietary assessment and prepare a single, comprehensive interpretation. The aim of this study was to compare the energy and nutritional value of diets determined by the FFQ and by three-day food records of young women. The study involved 84 female students aged 21–26 years (mean of 22.2 ± 0.8 years). Completing the FFQ was preceded by obtaining unweighted food records covering three consecutive days. The energy and nutritional value of diets was assessed for both methods (FFQ-crude, FR-crude). Data obtained for FFQ-crude were adjusted with a beta-coefficient equaling 0.5915 (FFQ-adjusted) and by regression analysis (FFQ-regressive). The FFQ-adjusted values were calculated from the FR-crude/FFQ-crude ratio of mean daily energy intake. FFQ-regressive values were calculated for energy and each nutrient separately using a regression equation including FFQ-crude and FR-crude as covariates. For FR-crude and FFQ-crude the energy value of diets was standardized to 2000 kcal (FR-standardized, FFQ-standardized). Methods of statistical comparison included a dependent-samples t-test, a chi-square test, and the Bland-Altman method. The mean energy intake in FFQ-crude was significantly higher than in FR-crude (2740.5 kcal vs. 1621.0 kcal, respectively). For FR-standardized and FFQ-standardized, significant differences were found in the mean intake of 18 out of 31 nutrients, for FR-crude and FFQ-adjusted in 13 out of 31 nutrients, and for FR-crude and FFQ-regressive in 11 out of 31 nutrients. The Bland-Altman method showed an overestimation of energy and nutrient intake by FFQ-crude in comparison to FR-crude, e.g., total protein was overestimated by 34.7 g/day (95% Confidence Interval, CI: −29.6, 99.0 g/day) and fat by 48.6 g/day (95% CI: −36.4, 133.6 g/day). After regressive transformation of FFQ, the

  13. Investigation of Industrialised Building System Performance in Comparison to Conventional Construction Method

    Directory of Open Access Journals (Sweden)

    Othuman Mydin M.A.

    2014-03-01

    Full Text Available Conventional construction methods are still widely practised, although many studies have indicated that they are less effective than the IBS construction method. The emergence of the IBS has added to the range of techniques available within the construction industry. This study is aimed at comparing the two construction approaches. Case studies were conducted at four sites in the state of Penang, Malaysia. Two projects were IBS-based while the remaining two deployed the conventional method of construction. Based on an analysis of the results, it can be concluded that the IBS approach has more to offer than the conventional method. Among these advantages are shorter construction periods, reduced overall costs, lower labour requirements, better site conditions and the production of higher-quality components.

  14. Application of adjustment calculus in the nodeless Trefftz method for a problem of two-dimensional temperature field of the boiling liquid flowing in a minichannel

    Directory of Open Access Journals (Sweden)

    Hożejowska Sylwia

    2014-03-01

    Full Text Available The paper presents an application of the nodeless Trefftz method to calculate the temperature of the heating foil and the insulating glass pane during continuous flow of a refrigerant along a vertical minichannel. The numerical computations refer to an experiment in which the refrigerant (FC-72) enters a rectangular minichannel under controlled pressure and temperature. Initially its temperature is below the boiling point. During the flow it is heated by a heating foil. Thermosensitive liquid crystals make it possible to obtain the two-dimensional temperature field in the foil. Since the nodeless Trefftz method performs very well on such problems, it was chosen as the numerical method to approximate the two-dimensional temperature distributions in the protecting glass and the heating foil. Since the temperature of the refrigerant was known, it was also possible to evaluate the heat transfer coefficient at the foil-refrigerant interface. For the expected improvement of the numerical results, the nodeless Trefftz method was combined with adjustment calculus. Adjustment calculus allowed the measurements to be smoothed and the measurement errors to be decreased. As in the case of the measurement errors, the error of the heat transfer coefficient decreased.

  15. Double Compton effect: a new method of detection

    International Nuclear Information System (INIS)

    Cafagne, A.

    1978-01-01

    In this paper, a new method of observation of the double Compton effect is described. The proposed method is based on the use of a sum-coincidence circuit whose resulting pulse is in fast coincidence (ζ = 1.7×10⁻⁸ s) with the pulses (≈10⁻⁹ s) from both scintillation detectors used to measure the energy of the coincident scattered gamma-rays. By means of this procedure, the contribution of pulses from the sum-coincidence circuit due to random gamma-rays is eliminated. The spectra were registered in an Ortec model 6240 multichannel analyser using a further coincidence circuit to eliminate non-coincident pulses. The gate is opened by a rectangular pulse which lasts 10 ns, and an adjustable delayed-pulse generator adjusts its time position to coincide with the top of the sum-coincidence pulses. The adjustable delayed-pulse generator also compensates for the finite propagation time of the pulses in the circuits. Through this experimental technique it was possible to measure simultaneously the energy of each coincident photon, which allowed an excellent comparison given the agreement found between the obtained results and the theory of Mandl and Skyrme. (Author) [pt]

  16. Bilateral comparison between PTB and ENEA to check the performance of a commercial TDCR system for activity measurements

    International Nuclear Information System (INIS)

    Kossert, Karsten; Capogni, Marco; Nähle, Ole J.

    2014-01-01

    The only commercial TDCR counter from Hidex Oy (Finland), comprising three photomultiplier tubes, was tested at the two National Metrology Institutes (NMIs) PTB and ENEA. To this end, the two NMIs each purchased a Hidex 300 SL TDCR counter (METRO version) and carried out various tests at their laboratories. In addition, the two institutions agreed to organize a bilateral comparison in order to acquire information on the reproducibility of the results obtained with the counters. To achieve this, PTB prepared several 89Sr liquid scintillation samples, which were first measured in various counters at PTB and then shipped to ENEA for comparative measurements. The aim of this paper is to summarize the findings on the counter characteristics and adjustments. In addition, the results of the bilateral comparison between PTB and ENEA are presented, and the results from various commercial counters using the CIEMAT/NIST efficiency tracing and the TDCR method are discussed. - Highlights: • The TDCR counter from Hidex Oy was tested at PTB and ENEA. • The studies comprised linearity checks and investigation of adjustments. • A bilateral 89Sr comparison was organized to compare two Hidex counters

  17. Nonlinear predictive control for adaptive adjustments of deep brain stimulation parameters in basal ganglia-thalamic network.

    Science.gov (United States)

    Su, Fei; Wang, Jiang; Niu, Shuangxia; Li, Huiyan; Deng, Bin; Liu, Chen; Wei, Xile

    2018-02-01

    The efficacy of deep brain stimulation (DBS) for Parkinson's disease (PD) depends in part on the post-operative programming of stimulation parameters. Closed-loop stimulation is one method to realize frequent adjustment of these parameters. This paper introduces the nonlinear predictive control method for the online adjustment of DBS amplitude and frequency. The approach was tested in a computational model of the basal ganglia-thalamic network. An autoregressive Volterra model was used to identify the process model based on physiological data. Simulation results illustrate the efficiency of the closed-loop stimulation methods (amplitude adjustment and frequency adjustment) in improving the relay reliability of thalamic neurons compared with the PD state. Moreover, compared with constant 130 Hz DBS, the closed-loop stimulation methods can significantly reduce energy consumption. Analysis of the inter-spike-interval (ISI) distributions of basal ganglia neurons showed that the network activity evoked by the closed-loop frequency-adjustment stimulation was closer to the normal state. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Full Text Available Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land use change, or climate shifts. A plethora of methods is available for describing the hydraulic geometry of alluvial rivers in regime. However, comparisons of these methods using the same set of data are lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar's method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and 'ME and MEDR' give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses lacking bank stability criteria are inappropriate for use is shown to be false, since both MFN and 'ME and MEDR' lack bank stability criteria. We also find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and 'ME and MEDR'.

  19. Metric-adjusted skew information

    DEFF Research Database (Denmark)

    Liang, Cai; Hansen, Frank

    2010-01-01

    We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 < p ≤ 2 of the Wigner-Yanase-Dyson skew information is a special case of (unbounded) metric-adjusted skew information.

  20. ADJUSTABLE CHIP HOLDER

    DEFF Research Database (Denmark)

    2009-01-01

    An adjustable microchip holder for holding a microchip is provided having a plurality of displaceable interconnection pads for connecting the connection holes of a microchip with one or more external devices or equipment. The adjustable microchip holder can fit different sizes of microchips...

  1. An interlaboratory comparison of methods for measuring rock matrix porosity

    International Nuclear Information System (INIS)

    Rasilainen, K.; Hellmuth, K.H.; Kivekaes, L.; Ruskeeniemi, T.; Melamed, A.; Siitari-Kauppi, M.

    1996-09-01

    An interlaboratory comparison study was conducted for the available Finnish methods of rock matrix porosity measurements. The aim was first to compare different experimental methods for future applications, and second to obtain quality assured data for the needs of matrix diffusion modelling. Three different versions of water immersion techniques, a tracer elution method, a helium gas through-diffusion method, and a C-14-PMMA method were tested. All methods selected for this study were established experimental tools in the respective laboratories, and they had already been individually tested. Rock samples for the study were obtained from a homogeneous granitic drill core section from the natural analogue site at Palmottu. The drill core section was cut into slabs that were expected to be practically identical. The subsamples were then circulated between the different laboratories using a round robin approach. The circulation was possible because all methods were non-destructive, except the C-14-PMMA method, which was always the last method to be applied. The possible effect of drying temperature on the measured porosity was also preliminarily tested. These measurements were done in the order of increasing drying temperature. Based on the study, it can be concluded that all methods are comparable in their accuracy. The selection of methods for future applications can therefore be based on practical considerations. Drying temperature seemed to have very little effect on the measured porosity, but a more detailed study is needed for definite conclusions. (author) (4 refs.)

  2. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
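
    The kind of simulation described is easy to reproduce in outline: draw many pairs of small samples with and without a true effect, apply the comparison method, and count false positives and false negatives. The sketch below is a minimal version of that idea; normal populations, a unit effect size and a two-sample t-test are assumptions of the sketch, not a summary of the paper's full design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def error_rates(n=9, effect=1.0, trials=20_000, alpha=0.05):
    """Estimate Type I and Type II error of the two-sample t-test."""
    type1 = type2 = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        same = rng.normal(0.0, 1.0, n)        # no true effect
        exposed = rng.normal(effect, 1.0, n)  # true effect present
        if stats.ttest_ind(control, same).pvalue < alpha:
            type1 += 1   # false positive
        if stats.ttest_ind(control, exposed).pvalue >= alpha:
            type2 += 1   # false negative
    return type1 / trials, type2 / trials

# n = 9 per group: Type I stays near alpha; Type II depends on effect size
print(error_rates(n=9))
```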

  3. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculation, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor that is extrapolated from high to low energies or to unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capabilities of both Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.

  4. Comparison of deterministic and Monte Carlo methods in shielding design

    International Nuclear Information System (INIS)

    Oliveira, A. D.; Oliveira, C.

    2005-01-01

    In shielding calculation, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor that is extrapolated from high to low energies or to unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capabilities of both Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions. (authors)

  5. Constraint propagation of C2-adjusted formulation: Another recipe for robust ADM evolution system

    International Nuclear Information System (INIS)

    Tsuchiya, Takuya; Yoneda, Gen; Shinkai, Hisa-aki

    2011-01-01

    With the aim of constructing an evolution system robust against numerical instability for integrating the Einstein equations, we propose a new formulation that adjusts the ADM evolution equations with constraints. We apply an adjusting method proposed by Fiske (2004) which uses the norm of the constraints, C². One of the advantages of this method is that the effective signature of the adjusted terms (Lagrange multipliers) for constraint-damping evolution is predetermined. We demonstrate this fact by showing the eigenvalues of the constraint propagation equations. We also perform numerical tests of this adjusted evolution system using polarized Gowdy-wave propagation, which show more robust evolution against violation of the constraints than the standard ADM formulation.
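
    Schematically, a Fiske-type C² adjustment adds to each evolution equation a term proportional to the functional gradient of the constraint norm. The display below is a paraphrase of the general recipe under the usual notation (Hamiltonian constraint H, momentum constraint M_i), not the paper's exact equations:

```latex
\partial_t u^a \;=\; f^a(u,\partial u)\;-\;\kappa^{ab}\,\frac{\delta C^2}{\delta u^b},
\qquad
C^2 \;\equiv\; \int \left(\mathcal{H}^2 + \mathcal{M}_i\,\mathcal{M}^i\right) dV
```

    For a positive-definite choice of the multipliers κ, the added term is dissipative for C², which is why the effective signature needed for constraint damping is known in advance, the advantage noted in the abstract.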

  6. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and the descriptor it uses is vital. How moments perform as descriptors had not previously been discussed in terms of logo recognition, so it is unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we investigate the relations between moments and logos under different transforms, i.e., which moments are suited to logos subjected to which transforms. The open datasets employed are from the University of Maryland. The moment-based comparisons cover logos with noise, rotation, scaling, and combined rotation and scaling.
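
    As an illustration of a moment descriptor of the kind compared in such studies, the sketch below computes Hu's seven invariant moments with OpenCV; the log transform is a common normalization, the file name is hypothetical, and nothing here is specific to the paper's datasets.

```python
import cv2
import numpy as np

def hu_descriptor(gray: np.ndarray) -> np.ndarray:
    """Seven Hu moment invariants of a grayscale logo image.

    Hu moments are invariant to translation, scale and rotation,
    which is why they are candidate descriptors for logo recognition.
    """
    m = cv2.moments(gray)
    hu = cv2.HuMoments(m).flatten()
    # Log scaling compresses the huge dynamic range of the raw invariants
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
print(hu_descriptor(logo))
```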

  7. Comparison of selected methods for the enumeration of fecal coliforms and Escherichia coli in shellfish.

    Science.gov (United States)

    Grabow, W O; De Villiers, J C; Schildhauer, C I

    1992-09-01

    In a comparison of five selected methods for the enumeration of fecal coliforms and Escherichia coli in naturally contaminated and sewage-seeded mussels (Choromytilus spp.) and oysters (Ostrea spp.), a spread-plate procedure with mFC agar without rosolic acid and preincubation proved the method of choice for routine quality assessment.

  8. Effect of Internet peer-support groups on psychosocial adjustment to cancer

    DEFF Research Database (Denmark)

    Høybye, Mette Terp; Dalton, S O; Deltour, I

    2010-01-01

    BACKGROUND: We conducted a randomised study to investigate whether providing a self-guided Internet support group to cancer patients affected mood disturbance and adjustment to cancer. METHODS: Baseline and 1-, 6- and 12-month assessments were conducted from 2004 to 2006 at a national rehabilitation centre in Denmark. A total of 58 rehabilitation course weeks including 921 survivors of various cancers were randomly assigned to a control or an intervention group by cluster randomisation. The intervention was a lecture on the use of the Internet for support and information followed by participation in an Internet support group. Outcome measures included self-reported mood disturbance, adjustment to cancer and self-rated health. Differences in scores were compared between the control group and the intervention group. RESULTS: The effect of the intervention on mood disturbance and adjustment...

  9. Comparison of optical methods for surface roughness characterization

    International Nuclear Information System (INIS)

    Feidenhans’l, Nikolaj A; Hansen, Poul-Erik; Madsen, Morten H; Petersen, Jan C; Pilný, Lukáš; Bissacco, Giuliano; Taboryski, Rafael

    2015-01-01

    We report a study of the correlation between three optical methods for characterizing surface roughness: a laboratory scatterometer measuring the bi-directional reflection distribution function (BRDF instrument), a simple commercial scatterometer (rBRDF instrument), and a confocal optical profiler. For each instrument, the effective range of spatial surface wavelengths is determined, and the common bandwidth is used when comparing the evaluated roughness parameters. The compared roughness parameters are: the root-mean-square (RMS) profile deviation (Rq), the RMS profile slope (Rdq), and the variance of the scattering angle distribution (Aq). The twenty-two investigated samples were manufactured with several methods in order to obtain a suitable diversity of roughness patterns. Our study shows a one-to-one correlation of both the Rq and the Rdq roughness values when obtained with the BRDF and the confocal instruments, if the common bandwidth is applied. Likewise, a correlation is observed when determining the Aq value with the BRDF and the rBRDF instruments. Furthermore, we show that it is possible to determine the Rq value from the Aq value by applying a simple transfer function derived from the instrument comparisons. The presented method is validated for surfaces with predominantly 1D roughness, i.e. consisting of parallel grooves of various periods, and a reflectance similar to stainless steel. The Rq values are predicted with an accuracy of 38% at the 95% confidence interval. (paper)
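
    The two profile parameters compared above have simple discrete definitions, sketched here for a sampled 1D profile. The bandwidth limiting that the paper stresses (restricting all instruments to a common range of spatial wavelengths) would be an additional filtering step not shown.

```python
import numpy as np

def roughness_params(z_um: np.ndarray, dx_um: float):
    """RMS profile deviation Rq and RMS profile slope Rdq of a 1D profile.

    z_um  : height samples (micrometres); dx_um : sample spacing.
    The mean line is removed before computing Rq, per the usual definition.
    """
    z = z_um - z_um.mean()             # remove the mean line
    rq = np.sqrt(np.mean(z ** 2))      # RMS height deviation
    slope = np.gradient(z, dx_um)      # local profile slope dz/dx
    rdq = np.sqrt(np.mean(slope ** 2)) # RMS slope
    return rq, rdq

z = np.sin(np.linspace(0, 20 * np.pi, 2000))  # toy sinusoidal "grooves"
print(roughness_params(z, dx_um=0.5))
```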

  10. Case mix adjustment of health outcomes, resource use and process indicators in childbirth care: a register-based study.

    Science.gov (United States)

    Mesterton, Johan; Lindgren, Peter; Ekenberg Abreu, Anna; Ladfors, Lars; Lilja, Monica; Saltvedt, Sissel; Amer-Wåhlin, Isis

    2016-05-31

    Unwarranted variation in care practice and outcomes has gained attention and inter-hospital comparisons are increasingly being used to highlight and understand differences between hospitals. Adjustment for case mix is a prerequisite for meaningful comparisons between hospitals with different patient populations. The objective of this study was to identify and quantify maternal characteristics that impact a set of important indicators of health outcomes, resource use and care process and which could be used for case mix adjustment of comparisons between hospitals. In this register-based study, 139 756 deliveries in 2011 and 2012 were identified in regional administrative systems from seven Swedish regions, which together cover 67 % of all deliveries in Sweden. Data were linked to the Medical birth register and Statistics Sweden's population data. A number of important indicators in childbirth care were studied: Caesarean section (CS), induction of labour, length of stay, perineal tears, haemorrhage > 1000 ml and post-partum infections. Sociodemographic and clinical characteristics deemed relevant for case mix adjustment of outcomes and resource use were identified based on previous literature and based on clinical expertise. Adjustment using logistic and ordinary least squares regression analysis was performed to quantify the impact of these characteristics on the studied indicators. Almost all case mix factors analysed had an impact on CS rate, induction rate and length of stay and the effect was highly statistically significant for most factors. Maternal age, parity, fetal presentation and multiple birth were strong predictors of all these indicators but a number of additional factors such as born outside the EU, body mass index (BMI) and several complications during pregnancy were also important risk factors. A number of maternal characteristics had a noticeable impact on risk of perineal tears, while the impact of case mix factors was less pronounced for
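
    One way to operationalize the adjustment described above is indirect standardization: fit a patient-level logistic regression of the indicator (e.g., Caesarean section) on the case-mix factors, then scale each hospital's observed rate by its observed-to-expected ratio. The sketch below is only an illustration of that idea, not the study's code; the column names and the use of scikit-learn are assumptions.

    ```python
    # Illustrative sketch of indirect standardization for a binary indicator
    # (e.g., Caesarean section) via logistic regression; not the study's code.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def case_mix_adjusted_rates(df: pd.DataFrame, outcome: str,
                                case_mix_cols: list, hospital_col: str) -> pd.Series:
        """Return risk-adjusted rates per hospital (indirect standardization)."""
        X = pd.get_dummies(df[case_mix_cols], drop_first=True).to_numpy(dtype=float)
        y = df[outcome].to_numpy()
        model = LogisticRegression(max_iter=1000).fit(X, y)
        df = df.assign(expected=model.predict_proba(X)[:, 1])
        grouped = df.groupby(hospital_col).agg(observed=(outcome, "mean"),
                                               expected=("expected", "mean"))
        # Observed/expected ratio times the overall rate gives the adjusted rate.
        return grouped["observed"] / grouped["expected"] * df[outcome].mean()
    ```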

  11. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    Science.gov (United States)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
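
    For orientation, the energy-balance system at the heart of the radiosity method can be written as B = E + diag(1 − α) F B and solved directly as a linear system. The sketch below uses made-up form factors, absorption coefficients and source terms purely for illustration; it is not taken from the work described above.

    ```python
    # Minimal radiosity-style energy balance: B = E + diag(1 - alpha) @ F @ B,
    # solved as (I - diag(1 - alpha) @ F) B = E. All values are illustrative.
    import numpy as np

    F = np.array([[0.0, 0.6, 0.4],      # form factors F[i, j]: fraction of energy
                  [0.5, 0.0, 0.5],      # leaving patch i that reaches patch j
                  [0.4, 0.6, 0.0]])
    alpha = np.array([0.2, 0.3, 0.1])   # absorption coefficient of each patch
    E = np.array([1.0, 0.0, 0.0])       # source energy injected at patch 0

    A = np.eye(3) - np.diag(1.0 - alpha) @ F
    B = np.linalg.solve(A, E)           # steady-state energy leaving each patch
    print(B)
    ```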

  12. Unit Root Properties of Seasonal Adjustment and Related Filters: Special Cases

    Directory of Open Access Journals (Sweden)

    Bell William.R.

    2017-03-01

    Full Text Available Bell (2012) catalogued unit root factors contained in linear filters used in seasonal adjustment (model-based or from the X-11 method) but noted that, for model-based seasonal adjustment, special cases could arise where filters could contain more unit root factors than was indicated by the general results. This article reviews some special cases that occur with canonical ARIMA model-based adjustment in which, with some commonly used ARIMA models, the symmetric seasonal filters contain two extra nonseasonal differences (i.e., they include an extra (1 − B)(1 − F)). This increases by two the degree of polynomials in time that are annihilated by the seasonal filter and reproduced by the seasonal adjustment filter. Other results for canonical ARIMA adjustment that are reported in Bell (2012), including properties of the trend and irregular filters, and properties of the asymmetric and finite filters, are unaltered in these special cases. Special cases for seasonal adjustment with structural ARIMA component models are also briefly discussed.
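
    For readers less familiar with the filter notation, the factorization behind these unit-root counts is the following standard identity, stated here for orientation only:

    ```latex
    % With B the backshift operator (B y_t = y_{t-1}) and F = B^{-1} the
    % forward-shift operator, the seasonal difference for period s factors
    % into a nonseasonal difference times the seasonal summation filter U(B):
    \[
      1 - B^{s} \;=\; (1 - B)\,U(B),
      \qquad
      U(B) \;=\; 1 + B + B^{2} + \cdots + B^{s-1}.
    \]
    % Two extra unit roots at frequency zero, i.e. an extra (1-B)(1-F), raise
    % by two the degree of polynomials in time annihilated by the filter.
    ```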

  13. THE SUBJECTIVE CONTENT OF IMAGES “SUCCESSFUL MAN” AND “SUCCESSFUL WOMAN” AS A FACTOR OF PSYCHIC ADJUSTMENT OF WOMEN IN INVOLUNTARY UNEMPLOYMENT SITUATION

    Directory of Open Access Journals (Sweden)

    Ольга Геннадьевна Лопухова

    2013-09-01

    Full Text Available Purpose: to investigate the correlation between the gendered content of the image of a “successful person” and parameters of the psychic and social adjustment of women belonging to different generations. Methodology: The subjective image of a “successful person” is understood as a stable and possibly gender-differentiated element of the system of “Ideal Me” images. It includes a cognitive component (conscious, verbalized representations of the typical description of a successful person) and an affective component (positive or negative attitudes towards the image of a “successful person”). We compared survey and assessment results for 265 women belonging to different generations and living in different social situations: involuntary unemployment, employment, and professional education. Subjective and projective methods were used to survey the cognitive and affective components of the “successful person” image. The assessment of psychic adjustment was based on parameters of internal conflict compared with manifestations of anxiety, frustration, aggression, rigidity and neuroticism. Data analysis used parametric and nonparametric methods, including ANOVA/MANOVA. Results: The level of psychic adjustment of women depends much more on the proper integration of the subjective “successful person” image content with individual features of the self-concept than on objective “social status” (being in an involuntary unemployment situation). Practical implications: psychological counselling and correction of social or psychic maladjustment. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-41

  14. NWP-Based Adjustment of IMERG Precipitation for Flood-Inducing Complex Terrain Storms: Evaluation over CONUS

    Directory of Open Access Journals (Sweden)

    Xinxuan Zhang

    2018-04-01

    Full Text Available This paper evaluates the use of precipitation forecasts from a numerical weather prediction (NWP) model for near-real-time satellite precipitation adjustment based on 81 flood-inducing heavy precipitation events in seven mountainous regions over the conterminous United States. The study is facilitated by the National Center for Atmospheric Research (NCAR) real-time ensemble forecasts (called model), the Integrated Multi-satellitE Retrievals for GPM (IMERG) near-real-time precipitation product (called raw IMERG) and the Stage IV multi-radar/multi-sensor precipitation product (called Stage IV) used as a reference. We evaluated four precipitation datasets (the model forecasts, raw IMERG, gauge-adjusted IMERG and model-adjusted IMERG) through comparisons against Stage IV at six-hourly and event-length scales. The raw IMERG product consistently underestimated heavy precipitation in all study regions, while the domain-average rainfall magnitudes exhibited by the model were fairly accurate. The model exhibited error in the locations of intense precipitation over inland regions, however, while the IMERG product generally showed correct spatial precipitation patterns. Overall, the model-adjusted IMERG product performed best over inland regions by taking advantage of the more accurate rainfall magnitude from NWP and the spatial distribution from IMERG. In coastal regions, although model-based adjustment effectively improved the performance of the raw IMERG product, the model forecast performed even better. The IMERG product could benefit from gauge-based adjustment as well, but the improvement from model-based adjustment was consistently more significant.

  15. Set up of a method for the adjustment of resonance parameters on integral experiments; Mise au point d`une methode d`ajustement des parametres de resonance sur des experiences integrales

    Energy Technology Data Exchange (ETDEWEB)

    Blaise, P.

    1996-12-18

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions. The author sets up a method for the adjustment of resonance parameters taking into account the self-shielding effects and restricting the cross-section deconvolution problem to a limited energy region. (N.T.).

  16. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; development of a crash-costs-based approach for comparison of crash severity prediction methods; and investigation of the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performances and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost the exact opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method.
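
    The crash-costs idea above can be illustrated with a small sketch: rather than counting correct predictions equally, each (observed, predicted) severity pair is scored by the cost of the misclassification. The cost values, features and scoring function below are illustrative placeholders, not the paper's data or exact measure.

    ```python
    # Illustrative cost-weighted comparison of severity classifiers;
    # costs, features and labels are placeholders, not the study's values.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier

    def cost_based_score(y_true, y_pred, unit_costs):
        """1 - (cost-weighted misclassification) / (total cost at stake)."""
        mis = np.abs(unit_costs[y_true] - unit_costs[y_pred]).sum()
        return 1.0 - mis / unit_costs[y_true].sum()

    # Hypothetical per-severity unit costs (index = severity level 0..2).
    unit_costs = np.array([10_000.0, 150_000.0, 1_500_000.0])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))        # placeholder crash features
    y = rng.integers(0, 3, size=1000)     # placeholder severity labels

    for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
                KNeighborsClassifier(n_neighbors=15)):
        clf.fit(X[:800], y[:800])
        y_pred = clf.predict(X[800:])
        print(type(clf).__name__,
              round(cost_based_score(y[800:], y_pred, unit_costs), 3))
    ```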

  17. Adjusting slash pine growth and yield for silvicultural treatments

    Science.gov (United States)

    Stephen R. Logan; Barry D. Shiver

    2006-01-01

    With intensive silvicultural treatments such as fertilization and competition control now commonplace in today's slash pine (Pinus elliottii Engelm.) plantations, a method to adjust current growth and yield models is required to accurately account for yield increases due to these practices. Some commonly used ad-hoc methods, such as raising site...

  18. The improved quasi-static method vs the direct method: a case study for CANDU reactor transients

    International Nuclear Information System (INIS)

    Kaveh, S.; Koclas, J.; Roy, R.

    1999-01-01

    Among the large number of methods for the transient analysis of nuclear reactors, the improved quasi-static procedure is one of the most widely used. In recent years, substantial increase in both computer speed and memory has motivated a rethinking of the limitations of this method. The overall goal of the present work is a systematic comparison between the improved quasi-static and the direct method (mesh-centered finite difference) for realistic CANDU transient simulations. The emphasis is on the accuracy of the solutions as opposed to the computational speed. Using the computer code NDF, a typical realistic transient of CANDU reactor has been analyzed. In this transient the response of the reactor regulating system to a substantial local perturbation (sudden extraction of the five adjuster rods) has been simulated. It is shown that when updating the detector responses is of major importance, it is better to use a well-optimized direct method rather than the improved quasi-static method. (author)
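
    The improved quasi-static procedure referred to here rests on factorizing the flux into a rapidly varying amplitude and a slowly varying shape. A standard statement of the factorization and its uniqueness constraint (not specific to the NDF code) is:

    ```latex
    % Quasi-static factorization of the flux into amplitude and shape:
    \[
      \phi(\mathbf{r},E,t) \;=\; p(t)\,\psi(\mathbf{r},E,t),
    \]
    % with the usual normalization constraint that makes the split unique
    % (\phi_0^* is a fixed weight, typically the initial adjoint flux):
    \[
      \frac{d}{dt}\int \phi_0^{*}(\mathbf{r},E)\,
            \frac{\psi(\mathbf{r},E,t)}{v(E)}\,d\mathbf{r}\,dE \;=\; 0 .
    \]
    ```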

  19. Repatriation Adjustment: Literature Review

    Directory of Open Access Journals (Sweden)

    Gamze Arman

    2009-12-01

    Full Text Available Expatriation is a widely studied area of research in work and organizational psychology. After expatriates accomplish their missions in host countries, they return to their home countries, a process called repatriation. Adjustment constitutes a crucial part of repatriation research. In the present literature review, research on repatriation adjustment is reviewed with the aim of outlining the whole picture of this phenomenon. The research is classified on the basis of a theoretical model of repatriation adjustment, whose basic frame consists of antecedents, adjustment and outcomes as main variables, with personal characteristics/coping strategies and organizational strategies as moderating variables.

  20. Development of the method for realization of spectral irradiance scale featuring system of spectral comparisons

    International Nuclear Information System (INIS)

    Skerovic, V; Zarubica, V; Aleksic, M; Zekovic, L; Belca, I

    2010-01-01

    Realization of the scale of spectral responsivity of the detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with the detector spectral responsivity calibrations by means of a primary spectrophotometric system. The linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  1. Development of the method for realization of spectral irradiance scale featuring system of spectral comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Skerovic, V; Zarubica, V; Aleksic, M [Directorate of measures and precious metals, Optical radiation Metrology department, Mike Alasa 14, 11000 Belgrade (Serbia); Zekovic, L; Belca, I, E-mail: vladanskerovic@dmdm.r [Faculty of Physics, Department for Applied physics and metrology, Studentski trg 12-16, 11000 Belgrade (Serbia)

    2010-10-15

    Realization of the scale of spectral responsivity of the detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with the detector spectral responsivity calibrations by means of a primary spectrophotometric system. The linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  2. Adjustment of multigroup cross sections using fast reactor integral data

    International Nuclear Information System (INIS)

    Renke, C.A.C.

    1982-01-01

    A methodology for the adjustment of multigroup cross sections is presented, structured so as to reconcile the limited number of known and available measured values of integral parameters with the great number of cross sections to be adjusted. The set of cross sections used is the one obtained from the Carnaval II calculation system, a 'formulaire' being understood as the combination of calculation methods and data bases. The adjustment is carried out using the INCOAJ computer code, developed from a statistical formulation structured on Bayesian considerations, taking into account the measurement processes of cross sections and integral parameters defined on statistical bases. (E.G.) [pt]

  3. Cross-Cultural Differences in Adjustment to Aging: A Comparison Between Mexico and Portugal

    Directory of Open Access Journals (Sweden)

    Neyda Ma. Mendoza-Ruvalcaba

    2017-08-01

    Full Text Available Objective: To compare Adjustment to Aging (AtA) and Satisfaction with Life in a Mexican and a Portuguese older sample. Method: A total of 723 (n = 340 Mexican and n = 383 Portuguese) older adults were included and assessed with the AtA Scale (ATAS) and the Satisfaction with Life Scale (SWL). Informed consent was obtained from all participants. Portuguese participants were significantly older than Mexicans (mean age 85.19 and 71.36 years old, respectively) and showed a higher education level (p < .001). No significant differences in gender and marital status were found. Results: Mexicans considered all aspects of AtA more important than their Portuguese counterparts (p < .001). For Mexicans, being cherished by their family (82.1%), being healthy, without pain or disease (75.9%), having spiritual, religious and existential values (75%) and having fun and laughter (75%) were the most important for AtA, compared to having curiosity and an interest in learning (22.5%), creating and being creative (20.1%) and leaving a mark and seed for the future (18.0%) for Portuguese participants. Mexicans also reported a higher SWL than Portuguese participants: mean scores were 6.10 (SD = 0.76) and 3.66 (SD = 1.47), respectively (p < .001). AtA and SWL were correlated in the Mexican sample (p = .001), but not in the Portuguese (p = .100). Discussion: Differences in AtA between Mexican and Portuguese older adults should be explained considering their cultural and social context and their socio-demographic characteristics. The enhancement of AtA, and its relevance for improving well-being and longevity, can become a significant resource for health care interventions.

  4. Self-adjustable glasses in the developing world

    Directory of Open Access Journals (Sweden)

    Murthy Gudlavalleti VS

    2014-02-01

    -6 to +6 diopters, compliance with international standards, quality and affordability, and the likely impact on health systems. Self-adjustable spectacles show poor agreement with conventional refraction methods for high myopia and are unable to correct astigmatism. A limitation of the fluid-filled adjustable spectacles (AdSpecs, Adaptive Eyecare Ltd, Oxford, UK) is that once the spectacles are self-adjusted and the power fixed, they become unalterable, just like conventional spectacles. Therefore, they will need to be changed as refractive power changes over time. Current costs of adjustable spectacles are high in developing countries and therefore not affordable to a large segment of the population. Self-adjustable spectacles have potential for "upscaling" if some of the concerns raised are addressed satisfactorily. Keywords: developing countries, eye disease, refractive error, spectacles

  5. ADJUSTABLE TRANSOBTURATOR SLING FOR TREATING PATIENTS WITH COMPLICATED STRESS URINARY INCONTINENCE

    Directory of Open Access Journals (Sweden)

    D. D. Shkarupa

    2017-01-01

    Full Text Available Introduction. The optimal tension of the suburethral tape is an important component of the effectiveness and safety of the surgery. To date, there is no common standardized guidance on the tensioning of the sling, and there is only a limited number of publications devoted to adjustable systems with the ability to correct tape tension in the postoperative period. To evaluate the effectiveness of this method, long-term results of postoperative adjustment of the sling are necessary. Aim. To evaluate the results of surgical treatment of complicated stress urinary incontinence (SUI) using the transobturator adjustable sling Urosling (Lintex). Materials and methods. The study included 89 women with complicated SUI. All patients underwent transobturator adjustable midurethral tape placement. The tension adjustment was performed during the 3 days after surgery. Postoperative evaluation included vaginal examination, cough stress test, 1-h pad test, uroflowmetry, bladder ultrasound with post-void residual (PVR) urine measurement, validated questionnaires (UDI-6, UIQ-7, ICIQ-SF, PISQ-12) and a visual analogue scale (VAS). Results. Mean operative time was 15.74 ± 7.49 min. The tension adjustment was performed in 45.0% (40/89) of patients the day after surgery. On the second day, tension re-adjustment was required in 14.6% (13/89) of patients. In 3.4% (3/89) of women the tension was also tuned on the third day. Loosening of the sling was needed in 13.5% (12/89) of patients. After adjustment, all patients were continent without any signs of bladder outlet obstruction (BOO). Mean follow-up was 14.3 ± 2.1 months. The objective cure rate was 92.9%. There was no statistically significant difference in the urodynamic parameters. Assessment of patient satisfaction showed that 95.2% (80/84) of the patients were «satisfied» or «very satisfied». Conclusion. The adjustable transobturator suburethral tape Urosling allows to achieve high effectiveness of treatment in female patients with complicated SUI and to reduce the

  6. A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes

    CSIR Research Space (South Africa)

    Haelterman, R

    2018-02-01


  7. Parenting Stress, Mental Health, Dyadic Adjustment: A Structural Equation Model

    Directory of Open Access Journals (Sweden)

    Luca Rollè

    2017-05-01

    Full Text Available Objective: In the 1st year of the post-partum period, parenting stress, mental health, and dyadic adjustment are important for the wellbeing of both parents and the child. However, there are few studies that analyze the relationship among these three dimensions. The aim of this study is to investigate the relationships between parenting stress, mental health (depressive and anxiety symptoms), and dyadic adjustment among first-time parents. Method: We studied 268 parents (134 couples) of healthy babies. At 12 months post-partum, both parents filled out, in a counterbalanced order, the Parenting Stress Index-Short Form, the Edinburgh Post-natal Depression Scale, the State-Trait Anxiety Inventory, and the Dyadic Adjustment Scale. Structural equation modeling was used to analyze the potential mediating effects of mental health on the relationship between parenting stress and dyadic adjustment. Results: Results showed a full mediation effect of mental health between parenting stress and dyadic adjustment. A multi-group analysis further found that the paths did not differ across mothers and fathers. Discussion: The results suggest that mental health is an important dimension that mediates the relationship between parenting stress and dyadic adjustment in the transition to parenthood.

  8. Real-world comparison of two molecular methods for detection of respiratory viruses

    Directory of Open Access Journals (Sweden)

    Miller E Kathryn

    2011-06-01

    Full Text Available Abstract Background Molecular polymerase chain reaction (PCR) based assays are increasingly used to diagnose viral respiratory infections and conduct epidemiology studies. Molecular assays have generally been evaluated by comparing them to conventional direct fluorescent antibody (DFA) or viral culture techniques, with few published direct comparisons between molecular methods or between institutions. We sought to perform a real-world comparison of two molecular respiratory viral diagnostic methods between two experienced respiratory virus research laboratories. Methods We tested nasal and throat swab specimens obtained from 225 infants with respiratory illness for 11 common respiratory viruses using both a multiplex assay (Respiratory MultiCode-PLx Assay [RMA]) and individual real-time RT-PCR (RT-rtPCR). Results Both assays detected viruses in more than 70% of specimens, but there was discordance. The RMA assay detected significantly more human metapneumovirus (HMPV) and respiratory syncytial virus (RSV), while RT-rtPCR detected significantly more influenza A. We speculated that primer differences accounted for these discrepancies and redesigned the primers and probes for influenza A in the RMA assay, and for HMPV and RSV in the RT-rtPCR assay. The tests were then repeated and again compared. The new primers led to improved detection of HMPV and RSV by the RT-rtPCR assay, but the RMA assay remained similar in terms of influenza detection. Conclusions Given the absence of a gold standard, clinical and research laboratories should regularly correlate the results of molecular assays with other PCR based assays, other laboratories, and with standard virologic methods to ensure consistency and accuracy.

  9. ORACLE: an adjusted cross-section and covariance library for fast-reactor analysis

    International Nuclear Information System (INIS)

    Yeivin, Y.; Marable, J.H.; Weisbin, C.R.; Wagschal, J.J.

    1980-01-01

    Benchmark integral-experiment values from six fast critical-reactor assemblies and two standard neutron fields are combined with corresponding calculations using group cross sections based on ENDF/B-V in a least-squares data adjustment using evaluated covariances from ENDF/B-V and supporting covariance evaluations. The purpose is to produce an adjusted cross-section and covariance library which is based on well-documented data and methods and which is suitable for fast-reactor design. By use of such a library, data- and methods-related biases of calculated performance parameters should be reduced and the uncertainties of the calculated values minimized. Consistency of the extensive data base is analyzed using the chi-square test. This adjusted library, ORACLE, will be available shortly
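
    The least-squares adjustment sketched in this record has the standard generalized-least-squares form. As an illustration of the textbook formulas (not the ORACLE code itself), with prior parameters x and covariance C, sensitivity matrix S, and measurements m with covariance V:

    ```python
    # Generalized least-squares data adjustment (textbook form, not ORACLE).
    import numpy as np

    def gls_adjust(x, C, S, m, V, t):
        """x: prior parameters, C: their covariance, S: sensitivity matrix,
        m: measured integral values (covariance V), t: calculated values t(x)."""
        d = m - t                                  # measured-vs-calculated residual
        W = S @ C @ S.T + V                        # covariance of the residual
        K = C @ S.T @ np.linalg.inv(W)             # gain matrix
        x_adj = x + K @ d                          # adjusted parameters
        C_adj = C - K @ S @ C                      # reduced posterior covariance
        chi2 = float(d @ np.linalg.solve(W, d))    # consistency (chi-square) test
        return x_adj, C_adj, chi2
    ```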

  10. LC-filter Resonance Cancellation with DPWM Inverters in Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Vadstrup, Casper; Wang, Xiongfei; Blaabjerg, Frede

    2015-01-01

    Discontinuous PWM methods cannot easily be used in LC-filtered inverters for adjustable speed drives, as they cause resonances in the filter. This paper presents a new method to cancel the resonance and compares it to previously proposed methods. Wide band gap devices are entering the market...

  11. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
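
    To make the flavor of such intervals concrete, the sketch below computes an adjusted Wald interval for a paired difference of proportions by adding 0.5 to each cell of the paired 2×2 table before applying the Wald formula. This is an Agresti–Min-style adjustment shown for illustration; it is not necessarily the exact adjustment proposed in this article.

    ```python
    # Adjusted Wald CI for a difference of paired binomial proportions.
    # Agresti-Min style: add 0.5 to each cell of the paired 2x2 table first.
    # Illustrative sketch; not necessarily the Bonett-Price adjustment itself.
    import math
    from scipy.stats import norm

    def paired_adjusted_wald(n11, n12, n21, n22, conf=0.95):
        a11, a12, a21, a22 = (c + 0.5 for c in (n11, n12, n21, n22))
        n = a11 + a12 + a21 + a22
        diff = (a12 - a21) / n                      # p1_hat - p2_hat
        var = (a12 + a21 - (a12 - a21) ** 2 / n) / n ** 2
        z = norm.ppf(0.5 + conf / 2.0)
        half = z * math.sqrt(var)
        return diff - half, diff + half

    print(paired_adjusted_wald(40, 12, 5, 43))      # example discordant pairs
    ```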

  12. Psychological adjustment to IDDM: 10-year follow-up of an onset cohort of child and adolescent patients.

    Science.gov (United States)

    Jacobson, A M; Hauser, S T; Willett, J B; Wolfsdorf, J I; Dvorak, R; Herman, L; de Groot, M

    1997-05-01

    To evaluate the psychological adjustment of young adults with IDDM in comparison with similarly aged individuals without chronic illness. An onset cohort of young adults (n = 57), ages 19-26 years, who have been followed over a 10-year period since diagnosis, was compared with a similarly aged group of young adults identified at the time of a moderately severe, acute illness (n = 54) and followed over the same 10-year period. The groups were assessed at 10-year follow-up in terms of 1) sociodemographic indices (e.g., schooling, employment, delinquent activities, drug use), 2) psychiatric symptoms, and 3) perceived competence. In addition, IDDM patients were examined for longitudinal change in adjustment to diabetes. The groups differed only minimally in terms of sociodemographic indices, with similar rates of high school graduation, post-high school education, employment, and drug use. The IDDM group reported fewer criminal convictions and fewer non-diabetes-related illness episodes than the comparison group. There were no differences in psychiatric symptoms. However, IDDM patients reported lower perceived competence, with specific differences found on the global self-worth, sociability, physical appearance, being an adequate provider, and humor subscales. The IDDM patients reported improving adjustment to their diabetes over the course of the 10-year follow-up. Overall, the young adults with IDDM appeared to be as psychologically well adjusted as the young adults without a chronic illness. There were, however, indications of lower self-esteem in the IDDM patients that could either portend or predispose them to risk for future depression or other difficulties in adaptation.

  13. Comparison of DUPIC fuel composition heterogeneity control methods

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hang Bok; Ko, Won Il [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    A method to reduce the effect of fuel composition heterogeneity on the core performance parameters has been studied for DUPIC fuel, which is made from spent pressurized water reactor (PWR) fuel by a dry refabrication process. This study focuses on the reactivity control method, which uses slightly enriched, depleted, or natural uranium to minimize the cost-rise effect on the manufacturing of DUPIC fuel when adjusting the excess reactivity of the spent PWR fuel. In order to reduce the variation of the isotopic composition of the DUPIC fuel, the inter-assembly mixing operation was performed three times. Then, three options were considered: reactivity control by slightly enriched and depleted uranium, reactivity control by natural uranium for high-reactivity spent PWR fuels, and reactivity control by natural uranium for linear-reactivity spent PWR fuels. The results of this study have shown that the reactivity of DUPIC fuel can be tightly controlled with a minimum amount of fresh uranium feed. For reactivity control by slightly enriched and depleted uranium, all the spent PWR fuels can be utilized as DUPIC fuel and the fraction of fresh uranium feed is 3.4% on average. For reactivity control by natural uranium, about 88% of spent PWR fuel can be utilized as DUPIC fuel when the linear-reactivity spent PWR fuels are used, and the amount of natural uranium feed needed to control the DUPIC fuel reactivity is negligible. 13 refs., 6 figs., 16 tabs. (Author)

  14. Interconnection and transportation networks: adjustments and stability; Reseaux d'interconnexion et de transport: reglages et stabilite

    Energy Technology Data Exchange (ETDEWEB)

    Bornard, P. [Reseau de Transport d' Electricite (RTE), Div. Systeme Electrique, 92 - Paris la Defense (France); Pavard, M. [Electricite de France (EDF), 75 - Paris (France); Testud, G. [Reseau de Transport d' Electricite (RTE), Dept. Exploitation du Systeme Electrique, 92 - Paris la Defense (France)

    2005-10-01

    Maintaining mastery of the safety of a power transportation system and respecting the contractual commitments with respect to the network users implies the implementation of efficient frequency and voltage adjustment systems. This article presents a synthetic overview of the methods and means implemented to ensure the adjustment of the voltage and frequency and the stability of very-high-voltage power transportation networks: 1 - review of the general problem; 2 - frequency and active power adjustment: adapting generation to consumption, adapting consumption to generation; 3 - voltage and reactive power adjustment: duality of the voltage/reactive-compensation adjustment, compensation of the reactive power, the voltage adjustment chain, voltage adjustment of very-high-voltage networks, collapse of the voltage plan; 4 - alternator stability: static stability, transient stability, numerical simulation methods, stability improvement; 5 - conclusion. (J.S.)

  15. Social and emotional adjustment of adolescents extremely talented in verbal or mathematical reasoning.

    Science.gov (United States)

    Brody, L E; Benbow, C P

    1986-02-01

    Perceptions of self-esteem, locus of control, popularity, depression (or unhappiness), and discipline problems as indices of social and emotional adjustment were investigated in highly verbally or mathematically talented adolescents. Compared to a group of students who are much less gifted, the highly gifted students perceive themselves as less popular, but no differences were found in self-esteem, depression, or the incidence of discipline problems. The gifted students reported greater internal locus of control. Comparisons between the highly mathematically talented students and the highly verbally talented students suggested that the students in the latter group perceive themselves as less popular. Within both the gifted and comparison groups, there were also slight indications that higher verbal ability may be related to some social and emotional problems.

  16. Comparison of biochemical and molecular methods for the identification of bacterial isolates associated with failed loggerhead sea turtle eggs.

    Science.gov (United States)

    Awong-Taylor, J; Craven, K S; Griffiths, L; Bass, C; Muscarella, M

    2008-05-01

    Comparison of biochemical vs molecular methods for identification of microbial populations associated with failed loggerhead turtle eggs. Two biochemical methods (API and Microgen) and one molecular method (16S rRNA analysis) were compared in the areas of cost, identification, corroboration of data with other methods, ease of use, resources and software. The molecular method was costly and identified only 66% of the isolates tested, compared with 74% for API. A 74% discrepancy in identifications occurred between API and 16S rRNA analysis. The two biochemical methods were comparable in cost, but Microgen was easier to use and yielded the lowest discrepancy among identifications (29%) when compared with both API 20 enteric (API 20E) and API 20 nonenteric (API 20NE) combined. A comparison of API 20E and API 20NE indicated an 83% discrepancy between the two methods. The Microgen identification system appears to be better suited than API or 16S rRNA analysis for identification of environmental isolates associated with failed loggerhead eggs. Most identification methods are not intended for use with environmental isolates. A comparison of identification systems would provide better options for identifying environmental bacteria for ecological studies.

  17. Family functioning and psychosocial adjustment in overweight youngsters

    NARCIS (Netherlands)

    Stradmeijer, M.; Bosch, J; Koops, W; Seidell, J

    OBJECTIVE: To analyze the relationship between family functioning and psychosocial adjustment in Dutch overweight children and adolescents. METHOD: Seventy-three overweight (weight-for-height >P90) and 70 normal-weight youngsters between the ages of 10 and 16 years were recruited by school

  18. Adjustment of a direct method for the determination of the human body burden of Pu-239 by X-ray detection of U-235

    International Nuclear Information System (INIS)

    Boulay, P.

    1968-04-01

    The use of Pu-239 on a larger scale poses the problem of measuring contamination by aerosols at lung level. A method of direct measurement of the Pu-239 lung burden is possible thanks to the use of a large-area-window proportional counter. A counter of this type has been specially built for the purpose. The adjustment of the apparatus provides adequate sensitivity to detect contamination at the maximum permissible body burden level. In addition, a method for individual 'internal calibration' with a plutonium surrogate, protactinium-233, is reported. (author) [fr]

  19. Indirect comparison of the antiviral efficacy of peginterferon alpha 2a plus ribavirin used with or without simeprevir in genotype 4 hepatitis C virus infection, where common comparator study arms are lacking: a special application of the matching adjusted indirect comparison methodology.

    Science.gov (United States)

    Van Sanden, Suzy; Pisini, Marta; Duchesne, Inge; Mehnert, Angelika; Belsey, Jonathan

    2016-01-01

    The need to assess relative efficacy in the absence of comparative clinical trials is a problem that is often encountered in economic modeling. The use of matching adjusted indirect comparison (MAIC) in this situation has been suggested. We present the results of a MAIC used to evaluate the incremental benefit offered by adding simeprevir (SMV) to standard therapy in the treatment of patients infected with genotype 4 hepatitis C virus (HCV). Individual patient data for a single arm study evaluating the use of SMV with peginterferon alfa 2a + ribavirin (PR) in genotype 4 HCV were available (RESTORE study). A systematic literature review was used to identify studies of PR alone used in the same patient group. By applying the inclusion criteria for each study in turn to the RESTORE dataset and then applying the published MAIC covariate matching algorithm, a series of pseudosamples from RESTORE were generated. After assessment of the matching outcomes, the best matched comparisons were used to derive estimates of efficacy for SMV + PR in patients equivalent to those participating in the PR trial. Five potential comparator studies were identified. After applying the matching process, two emerged as offering the greatest equivalence with the generated RESTORE pseudosamples and were used to estimate SMV + PR efficacy, expressed as the percentage of patients achieving sustained viral response (SVR). In one comparison, SVR in the SMV + PR group was 85% versus 63% for PR alone. In the second comparison, the corresponding SVRs were 77% and 44% respectively. After matching for varying baseline characteristics, both comparisons of RESTORE versus studies of PR alone yielded a benefit for SMV + PR vs PR alone in genotype 4 HCV-infected patients. The incremental gain in SVR associated with use of SMV ranged from 22% to 33%. In the absence of direct comparative studies, the MAIC gives a better perspective than simple comparison of absolute SVR from individual
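
    The core of MAIC is a weighting step: each patient in the index trial receives a weight of the form exp(aᵀx_i), with a chosen so that the weighted covariate means match the aggregate means reported for the comparator trial; Signorovitch and colleagues showed this reduces to minimizing a convex function. The sketch below illustrates that published estimator with synthetic data; it is not the authors' implementation, and the covariates are assumptions.

    ```python
    # MAIC weight estimation (method of moments via convex minimization).
    # Minimizing Q(a) = sum_i exp(a @ (x_i - theta)) forces the weighted mean
    # of x to equal the aggregate target means theta. Data are synthetic.
    import numpy as np
    from scipy.optimize import minimize

    def maic_weights(X, theta):
        """X: (n, p) patient-level covariates; theta: (p,) aggregate means."""
        Xc = X - theta                                   # center on target means
        obj = lambda a: np.exp(Xc @ a).sum()
        grad = lambda a: Xc.T @ np.exp(Xc @ a)
        res = minimize(obj, np.zeros(X.shape[1]), jac=grad, method="BFGS")
        w = np.exp(Xc @ res.x)
        return w / w.sum()                               # normalized weights

    rng = np.random.default_rng(1)
    X = rng.normal(loc=[50.0, 27.0], scale=[10.0, 4.0], size=(300, 2))  # e.g. age, BMI
    w = maic_weights(X, theta=np.array([55.0, 25.0]))
    print((w[:, None] * X).sum(axis=0))                  # ~ [55, 25] after weighting
    ```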

  20. COMPARISON OF TWO METHODS FOR THE DETECTION OF LISTERIA MONOCYTOGENES

    Directory of Open Access Journals (Sweden)

    G. Tantillo

    2013-02-01

    Full Text Available The aim of this study was to compare the performance of the conventional methods for detection of Listeria monocytogenes in food using Oxford and ALOA (Agar Listeria according to Ottaviani & Agosti) media, in accordance with ISO 11290-1, with a new chromogenic medium, “CHROMagar Listeria”, standardized in 2005 by AFNOR (CHR-21/1-12/01). A total of 40 pre-packed ready-to-eat food samples were examined. Using the two methods, six samples were found positive for Listeria monocytogenes, but the medium “CHROMagar Listeria” was more selective in comparison with the others. In conclusion, this study has demonstrated that an isolation medium able to specifically target the detection of L. monocytogenes, such as “CHROMagar Listeria”, is highly recommendable because detection time is significantly reduced and the analysis cost is less expensive.

  1. Differential and difference equations a comparison of methods of solution

    CERN Document Server

    Maximon, Leonard C

    2016-01-01

    This book, intended for researchers and graduate students in physics, applied mathematics and engineering, presents a detailed comparison of the important methods of solution for linear differential and difference equations - variation of constants, reduction of order, Laplace transforms and generating functions - bringing out the similarities as well as the significant differences in the respective analyses. Equations of arbitrary order are studied, followed by a detailed analysis for equations of first and second order. Equations with polynomial coefficients are considered and explicit solutions for equations with linear coefficients are given, showing significant differences in the functional form of solutions of differential equations from those of difference equations. An alternative method of solution involving transformation of both the dependent and independent variables is given for both differential and difference equations. A comprehensive, detailed treatment of Green’s functions and the associat...

  2. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado

    2014-01-01

    The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally, 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). ANOVA […] In the comparisons, the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers by CATA is comparable to that of Napping. The choice of methodology for consumer
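
    The RV coefficient reported here is a matrix correlation between two multivariate configurations of the same samples; values near 1 mean the two methods position the products almost identically. A minimal numpy sketch of the standard definition, with synthetic configurations standing in for real sensory data:

    ```python
    # RV coefficient between two column-centered configurations X and Y
    # (rows = the same products/samples, columns = dimensions).
    import numpy as np

    def rv_coefficient(X, Y):
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        Sx, Sy = X @ X.T, Y @ Y.T
        return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))             # e.g. 8 beers in a 3-D sensory space
    Y = X @ rng.normal(size=(3, 3)) * 0.9   # a linearly related configuration
    print(round(rv_coefficient(X, Y), 3))
    ```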

  3. Comparison of methods for measuring flux gradients in type II superconductors

    International Nuclear Information System (INIS)

    Kroeger, D.M.; Koch, C.C.; Charlesworth, J.P.

    1975-01-01

    A comparison has been made of four methods of measuring the critical current density J_c in hysteretic type II superconductors, having a wide range of K and J_c values, in magnetic fields up to 70 kOe. Two of the methods, (a) resistive measurements and (b) magnetization measurements, were carried out in static magnetic fields. The other two methods involved analysis of the response of the sample to a small alternating field superimposed on the static field. The response was analyzed either (c) by measuring the third-harmonic content or (d) by integration of the waveform to obtain a measure of flux penetration. The results are discussed with reference to the agreement between the different techniques and the consistency of the critical-state hypothesis on which all these techniques are based. It is concluded that flux-penetration measurements by method (d) provide the most detailed information about J_c, but that one must be wary of minor failures of the critical-state hypothesis. Best results are likely to be obtained by using more than one method. (U.S.)

  4. A linguistic approach to solving of the problem of technological adjustment of combines

    Directory of Open Access Journals (Sweden)

    Lyudmila V. Borisova

    2017-06-01

    Full Text Available Introduction: The article presents a linguistic approach to the technological adjustment of combine harvesters in field conditions. A short characterization of the subject domain is provided, and the place of the task of adjusting the combine harvester's working bodies within the harvesting process is considered. Various groups of attributes of the task are identified: external signs of degraded work quality, the adjustable parameters of the machine, and parameters of technical condition. Numerical data characterizing the interrelations between the external signs and the machine parameters are provided. Materials and Methods: A combine harvester is a complex dynamic system functioning under constantly changing external conditions. This fact imposes requirements on the methods of technological adjustment used. Both quantitative and qualitative information are used to control harvesting. The presence of different types of uncertainty in the semantic spaces of environmental factors and machine parameters makes it possible to propose a method of technological adjustment based on fuzzy logical inference. Results: As a result of the analysis, a decision-making methodology for fuzzy environments is adapted to the studied subject domain. A generalized scheme of fuzzy control of the process of technological adjustment of the machine is proposed. Models of the studied semantic spaces are considered. The feasibility of using deductive and inductive inference for various tasks of preliminary setup and adjustment of technological settings is shown. A formal logical scheme of the decision-making process based on fuzzy expert knowledge is proposed. The scheme includes the main stages of the task solution: fuzzification, composition and defuzzification. The question of the quantitative assessment of the consistency of expert knowledge is considered. The examples of the formulation

  5. Influence of the method of optimizing adjustments of ARV-SD on attainable degree of system stability. Vliyaniye metoda optimizatsii nastroyek ARV-SD na dostizhimuyu stepen ustoychivosti sistemy

    Energy Technology Data Exchange (ETDEWEB)

    Gruzdev, I.A.; Trudospekova, G.Kh.

    1983-01-01

    An examination is made of the efficiency of the methods of successive and simultaneous optimization of the adjustments of ARV-SD (ARV of strong action) of several PP. It is shown that, with the use of the method of simultaneous optimization for an idealized model of a complex EPS, it is possible to attain absolute controllability of the degree of stability.

  6. Parameters-adjustable front-end controller in digital nuclear measurement system

    International Nuclear Information System (INIS)

    Hao Dejian; Zhang Ruanyu; Yan Yangyang; Wang Peng; Tang Changjian

    2013-01-01

    Background: A digitizer is used to implement digital nuclear measurement for the acquisition of nuclear information. Purpose: The principle and method of a parameter-adjustable front-end controller are presented, with the aim of reducing quantization errors while obtaining the maximum ENOB (effective number of bits) of the ADC (analog-to-digital converter) during waveform digitizing, as well as reducing count losses. Methods: First, the quantitative relationship among the radiation count rate (n), the amplitude of the input signal (V_in), the conversion scale of the ADC (±V) and the amplification factor (A) was derived. Second, the hardware and software of the front-end controller were designed to match the output of different detectors, adjusting the amplification linearly through the control of channel switching and the setting of a digital potentiometer by a CPLD (Complex Programmable Logic Device). Results: (1) Through measurement of the γ-rays of Am-241 with our digital nuclear measurement set-up with a CZT detector, it was validated that the amplitude of the output signal of detectors of the RC-feedback type could be amplified linearly with adjustable amplification by the front-end controller. (2) Through measurement of the X-ray spectrum of Fe-55 with our digital nuclear measurement set-up with a Si-PIN detector, it was validated that the front-end controller is suitable for switch-reset type detectors, by which high-precision measurement under various count rates can be fulfilled. Conclusion: The principle and method of the parameter-adjustable front-end controller presented in this paper are correct and feasible. (authors)
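
    The practical consequence of such a relationship is a gain-selection rule: choose the largest amplification A that keeps A·V_in inside the ADC's conversion scale ±V, so the full effective resolution is used without clipping. The toy sketch below illustrates this rule; the discrete gain steps and voltages are assumptions, not the paper's hardware values.

    ```python
    # Toy gain-selection rule for a front-end controller: choose the largest
    # available gain that keeps the amplified pulse inside the ADC range.
    # Gain steps and voltages are illustrative, not the paper's hardware values.
    def select_gain(v_in_max: float, adc_full_scale: float,
                    gains=(1, 2, 5, 10, 20, 50)) -> float:
        usable = [a for a in gains if a * v_in_max <= adc_full_scale]
        return max(usable) if usable else min(gains)

    print(select_gain(v_in_max=0.18, adc_full_scale=2.0))   # -> 10
    ```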

  7. Close Sequence Comparisons are Sufficient to Identify Human cis-Regulatory Elements

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, Shyam; Poulin, Francis; Shoukry, Malak; Afzal, Veena; Rubin, Edward M.; Couronne, Olivier; Pennacchio, Len A.

    2005-12-01

    Cross-species DNA sequence comparison is the primary method used to identify functional noncoding elements in human and other large genomes. However, little is known about the relative merits of evolutionarily close and distant sequence comparisons, due to the lack of a universal metric for sequence conservation, and also the paucity of empirically defined benchmark sets of cis-regulatory elements. To address this problem, we developed a general-purpose algorithm (Gumby) that detects slowly-evolving regions in primate, mammalian and more distant comparisons without requiring adjustment of parameters, and ranks conserved elements by P-value using Karlin-Altschul statistics. We benchmarked Gumby predictions against previously identified cis-regulatory elements at diverse genomic loci, and also tested numerous extremely conserved human-rodent sequences for transcriptional enhancer activity using reporter-gene assays in transgenic mice. Human regulatory elements were identified with acceptable sensitivity and specificity by comparison with 1-5 other eutherian mammals or 6 other simian primates. More distant comparisons (marsupial, avian, amphibian and fish) failed to identify many of the empirically defined functional noncoding elements. We derived an intuitive relationship between ancient and recent noncoding sequence conservation from whole genome comparative analysis, which explains some of these findings. Lastly, we determined that, in addition to strength of conservation, genomic location and/or density of surrounding conserved elements must also be considered in selecting candidate enhancers for testing at embryonic time points.
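
    The Karlin–Altschul statistics mentioned here assign significance to a local conservation score S via an extreme-value law. The usual form, stated for orientation rather than as Gumby's exact internals, is:

    ```latex
    % Karlin-Altschul statistics for local similarity scores: the expected
    % number of distinct segments with score >= S between sequences of
    % lengths m and n, and the corresponding P-value:
    \[
      E \;=\; K\,m\,n\,e^{-\lambda S},
      \qquad
      P(S) \;=\; 1 - e^{-E},
    \]
    % where K and \lambda depend on the scoring system and the background
    % letter frequencies.
    ```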

  8. Realistic PIC modelling of laser-plasma interaction: a direct implicit method with adjustable damping and high order weight functions

    International Nuclear Information System (INIS)

    Drouin, M.

    2009-11-01

    This research thesis proposes a new formulation of the relativistic direct implicit method, based on the weak formulation of the wave equation, which is solved by means of a Newton algorithm. The first part of this thesis deals with the properties of explicit particle-in-cell (PIC) methods: properties and limitations of an explicit PIC code, linear analysis of a numerical plasma, the numerical heating phenomenon, the interest of a higher-order interpolation function, and the presentation of two applications in high-density relativistic laser-plasma interaction. The second and main part of this report deals with adapting the direct implicit method to laser-plasma interaction: presentation of the state of the art, formulation of the direct implicit method, and resolution of the wave equation. The third part concerns various numerical and physical validations of the ELIXIRS code: the case of laser wave propagation in vacuum, demonstration of the adjustable damping that is a characteristic of the proposed algorithm, influence of space-time discretization on energy conservation, expansion of a thermal plasma into vacuum, two cases of beam-plasma instability in the relativistic regime, and a case of overcritical laser-plasma interaction

  9. Cross-comparison of three surrogate safety methods to diagnose cyclist safety problems at intersections in Norway.

    Science.gov (United States)

    Laureshyn, Aliaksei; Goede, Maartje de; Saunier, Nicolas; Fyhri, Aslak

    2017-08-01

    Relying on accident records as the main data source for studying cyclists' safety has many drawbacks, such as high degree of under-reporting, the lack of accident details and particularly of information about the interaction processes that led to the accident. It is also an ethical problem as one has to wait for accidents to happen in order to make a statement about cyclists' (un-)safety. In this perspective, the use of surrogate safety measures based on actual observations in traffic is very promising. In this study we used video data from three intersections in Norway that were all independently analysed using three methods: the Swedish traffic conflict technique (Swedish TCT), the Dutch conflict technique (DOCTOR) and the probabilistic surrogate measures of safety (PSMS) technique developed in Canada. The first two methods are based on manual detection and counting of critical events in traffic (traffic conflicts), while the third considers probabilities of multiple trajectories for each interaction and delivers a density map of potential collision points per site. Due to extensive use of microscopic data, PSMS technique relies heavily on automated tracking of the road users in video. Across the three sites, the methods show similarities or are at least "compatible" with the accident records. The two conflict techniques agree quite well for the number, type and location of conflicts, but some differences with no obvious explanation are also found. PSMS reports many more safety-relevant interactions including less severe events. The location of the potential collision points is compatible with what the conflict techniques suggest, but the possibly significant share of false alarms due to inaccurate trajectories extracted from video complicates the comparison. The tested techniques still require enhancement, with respect to better adjustment to analysis of the situations involving cyclists (and vulnerable road users in general) and further validation. However, we

  10. COMPARISON OF ULTRASOUND IMAGE FILTERING METHODS BY MEANS OF MULTIVARIABLE KURTOSIS

    Directory of Open Access Journals (Sweden)

    Mariusz Nieniewski

    2017-06-01

    Full Text Available Comparison of the quality of despeckled US medical images is complicated because there is no image of a human body that would be free of speckles and could serve as a reference. A number of various image metrics are currently used for comparison of filtering methods; however, they do not satisfactorily represent the visual quality of images and medical expert’s satisfaction with images. This paper proposes an innovative use of relative multivariate kurtosis for the evaluation of the most important edges in an image. Multivariate kurtosis allows one to introduce an order among the filtered images and can be used as one of the metrics for image quality evaluation. At present there is no method which would jointly consider individual metrics. Furthermore, these metrics are typically defined by comparing the noisy original and filtered images, which is incorrect since the noisy original cannot serve as a golden standard. In contrast to this, the proposed kurtosis is the absolute measure, which is calculated independently of any reference image and it agrees with the medical expert’s satisfaction to a large extent. The paper presents a numerical procedure for calculating kurtosis and describes results of such calculations for a computer-generated noisy image, images of a general purpose phantom and a cyst phantom, as well as real-life images of thyroid and carotid artery obtained with SonixTouch ultrasound machine. 16 different methods of image despeckling are compared via kurtosis. The paper shows that visually more satisfactory despeckling results are associated with higher kurtosis, and to a certain degree kurtosis can be used as a single metric for evaluation of image quality.
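
    A standard statistic of this kind is Mardia's multivariate kurtosis: the average fourth power of the Mahalanobis distance of each observation from the mean, divided by its Gaussian expectation d(d + 2) to make it a relative measure. The sketch below illustrates the statistic on synthetic feature vectors; it is not the paper's exact pipeline.

    ```python
    # Mardia's multivariate kurtosis, normalized by its Gaussian expectation
    # d*(d+2) so that ~1.0 indicates Gaussian-like tails. Illustrative only.
    import numpy as np

    def relative_multivariate_kurtosis(X):
        """X: (n, d) array of feature vectors (e.g., edge gradient samples)."""
        Xc = X - X.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(Xc, rowvar=False))
        m2 = np.einsum("ij,jk,ik->i", Xc, S_inv, Xc)   # squared Mahalanobis dist.
        d = X.shape[1]
        return (m2 ** 2).mean() / (d * (d + 2))

    rng = np.random.default_rng(0)
    print(round(relative_multivariate_kurtosis(rng.normal(size=(5000, 2))), 3))  # ~1.0
    ```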

  11. Adjusting the Adjusted χ²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  12. Methods for cost management during product development: A review and comparison of different literatures

    NARCIS (Netherlands)

    Wouters, M.; Morales, S.; Grollmuss, S.; Scheer, M.

    2016-01-01

    Purpose The paper provides an overview of research published in the innovation and operations management (IOM) literature on 15 methods for cost management in new product development, and it provides a comparison to an earlier review of the management accounting (MA) literature (Wouters & Morales,

  13. Comparison of quantification methods for the analysis of polychlorinated alkanes using electron capture negative ionization mass spectrometry.

    NARCIS (Netherlands)

    Rusina, T.; Korytar, P.; de Boer, J.

    2011-01-01

    Four quantification methods for short-chain chlorinated paraffins (SCCPs) or polychlorinated alkanes (PCAs) using gas chromatography electron capture negative ionisation low resolution mass spectrometry (GC-ECNI-LRMS) were investigated. The method based on visual comparison of congener group

  14. Comparison of quantification methods for the analysis of polychlorinated alkanes using electron capture negative ionisation mass spectrometry

    NARCIS (Netherlands)

    Rusina, T.; Korytar, P.; Boer, de J.

    2011-01-01

    Four quantification methods for short-chain chlorinated paraffins (SCCPs) or polychlorinated alkanes (PCAs) using gas chromatography electron capture negative ionisation low resolution mass spectrometry (GC-ECNI-LRMS) were investigated. The method based on visual comparison of congener group

  15. Adjusting the displaced tip of peripherally inserted central catheter under DSA guidance

    International Nuclear Information System (INIS)

    Mao Yanjun; Dong Huijuan; Zhang Lingjuan; Li Hongmei; Xu Lianqin

    2009-01-01

    Objective: To explore a new method of adjusting the displaced tip of a peripherally inserted central catheter (PICC) under DSA guidance. Methods: Under DSA guidance, the displaced tip of the PICC was repositioned with proper manipulation to the ideal junction area of the superior vena cava and right atrium. Results: Under DSA guidance, the displaced tip of the PICC was successfully corrected in 13 cases. The mean operative time was 15.53 minutes, markedly shorter than that needed for blind bedside adjustment. Conclusion: Displacement of the PICC tip is a common occurrence that is difficult to avoid. Under DSA guidance, manipulation of the displaced PICC tip is safe and time-saving, with a high success rate. This technique is worth popularizing in clinical practice. (authors)

  16. Long-Term Large-Scale Bias-Adjusted Precipitation Estimates at High Spatial and Temporal Resolution Derived from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) Precipitation Reanalysis over CONUS

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Seo, D. J.; Kim, B.

    2014-12-01

    The processing of radar-only precipitation via the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ/Q2) reanalysis, based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is nearly completed for the period from 2000 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at 1-km spatial and 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Rain gauge networks such as the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), the Climate Reference Network (CRN), and the Global Historical Climatology Network - Daily (GHCN-D) are used to adjust for those biases and are merged with the radar-only product to provide a multi-sensor estimate. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we are facing are the difficulties of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation. Another challenge is the type of adjustment technique. After assessing the bias and applying reduction or elimination techniques, we are investigating the kriging method and its variants, such as simple kriging (SK), ordinary kriging (OK), and conditional bias-penalized kriging (CBPK), among others. In addition we hope to generate estimates of uncertainty for the gridded estimate. In this work the methodology is presented, as well as a comparison between the radar-only product and the final multi-sensor QPE product. The comparison is performed at various time scales, from the sub-hourly to annual. In addition, comparisons over the same period with a suite of lower resolution QPEs derived from ground based radar
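
    As a concrete illustration of the ordinary kriging (OK) variant mentioned above, the sketch below solves the standard OK system for gauge-derived bias ratios at one grid cell; the exponential covariance and all parameter values are assumptions made for the example, not settings taken from the NMQ/Q2 processing.

        # Ordinary-kriging sketch (illustrative; not the NMQ/Q2 implementation).
        import numpy as np

        def ok_predict(xy_obs, z_obs, xy_new, sill=1.0, rng_km=50.0):
            cov = lambda h: sill * np.exp(-h / rng_km)    # assumed exponential model
            n = len(z_obs)
            d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
            # OK system with Lagrange multiplier: [[C, 1], [1^T, 0]] [w, mu] = [c0, 1]
            A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d); A[n, n] = 0.0
            b = np.append(cov(np.linalg.norm(xy_obs - xy_new, axis=-1)), 1.0)
            w = np.linalg.solve(A, b)
            return w[:n] @ z_obs                          # weights sum to one

        # Toy usage: gauge/radar bias ratios at five sites, predicted at one cell.
        xy = np.array([[0., 0.], [10., 5.], [30., 20.], [5., 25.], [40., 40.]])
        z = np.array([0.9, 1.1, 1.3, 0.8, 1.2])
        print(ok_predict(xy, z, np.array([15., 15.])))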

  17. Consistent comparison of angle Kappa adjustment between Oculyzer and Topolyzer Vario topography guided LASIK for myopia by EX500 excimer laser.

    Science.gov (United States)

    Sun, Ming-Shen; Zhang, Li; Guo, Ning; Song, Yan-Zheng; Zhang, Feng-Ju

    2018-01-01

    To evaluate and compare the uniformity of angle Kappa adjustment between Oculyzer and Topolyzer Vario topography guided ablation in laser in situ keratomileusis (LASIK) for myopia using the EX500 excimer laser. In total, 145 cases (290 consecutive eyes) with myopia received LASIK with a target of emmetropia. The ablation for 86 cases (172 eyes) was guided manually based on Oculyzer topography (study group), while the ablation for 59 cases (118 eyes) was guided automatically by Topolyzer Vario topography (control group). Measured adjustment values included data in the horizontal and vertical directions of the cornea. Horizontally, synclastic adjustment between the manually applied actual values (dx_manu) and the Oculyzer topography guided data (dx_ocu) accounted for 35.5% in the study group, with a mean dx_manu/dx_ocu of 0.78±0.48; in the control group, synclastic adjustment between the automatically applied actual values (dx_auto) and the Oculyzer topography data (dx_ocu) accounted for 54.2%, with a mean dx_auto/dx_ocu of 0.79±0.66. Vertically, synclastic adjustment between dy_manu and dy_ocu accounted for 55.2% in the study group, with a mean dy_manu/dy_ocu of 0.61±0.42; in the control group, synclastic adjustment between dy_auto and dy_ocu accounted for 66.1%, with a mean dy_auto/dy_ocu of 0.66±0.65. There was no statistically significant difference between the two groups in the ratio of actual values to Oculyzer topography guided data in either the horizontal or vertical direction (P=0.951, 0.621). There is high consistency in angle Kappa adjustment guided manually by Oculyzer and automatically by Topolyzer Vario topography during corneal refractive surgery with the WaveLight EX500 excimer laser.

  18. Schizotypy and Behavioural Adjustment and the Role of Neuroticism

    Science.gov (United States)

    Völter, Christoph; Strobach, Tilo; Aichert, Désirée S.; Wöstmann, Nicola; Costa, Anna; Möller, Hans-Jürgen; Schubert, Torsten; Ettinger, Ulrich

    2012-01-01

    Objective In the present study the relationship between behavioural adjustment following cognitive conflict and schizotypy was investigated using a Stroop colour naming paradigm. Previous research has found deficits in behavioural adjustment in schizophrenia patients. Based on these findings, we hypothesized that individual differences in schizotypy, a personality trait reflecting the subclinical expression of the schizophrenia phenotype, would be associated with behavioural adjustment. Additionally, we investigated whether such a relationship would be explained by individual differences in neuroticism, a non-specific measure of negative trait emotionality known to be correlated with schizotypy. Methods 106 healthy volunteers (mean age: 25.1, 60% females) took part. Post-conflict adjustment was measured in a computer-based version of the Stroop paradigm. Schizotypy was assessed using the Schizotypal Personality Questionnaire (SPQ) and neuroticism using the NEO-FFI. Results We found a negative correlation between schizotypy and post-conflict adjustment (r = −.30, p<.01); this relationship remained significant when controlling for effects of neuroticism. Regression analysis revealed that particularly the subscale No Close Friends drove the effect. Conclusion Previous findings of deficits in cognitive control in schizophrenia patients were extended to the subclinical personality expression of the schizophrenia phenotype and found to be specific to schizotypal traits over and above the effects of negative emotionality. PMID:22363416

  19. Schizotypy and behavioural adjustment and the role of neuroticism.

    Directory of Open Access Journals (Sweden)

    Christoph Völter

    Full Text Available OBJECTIVE: In the present study the relationship between behavioural adjustment following cognitive conflict and schizotypy was investigated using a Stroop colour naming paradigm. Previous research has found deficits in behavioural adjustment in schizophrenia patients. Based on these findings, we hypothesized that individual differences in schizotypy, a personality trait reflecting the subclinical expression of the schizophrenia phenotype, would be associated with behavioural adjustment. Additionally, we investigated whether such a relationship would be explained by individual differences in neuroticism, a non-specific measure of negative trait emotionality known to be correlated with schizotypy. METHODS: 106 healthy volunteers (mean age: 25.1, 60% females) took part. Post-conflict adjustment was measured in a computer-based version of the Stroop paradigm. Schizotypy was assessed using the Schizotypal Personality Questionnaire (SPQ) and neuroticism using the NEO-FFI. RESULTS: We found a negative correlation between schizotypy and post-conflict adjustment (r = -.30, p<.01); this relationship remained significant when controlling for effects of neuroticism. Regression analysis revealed that particularly the subscale No Close Friends drove the effect. CONCLUSION: Previous findings of deficits in cognitive control in schizophrenia patients were extended to the subclinical personality expression of the schizophrenia phenotype and found to be specific to schizotypal traits over and above the effects of negative emotionality.

  20. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Directory of Open Access Journals (Sweden)

    Kimmel Marek

    2011-05-01

    Full Text Available Abstract Background Volunteering participants in disease studies tend to be healthier than the general population partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy volunteer bias resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study, and the outcome. Methods Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.

  1. How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.

    Science.gov (United States)

    Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo

    2016-01-01

    To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
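
    A minimal sketch of the conversion step described above: a ranking of k alternatives implies k(k-1)/2 pairwise preferences, which can then be pooled with directly elicited PC responses. The health-state labels are placeholders, not the study's actual states.

        # Expand one respondent's ranking (best to worst) into implied
        # paired-comparison outcomes.
        from itertools import combinations

        def ranking_to_pairs(ranking):
            """Return (preferred, other) pairs implied by a best-to-worst ranking."""
            return [(a, b) for a, b in combinations(ranking, 2)]

        print(ranking_to_pairs(["state_A", "state_B", "state_C"]))
        # [('state_A', 'state_B'), ('state_A', 'state_C'), ('state_B', 'state_C')]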

  2. Passive pH adjustment of nuclear reactor containment flood water

    International Nuclear Information System (INIS)

    Gerlowski, T.J.

    1986-01-01

    A method is described for automatically and passively adjusting the pH of the recirculating liquid used to flood the containment structure of a nuclear reactor upon the occurrence of an accident, in order to cool the reactor core, wherein the containment structure has a concrete floor provided with at least one sump from which the liquid is withdrawn for recirculation via at least one outlet pipe. The method consists of: prior to flooding, and during or prior to normal operation of the reactor, providing at least one perforated basket within at least one sump, with the basket containing crystals of a pH-adjusting chemical which is soluble in the liquid, and covering each basket with a plastic coating which is likewise soluble in the liquid, whereby upon flooding of the containment structure the liquid in the sump will reach the level of the baskets, causing the coating and the crystals to be dissolved and the chemical to mix with the recirculating liquid to adjust the pH.

  3. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Science.gov (United States)

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
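
    The non-collapsibility at the heart of this argument is easy to demonstrate numerically. The sketch below uses illustrative coefficients: treatment is randomized and the covariate is a pure prognostic factor (no confounding), yet the marginal odds ratio is attenuated relative to the conditional odds ratio exp(1.0).

        # Demonstration of odds-ratio non-collapsibility (illustrative numbers).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        x = rng.normal(size=n)                    # prognostic covariate
        t = rng.integers(0, 2, size=n)            # randomized treatment
        logit = -0.5 + 1.0 * t + 2.0 * x          # conditional log-OR for t is 1.0
        y = rng.random(n) < 1 / (1 + np.exp(-logit))

        odds = lambda p: p / (1 - p)
        marginal_or = odds(y[t == 1].mean()) / odds(y[t == 0].mean())
        print(f"conditional OR = {np.exp(1.0):.2f}, marginal OR = {marginal_or:.2f}")
        # The marginal OR falls well below exp(1.0) ~ 2.72 despite randomization.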

  4. EFFECTIVENESS OF CHIROPRACTIC ADJUSTMENT IN LUMBAR PAIN IN CROSSFIT PRACTITIONERS

    Directory of Open Access Journals (Sweden)

    DESIREE MOEHLECKE

    Full Text Available ABSTRACT Objective: To evaluate the efficacy of acute chiropractic adjustment in individuals who practice CrossFit, with regard to complaints of low back pain and the joint range of motion in this region. Methods: A randomized clinical trial comprising CrossFit practitioners from a box in Novo Hamburgo-RS, of both sexes, aged 18 to 40 years, who had low back pain at the time of the study. The following tools were used: a semi-structured anamnesis questionnaire, the Visual Analog Scale, the McGill Pain Questionnaire, and the SF-36 Quality of Life Questionnaire. Individuals in the control group answered the questionnaires before and after CrossFit training. The chiropractic group performed the same procedure, plus pre-training chiropractic adjustment, with joint range of motion (ROM) measured before and after lumbar adjustment. Results: There was a significant increase in pain in the control group, and a significant decrease in pain in the chiropractic group, including one day after the chiropractic adjustment. In the chiropractic group, joint range of motion increased significantly in flexion and extension of the lumbar spine after chiropractic adjustment. Conclusion: The chiropractic group achieved a significant improvement in pain level and joint range of motion, suggesting that acute chiropractic adjustment was effective in reducing low back pain.

  5. Should adjustment disorder be conceptualized as transitional disorder? In pursuit of adjustment disorders definition.

    Science.gov (United States)

    Israelashvili, Moshe

    2012-12-01

    The DSM classification of an adjustment disorder is frequently criticized for not being well differentiated from other disorders. A possible reason for this is the vague definition of the term adjustment in social science literature. Hence, the current paper discusses the definition of adjustment and its implications for understanding maladjustment. Differential definitions of the terms adjustment, adaptation, socialization and coping are outlined, leading to the proposition that each one of them represents a different type of demand that is imposed on an individual who encounters a transitional event. Moreover, the four types of demands might be the possible sources of maladjustment. Helping people in transition requires an identification of the source, or combination of sources, that have led to the adjustment problem first, followed by the implementation of an adequate helping approach.

  6. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB
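
    A minimal sketch of the standardization idea behind NN-SMS12L, under the assumption that the method rescales monthly-standardized log flows from a neighboring donor streamgage using the target site's monthly statistics; here those statistics are passed in as hypothetical arrays rather than estimated by the 24 regional regressions the report describes.

        # Sketch: transfer daily flows from a donor gage to an ungaged site by
        # standardizing log flows with monthly means/SDs (NN-SMS12L-style idea).
        import numpy as np

        def transfer_flows(donor_q, months, tgt_mu, tgt_sd):
            """donor_q: daily flows; months: month index 0-11 per day;
            tgt_mu/tgt_sd: 12 monthly log-space stats for the target site."""
            logq = np.log(donor_q)
            don_mu = np.array([logq[months == m].mean() for m in range(12)])
            don_sd = np.array([logq[months == m].std() for m in range(12)])
            z = (logq - don_mu[months]) / don_sd[months]   # standardized donor series
            return np.exp(tgt_mu[months] + tgt_sd[months] * z)

        # Toy usage with synthetic data and flat hypothetical target statistics.
        rng = np.random.default_rng(1)
        months = np.repeat(np.arange(12), 30)
        donor_q = np.exp(rng.normal(2.0, 0.8, months.size))
        print(transfer_flows(donor_q, months, np.full(12, 1.5), np.full(12, 0.6))[:5])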

  7. Comparison of Methods for Computing the Exchange Energy of quantum helium and hydrogen

    International Nuclear Information System (INIS)

    Cayao, J. L. C. D.

    2009-01-01

    I investigate approximation methods for finding the exchange energy of quantum helium and hydrogen. I focus on the Heitler-London, Hund-Mulliken, molecular orbital and variational methods. I use Fock-Darwin states centered at the potential minima as the single-electron wavefunctions. Using these we build Slater determinants as the basis for the two-electron problem. I compare the methods for a two-electron double dot (quantum hydrogen) and a two-electron single dot (quantum helium) in zero and finite magnetic field. I show that the variational, Hund-Mulliken and Heitler-London methods are in agreement with the exact solutions. I also show that the Heitler-London (HL) exchange-energy calculation is an excellent approximation for large inter-dot distances, while for a single dot in a magnetic field the variational method is an excellent approximation. (author)
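
    For reference, the textbook Heitler-London singlet/triplet energies, from which the exchange splitting follows, can be written as (standard HL expressions, not quoted from the record):

        E_{\pm} = \frac{Q \pm J}{1 \pm S^{2}}, \qquad
        \Delta E_{\mathrm{exch}} = E_{+} - E_{-} = \frac{2\,(J - Q\,S^{2})}{1 - S^{4}},

    where Q is the direct (Coulomb) integral, J the exchange integral, and S the overlap between the two localized orbitals; the splitting shrinks with inter-dot distance as S, J and Q decay, consistent with HL working best at large separations.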

  8. Technical and economic comparison of irradiation and conventional methods

    International Nuclear Information System (INIS)

    1988-04-01

    Radiation processing is based on the use of ionizing radiation (gamma radiation, high-energy electrons) as a source of energy in different industrial applications. Over about 30 years of steady growth it has become almost routine in a number of industrial processes. The growth resulted, among other things, from the availability of radiation sources matching the requirements of industrial production and from the wide basis of fundamental and applied radiation research. Most important, however, was the fact that radiation processing proved in practice to be safe, reliable, easy to control, and economical. In comparison with competitive techniques, it also offered a better alternative from the viewpoint of occupational safety and environmental considerations. In order to review the current status of comparative analysis of radiation and competitive techniques from the technological and economic viewpoints, the Agency convened an Advisory Group in Dubrovnik, Yugoslavia, 6-8 October 1986. The present publication is based on contributions presented at the meeting, discussions held at the meeting, and subsequent editing work. It is expected that this updated technological and economic comparative analysis will provide useful information for all those who are implementing or considering the introduction of radiation processing to industry. The presented data may be essential for decision makers and could contribute to feasibility and pre-investment studies. As such, this publication is expected to promote industrially oriented projects based on radiation processing techniques. The actual figures given in the individual reports are as accurate as possible. However, it should be understood that the time factor and local conditions may play a significant role and corresponding adjustment may be required. Refs, figs and tabs

  9. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    Science.gov (United States)

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods but different geologic models), between results from structural and stratigraphic assessment units in the North Sea used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered), (2) the population of fields being estimated; that is, the entire parent distribution or the undiscovered resource distribution, (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth, (5) deterministic or probabilistic models, (6) data requirements, and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods

  10. Comparison of immersion density and improved microstructural characterization methods for measuring swelling in small irradiated disk specimens

    International Nuclear Information System (INIS)

    Sawai, T.; Suzuki, M.; Hishinuma, A.; Maziasz, P.J.

    1992-01-01

    The procedure of obtaining microstructural data from reactor-irradiated specimens has been carefully checked for accuracy by comparison of swelling data obtained from transmission electron microscopy (TEM) observations of cavities with density-change data measured using the Precision Densitometer at Oak Ridge National Laboratory (ORNL). Comparison of data measured by both methods on duplicate or, in some cases, on the same specimen has shown some appreciable discrepancies for US/Japan collaborative experiments irradiated in the High Flux Isotope Reactor (HFIR). The contamination spot separation (CSS) method was used in the past to determine the thickness of a TEM foil. Recent work has revealed an appreciable error in this method that can result in an overestimation of the foil thickness. This error causes lower swelling values to be measured by TEM microstructural observation relative to the Precision Densitometer. An improved method is proposed for determining the foil thickness by the CSS method, which includes a correction for the systematic overestimation of foil thickness. (orig.)
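
    For context, the two estimates rest on standard relations (not quoted from the report). Immersion density gives swelling as

        \frac{\Delta V}{V_{0}} = \frac{\rho_{0} - \rho}{\rho},

    with \rho_{0} the unirradiated and \rho the irradiated density, while TEM estimates swelling from the summed cavity volume V_c inside an analyzed foil volume V_f as S = V_c / (V_f - V_c). An overestimated foil thickness inflates V_f and therefore understates S, which is consistent with the direction of the discrepancy described above.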

  11. Differential impact of fathers' authoritarian parenting on early adolescent adjustment in conservative protestant versus other families.

    Science.gov (United States)

    Gunnoe, Marjorie Lindner; Hetherington, E Mavis; Reiss, David

    2006-12-01

    The purpose of the study was to determine whether well-established associations between authoritarian parenting and adolescent adjustment pertain to conservative Protestant (CP) families. Structural equation modeling was used to test paths from biological fathers' authoritarian parenting to adolescent adjustment in 65 CP versus 170 comparison families in the Nonshared Environment and Adolescent Development Study (NEAD; D. Reiss et al., 1994). The hypothesis that adolescents in CP families would be less harmed by authoritarian parenting than would adolescents in control families was partially supported: Authoritarian parenting directly predicted greater externalizing and internalizing for adolescents in control families but not for adolescents in CP families. In contrast, parents' religious affiliation failed to moderate the negative associations between authoritarian parenting and positive adjustment. Understanding family processes specific to the CP subculture is important for helping these families raise competent children. (c) 2006 APA, all rights reserved.

  12. Comparison of DNA extraction methods for detection of citrus huanglongbing in Colombia

    Directory of Open Access Journals (Sweden)

    Jorge Evelio Ángel

    2014-04-01

    Full Text Available Four DNA extraction protocols for citrus plant tissue and three DNA extraction methods for the vector psyllid Diaphorina citri Kuwayama (Hemiptera: Psyllidae) were compared as part of the validation and standardization process for detection of huanglongbing (HLB). The comparison was based on several criteria, such as integrity, purity and concentration. For DNA extraction from the midrib tissue of citrus plants, the best quality parameters were obtained with the protocols of Murray and Thompson (1980) and Rodríguez et al. (2010), while for DNA extraction from psyllid vectors of HLB, the best method was that suggested by Manjunath et al. (2008).

  13. Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.

    Science.gov (United States)

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished through participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time at IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracing. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than when applying the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency-tracing method was a standard Co-60 solution. The sources were prepared from the mixed Co-60 + Tc-99 solution, and a general extrapolation curve of the type N_β(Tc-99)/M(Tc-99) = f[1 − ε(Co-60)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  14. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    Full Text Available In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. An exhaustive manual selection process, however, inevitably wastes a lot of time. Therefore, in this paper, we propose a new intelligent adjustment method, which adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters and obtain the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet application requirements.
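
    The record gives no implementation details, so the following is a compact sketch of a generic artificial bee colony loop applied to a waveform-tuning objective. The objective, bounds, and the parameter interpretation (dwell time, rise time, voltage) are hypothetical stand-ins for scores measured on a real printhead.

        # Generic artificial-bee-colony sketch for waveform parameter tuning.
        import numpy as np

        rng = np.random.default_rng(42)
        lo = np.array([1.0, 0.5, 5.0]); hi = np.array([40.0, 10.0, 60.0])
        n_food, dim, limit, iters = 20, 3, 15, 200

        def objective(p):
            # Hypothetical stand-in: peak at assumed "ideal" waveform parameters.
            return -np.sum((p - np.array([12.0, 3.0, 25.0])) ** 2)

        food = rng.uniform(lo, hi, (n_food, dim))
        fit = np.array([objective(f) for f in food])
        trials = np.zeros(n_food, dtype=int)

        def local_search(i):
            """Employed/onlooker move: perturb one dimension toward a random peer."""
            k = rng.integers(n_food - 1); k += (k >= i)      # partner != i
            j = rng.integers(dim)
            cand = food[i].copy()
            cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
            cand = np.clip(cand, lo, hi)
            if (f := objective(cand)) > fit[i]:
                food[i], fit[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_food):                          # employed bees
                local_search(i)
            p = fit - fit.min() + 1e-9; p = p / p.sum()      # onlooker selection
            for i in rng.choice(n_food, size=n_food, p=p):
                local_search(i)
            for i in np.where(trials > limit)[0]:            # scouts re-initialize
                food[i] = rng.uniform(lo, hi); fit[i] = objective(food[i]); trials[i] = 0

        print("best parameters:", food[np.argmax(fit)])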

  15. Results of an interlaboratory comparison of analytical methods for contaminants of emerging concern in water.

    Science.gov (United States)

    Vanderford, Brett J; Drewes, Jörg E; Eaton, Andrew; Guo, Yingbo C; Haghani, Ali; Hoppe-Jones, Christiane; Schluesener, Michael P; Snyder, Shane A; Ternes, Thomas; Wood, Curtis J

    2014-01-07

    An evaluation of existing analytical methods used to measure contaminants of emerging concern (CECs) was performed through an interlaboratory comparison involving 25 research and commercial laboratories. In total, 52 methods were used in the single-blind study to determine method accuracy and comparability for 22 target compounds, including pharmaceuticals, personal care products, and steroid hormones, all at ng/L levels in surface and drinking water. Method biases varied widely among laboratories, and caffeine, NP, OP, and triclosan had false positive rates >15%. In addition, some methods reported false positives for 17β-estradiol and 17α-ethynylestradiol in unspiked drinking water and deionized water, respectively, at levels higher than published predicted no-effect concentrations for these compounds in the environment. Reported false positive and false negative errors were generally attributed to contamination, misinterpretation of background interferences, and/or inappropriate setting of detection/quantification levels for analysis at low ng/L levels. The results of both comparisons were collectively assessed to identify parameters that resulted in the best overall method performance. Liquid chromatography-tandem mass spectrometry coupled with the calibration technique of isotope dilution was able to accurately quantify most compounds, with an average bias of <10% for both matrixes. These findings suggest that this method of analysis is suitable at environmentally relevant levels for most of the compounds studied. This work underscores the need for robust, standardized analytical methods for CECs to improve data quality, increase comparability between studies, and help reduce false positive and false negative rates.

  16. Comparison of the auxiliary function method and the discrete-ordinate method for solving the radiative transfer equation for light scattering.

    Science.gov (United States)

    da Silva, Anabela; Elias, Mady; Andraud, Christine; Lafait, Jacques

    2003-12-01

    Two methods for solving the radiative transfer equation are compared with the aim of computing the angular distribution of the light scattered by a heterogeneous scattering medium composed of a single flat layer or a multilayer. The first method [auxiliary function method (AFM)], recently developed, uses an auxiliary function and leads to an exact solution; the second [discrete-ordinate method (DOM)] is based on the channel concept and needs an angular discretization. The comparison is applied to two different media presenting two typical and extreme scattering behaviors: Rayleigh and Mie scattering with smooth or very anisotropic phase functions, respectively. A very good agreement between the predictions of the two methods is observed in both cases. The larger the number of channels used in the DOM, the better the agreement. The principal advantages and limitations of each method are also listed.

  17. A comparison of ancestral state reconstruction methods for quantitative characters.

    Science.gov (United States)

    Royer-Carenzi, Manuela; Didier, Gilles

    2016-09-07

    Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Adjustable Pitot Probe

    Science.gov (United States)

    Ashby, George C., Jr.; Robbins, W. Eugene; Horsley, Lewis A.

    1991-01-01

    Probe readily positionable in core of uniform flow in hypersonic wind tunnel. Formed of pair of mating cylindrical housings: transducer housing and pitot-tube housing. Pitot tube supported by adjustable wedge fairing attached to top of pitot-tube housing with semicircular foot. Probe adjusted both radially and circumferentially. In addition, pressure-sensing transducer cooled internally by water or other cooling fluid passing through annulus of cooling system.

  19. Adjustment problems and residential care environment

    OpenAIRE

    Jan Sebastian Novotný

    2015-01-01

    Problem: The residential care environment represents a specific social space that is associated with a number of negative consequences, covering most aspects of children's and youths' functioning. The paper analyzes the presence of adjustment problems among adolescents from an institutional care environment and compares the results with a population of adolescents who grew up in a family. Methods: The sample consisted of two groups of adolescents. The first group included 285 adolescents currently g...

  20. Comparison of Chemical Extraction Methods for Determination of Soil Potassium in Different Soil Types

    Science.gov (United States)

    Zebec, V.; Rastija, D.; Lončarić, Z.; Bensa, A.; Popović, B.; Ivezić, V.

    2017-12-01

    Determining the potassium supply of soil plays an important role in intensive crop production, since it is the basis for balancing nutrients and issuing fertilizer recommendations for achieving high and stable yields within economic feasibility. The aim of this study was to compare different methods of extracting soil potassium from the arable horizon of different soil types with the ammonium lactate method (KAL), which is frequently used as an analytical method for determining the accessibility of nutrients and is a common basis for fertilizer recommendations in many European countries. In addition to the ammonium lactate method (KAL, pH 3.75), potassium was extracted with ammonium acetate (KAA, pH 7), ammonium acetate ethylenediaminetetraacetic acid (KAAEDTA, pH 4.6), Bray (KBRAY, pH 2.6) and barium chloride (KBaCl2, pH 8.1). The analyzed soils were extremely heterogeneous, with a wide range of determined values. Soil pH (in H2O) ranged from 4.77 to 8.75, organic matter content from 1.87 to 4.94% and clay content from 8.03 to 37.07%. Relative to the KAL standard method, the KBaCl2 method extracts on average 12.9% more soil potassium, while KAA extracts 5.3%, KAAEDTA 10.3%, and KBRAY 27.5% less. Comparison of the analyzed extraction methods is highly precise; the most reliable comparison with the KAL method was obtained for KAAEDTA, followed by KAA, KBaCl2 and KBRAY. The extremely significant statistical correlations between the different extraction methods indicate that any of the methods can be used to accurately predict the concentration of potassium in the soil, and that the research carried out can be used to create models predicting potassium concentration from the different extraction methods.

  1. Accuracy of two face-bow/semi-adjustable articulator systems in transferring the maxillary occlusal cant.

    Science.gov (United States)

    Nazir, Nazia; Sujesh, M; Kumar, Ravi; Sreenivas, P

    2012-01-01

    The precision of an arbitrary face-bow in accurately transferring the orientation of the maxillary cast to the articulator has been questioned, because the maxillary cast is mounted in relation to arbitrary measurements and anatomic landmarks that vary among individuals. This study was intended to evaluate the sagittal inclination of mounted maxillary casts on two semi-adjustable articulator/face-bow systems in comparison to the occlusal cant on lateral cephalograms. Maxillary casts were mounted on the Hanau and Girrbach semi-adjustable articulators following face-bow transfer with their respective face-bows. The sagittal inclination of these casts was measured in relation to a fixed horizontal reference plane using physical measurements. The occlusal cant was measured on lateral cephalograms. SPSS software (version 11.0, Chicago, IL, USA) was used for statistical analysis, and repeated measures analysis of variance and Tukey's tests were used to evaluate the results. Comparison of the occlusal cant on the articulators and the cephalogram revealed statistically significant differences. The occlusal plane was steeper on the Girrbach Artex articulator than on the Hanau articulator. Within the limitations of this study, it was found that the sagittal inclination of the mounted maxillary cast achieved with the Hanau articulator was closer to the cephalometric occlusal cant than that of the Girrbach articulator. Between the two articulator and face-bow systems, the steepness of the sagittal inclination was greater on the Girrbach semi-adjustable articulator. Different face-bow/articulator systems could result in different orientations of the maxillary cast, resulting in variation in stability, cuspal inclines and cuspal heights.

  2. Diagnostic specificity of poor premorbid adjustment: comparison of schizophrenia, schizoaffective disorder, and mood disorder with psychotic features.

    Science.gov (United States)

    Tarbox, Sarah I; Brown, Leslie H; Haas, Gretchen L

    2012-10-01

    Individuals with schizophrenia have significant deficits in premorbid social and academic adjustment compared to individuals with non-psychotic diagnoses. However, it is unclear how severity and developmental trajectory of premorbid maladjustment compare across psychotic disorders. This study examined the association between premorbid functioning (in childhood, early adolescence, and late adolescence) and psychotic disorder diagnosis in a first-episode sample of 105 individuals: schizophrenia (n=68), schizoaffective disorder (n=22), and mood disorder with psychotic features (n=15). Social and academic maladjustment was assessed using the Cannon-Spoor Premorbid Adjustment Scale. Worse social functioning in late adolescence was associated with higher odds of schizophrenia compared to odds of either schizoaffective disorder or mood disorder with psychotic features, independently of child and early adolescent maladjustment. Greater social dysfunction in childhood was associated with higher odds of schizoaffective disorder compared to odds of schizophrenia. Premorbid decline in academic adjustment was observed for all groups, but did not predict diagnosis at any stage of development. Results suggest that social functioning is disrupted in the premorbid phase of both schizophrenia and schizoaffective disorder, but remains fairly stable in mood disorders with psychotic features. Disparities in the onset and time course of social dysfunction suggest important developmental differences between schizophrenia and schizoaffective disorder. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Social and Psychological Adjustment in Foster Care Alumni: Education and Employment

    OpenAIRE

    Archakova T.O.

    2015-01-01

    The article analyses issues in the social and psychological adjustment of young adults who grew up in foster families. The psychological and socio-pedagogical factors facilitating professional education, successful employment and financial independence are emphasized. The methods and results of several large simple-design studies of adjustment in foster care alumni, conducted in the USA, are described. Recommendations for services and specialists working with young adults leaving state care are provided.

  4. Application of adjusted subpixel method (ASM) in HRCT measurements of the bronchi in bronchial asthma patients and healthy individuals

    International Nuclear Information System (INIS)

    Mincewicz, Grzegorz; Rumiński, Jacek; Krzykowski, Grzegorz

    2012-01-01

    Background: Recently, we described a model system which included corrections of high-resolution computed tomography (HRCT) bronchial measurements based on the adjusted subpixel method (ASM). Objective: To verify the clinical application of ASM by comparing bronchial measurements obtained by means of the traditional eye-driven method, subpixel method alone and ASM in a group comprised of bronchial asthma patients and healthy individuals. Methods: The study included 30 bronchial asthma patients and the control group comprised of 20 volunteers with no symptoms of asthma. The lowest internal and external diameters of the bronchial cross-sections (ID and ED) and their derivative parameters were determined in HRCT scans using: (1) traditional eye-driven method, (2) subpixel technique, and (3) ASM. Results: In the case of the eye-driven method, lower ID values along with lower bronchial lumen area and its percentage ratio to total bronchial area were basic parameters that differed between asthma patients and healthy controls. In the case of the subpixel method and ASM, both groups were not significantly different in terms of ID. Significant differences were observed in values of ED and total bronchial area with both parameters being significantly higher in asthma patients. Compared to ASM, the eye-driven method overstated the values of ID and ED by about 30% and 10% respectively, while understating bronchial wall thickness by about 18%. Conclusions: Results obtained in this study suggest that the traditional eye-driven method of HRCT-based measurement of bronchial tree components probably overstates the degree of bronchial patency in asthma patients.

  5. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search are not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score over the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as references. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
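
    A minimal sketch of the Monte-Carlo Z-score idea discussed above: shuffle one sequence to build a null score distribution and standardize the true score against it. The scorer here is a toy match counter standing in for a real Smith-Waterman implementation.

        # Monte-Carlo Z-score sketch for a pairwise sequence comparison score.
        import random

        def score(a, b):
            # Toy scorer (NOT Smith-Waterman): matches at aligned positions.
            return sum(x == y for x, y in zip(a, b))

        def z_score(a, b, n_shuffles=200, seed=0):
            rnd = random.Random(seed)
            s_true = score(a, b)
            null = []
            for _ in range(n_shuffles):
                shuffled = list(b)
                rnd.shuffle(shuffled)           # preserves composition, not order
                null.append(score(a, "".join(shuffled)))
            mu = sum(null) / len(null)
            sd = (sum((s - mu) ** 2 for s in null) / (len(null) - 1)) ** 0.5
            return (s_true - mu) / sd

        print(z_score("ACDEFGHIKLMNPQRS", "ACDEFGHIKLMNPQRT"))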

  6. Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods

    CSIR Research Space (South Africa)

    Snedden, Glen C

    2003-09-01

    Full Text Available (ISABE-2005-1130; nomenclature: Taw, adiabatic wall temperature; y+, near-wall Reynolds number) In order to calculate the life degradation of gas turbine disc assemblies, it is necessary to model the transient thermal and mechanical...

  7. Comparison of computer code calculations with FEBA test data

    International Nuclear Information System (INIS)

    Zhu, Y.M.

    1988-06-01

    The FEBA forced-feed reflood experiments included baseline tests with unblocked geometry. The experiments consisted of separate-effect tests on a full-length 5x5 rod bundle. Experimental cladding temperatures and heat transfer coefficients of FEBA test No. 216 are compared with the analytical data postcalculated using the SSYST-3 computer code. The comparison indicates satisfactory agreement in the peak cladding temperatures, quench times and heat transfer coefficients for nearly all axial positions. This agreement was made possible by the use of an artificially adjusted value of an empirical code input parameter in the heat transfer model for the dispersed flow regime. A limited comparison of test data with calculations using the RELAP4/MOD6 transient analysis code is also included. In this case the input data for the water entrainment fraction and the liquid weighting factor in the heat transfer model for the dispersed flow regime were adjusted to match the experimental data. On the other hand, no fitting of the input parameters was made for the COBRA-TF calculations, which are also included in the data comparison. (orig.) [de]

  8. Evaluation of an Advanced Harmonic Filter for Adjustable Speed Drives using a Toolbox Approach

    DEFF Research Database (Denmark)

    Asiminoaei, Lucian; Hansen, Steffan; Blaabjerg, Frede

    2004-01-01

    A large diversity of solutions exists to reduce the harmonic emission of the 6-pulse Adjustable Speed Drive in order to fulfill the requirements of the international harmonic standards. Among them, new types of advanced harmonic filters have recently gained increased attention due to their good... Using a combination of a pre-stored database and new interpolation techniques, the toolbox can provide the harmonic data on real applications, allowing comparisons between different mitigation solutions.

  9. A risk adjustment approach to estimating the burden of skin disease in the United States.

    Science.gov (United States)

    Lim, Henry W; Collins, Scott A B; Resneck, Jack S; Bolognia, Jean; Hodge, Julie A; Rohrer, Thomas A; Van Beek, Marta J; Margolis, David J; Sober, Arthur J; Weinstock, Martin A; Nerenz, David R; Begolka, Wendy Smith; Moyano, Jose V

    2018-01-01

    Direct insurance claims tabulation and risk adjustment statistical methods can be used to estimate health care costs associated with various diseases. In this third manuscript derived from the new national Burden of Skin Disease Report from the American Academy of Dermatology, a risk adjustment method that was based on modeling the average annual costs of individuals with or without specific diseases, and specifically tailored for 24 skin disease categories, was used to estimate the economic burden of skin disease. The results were compared with the claims tabulation method used in the first 2 parts of this project. The risk adjustment method estimated the direct health care costs of skin diseases to be $46 billion in 2013, approximately $15 billion less than estimates using claims tabulation. For individual skin diseases, the risk adjustment cost estimates ranged from 11% to 297% of those obtained using claims tabulation for the 10 most costly skin disease categories. Although either method may be used for purposes of estimating the costs of skin disease, the choice of method will affect the end result. These findings serve as an important reference for future discussions about the method chosen in health care payment models to estimate both the cost of skin disease and the potential cost impact of care changes. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
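
    A minimal sketch of the risk-adjustment idea as described (modeling average annual costs of individuals with or without a disease): regress cost on a disease indicator plus covariates, then scale the adjusted incremental cost by the number of affected individuals. All data and coefficients below are synthetic.

        # Regression-based attributable-cost sketch with synthetic data.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 50_000
        age = rng.uniform(20, 80, n)
        has_disease = rng.random(n) < 0.05
        cost = 500 + 30 * age + 900 * has_disease + rng.normal(0, 400, n)

        X = np.column_stack([np.ones(n), age, has_disease.astype(float)])
        beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
        incremental = beta[2]                   # adjusted annual cost per case
        print(f"estimated burden: ${incremental * has_disease.sum():,.0f} per year")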

  10. Comparison of measurement methods for benzene and toluene

    Science.gov (United States)

    Wideqvist, U.; Vesely, V.; Johansson, C.; Potter, A.; Brorström-Lundén, E.; Sjöberg, K.; Jonsson, T.

    Diffusive sampling and active (pumped) sampling (tubes filled with Tenax TA or Carbopack B) were compared with an automatic BTX instrument (Chrompack, GC/FID) for measurements of benzene and toluene. The measurements were made during differing pollution levels and different weather conditions at a roof-top site and in a densely trafficked street canyon in Stockholm, Sweden. The BTX instrument was used as the reference method for comparison with the other methods. Considering all data, the Perkin-Elmer diffusive samplers, containing Tenax TA and assuming a constant uptake rate of 0.406 cm³ min⁻¹, showed about 30% higher benzene values compared to the BTX instrument. This discrepancy may be explained by a dose-dependent uptake rate, with higher uptake rates at lower dose, as suggested by laboratory experiments presented in the literature. After correction by applying the relationship between uptake rate and dose suggested by Roche et al. (Atmos. Environ. 33 (1999) 1905), the two methods agreed almost perfectly. For toluene there was much better agreement between the two methods. No sign of a dose-dependent uptake could be seen. The mean concentrations and 95% confidence intervals of all toluene measurements (67 values) were (10.80±1.6) μg m⁻³ for diffusive sampling and (11.3±1.6) μg m⁻³ for the BTX instrument, respectively. The overall ratio between the concentrations obtained using diffusive sampling and the BTX instrument was 0.91±0.07 (95% confidence interval). Tenax TA was found to be equal to Carbopack B for measuring benzene and toluene in this concentration range, although it has been proposed not to be optimal for benzene. There was also good agreement between the active samplers and the BTX instrument.

  11. Rapid descriptive sensory methods – Comparison of Free Multiple Sorting, Partial Napping, Napping, Flash Profiling and conventional profiling

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Meinert, Lene

    2012-01-01

    Two new rapid descriptive sensory evaluation methods are introduced to the field of food sensory evaluation. The first method, free multiple sorting, allows subjects to perform ad libitum free sortings, until they feel that no more relevant dissimilarities among products remain. The second method is a modal restriction of Napping to specific sensory modalities, directing sensation and still allowing a holistic approach to products. The new methods are compared to Flash Profiling, Napping and conventional descriptive sensory profiling. Evaluations are performed by several panels of expert assessors, and … are applied for the graphical validation and comparisons; this allows similar comparisons and is applicable to single-block evaluation designs such as Napping. The partial Napping allows repetitions on multiple sensory modalities, e.g. appearance, taste and mouthfeel, and shows the average …

  12. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    For the prediction of future events there exist a number of methods usable in managerial practice. The decision on which of them should be used in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. This paper performs a practical application and subsequent comparison of the results of two selected methods: a statistical method and a deductive method. Both methods were used for predicting wine exports from, and imports to, the Czech Republic. The prediction was made in 2003 and related to the economic years 2003/2004, 2004/2005, 2005/2006 and 2006/2007, within which it was compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterised, including what the authors considered the most important influence: the integration of the Czech Republic into the EU on 1 May 2004. By contrast, the statistical method of time-series analysis did not take the integration into account, which follows from its principle: statistics only calculates from data of the past and cannot incorporate the influence of irregular future conditions such as EU integration. For this reason the prediction based on the deductive method was more optimistic and more precise, in terms of its difference from the real development in the given field.
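
    The statistical side of such a comparison can be seen in a minimal sketch: a linear trend fitted to past yearly figures and extrapolated forward, which by construction cannot anticipate a structural break such as EU accession. All figures are invented.

    ```python
    import numpy as np

    # Illustrative linear-trend extrapolation of the kind a time-series method
    # performs: it projects past data forward and cannot foresee a structural
    # break such as EU accession. Figures are invented.
    years = np.arange(1997, 2003)                             # observed years
    exports = np.array([41.0, 44.5, 47.2, 50.1, 52.8, 55.0])  # e.g. thousand hl

    slope, intercept = np.polyfit(years, exports, 1)          # least-squares trend
    for target in range(2003, 2007):
        print(f"{target}/{target + 1}: {slope * target + intercept:.1f}")
    ```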

  13. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes improvement and comparison of analytical methods for simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate and iron hydroxide co-precipitation, … it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of different pre-concentration techniques proposed previously in the literature, the main challenge behind relevant method development is pointed out to be the release …

  14. Living with Moebius syndrome: adjustment, social competence, and satisfaction with life.

    Science.gov (United States)

    Bogart, Kathleen Rives; Matsumoto, David

    2010-03-01

    Moebius syndrome is a rare congenital condition that results in bilateral facial paralysis. Several studies have reported social interaction and adjustment problems in people with Moebius syndrome and other facial movement disorders, presumably resulting from the lack of facial expression. The objective was to determine whether adults with Moebius syndrome experience increased anxiety and depression and/or decreased social competence and satisfaction with life compared with people without facial movement disorders. The design was an Internet-based quasi-experimental study with a comparison group. Participants were 37 adults with Moebius syndrome, recruited through the United States-based Moebius Syndrome Foundation newsletter and Web site, and 37 age- and gender-matched control participants recruited through a university participant database. Measures covered anxiety and depression, social competence, satisfaction with life, the ability to express emotion facially, and questions about Moebius syndrome symptoms. People with Moebius syndrome reported significantly lower social competence than the matched control group and normative data, but did not differ significantly from the control group or norms in anxiety, depression, or satisfaction with life. In people with Moebius syndrome, the degree of facial expression impairment was not significantly related to the adjustment variables. Many people with Moebius syndrome are better adjusted than previous research suggests, despite their difficulties with social interaction. To enhance interaction, people with Moebius syndrome could compensate for the lack of facial expression with alternative expressive channels.

  15. Reduction in the Number of Comparisons Required to Create Matrix of Expert Judgment in the Comet Method

    Directory of Open Access Journals (Sweden)

    Sałabun Wojciech

    2014-09-01

    Multi-criteria decision-making (MCDM) methods are associated with the ranking of alternatives based on expert judgments made using a number of criteria. In the MCDM field, the distance-based approach is one popular way of obtaining a final ranking. One of the newest MCDM methods using the distance-based approach is the Characteristic Objects Method (COMET). In this method, the preference of each alternative is obtained on the basis of the distance from the nearest characteristic objects and their values. For this purpose, the domain and the set of fuzzy numbers for all the considered criteria are determined. The characteristic objects are obtained as combinations of the crisp values of all the fuzzy numbers. The preference values of all the characteristic objects are determined based on the tournament method and the principle of indifference. Finally, the fuzzy model is constructed and used to calculate the preference values of the alternatives. In this way, a multi-criteria model is created that is free of the rank reversal phenomenon. In this approach, a matrix of expert judgment has to be created; for this purpose, an expert has to compare all the characteristic objects with each other. The number of necessary comparisons grows quadratically with the number of objects. This study proposes an improvement of the COMET method that uses the transitivity of pairwise comparisons. Three numerical examples are used to illustrate the efficiency of the proposed improvement with respect to results from the original approach. The proposed improvement significantly reduces the number of comparisons necessary to create the matrix of expert judgment.
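
    The idea of exploiting transitivity can be sketched as follows: before asking the expert about a pair of characteristic objects, check whether the answer is already implied by earlier judgments. This is a toy illustration of the principle, not the algorithm from the paper.

    ```python
    from itertools import combinations

    def expert_judges(a, b):
        """Stand-in for the human expert; here, objects are comparable numbers."""
        return a > b

    def rank_with_transitivity(objects):
        beats = {o: set() for o in objects}   # beats[x] = objects x is known to beat
        asked = 0
        for a, b in combinations(objects, 2):
            if b in beats[a] or a in beats[b]:
                continue                       # already deducible, skip the expert
            asked += 1
            winner, loser = (a, b) if expert_judges(a, b) else (b, a)
            # Keep the transitive closure: the winner, and everyone known to beat
            # the winner, now also beats the loser and everything the loser beats.
            dominated = {loser} | beats[loser]
            for x in [winner] + [y for y in objects if winner in beats[y]]:
                beats[x] |= dominated
        order = sorted(objects, key=lambda o: len(beats[o]), reverse=True)
        return order, asked

    objs = [3, 9, 1, 7, 5, 8, 2]
    order, asked = rank_with_transitivity(objs)
    print(order, f"expert asked {asked} of {len(objs) * (len(objs) - 1) // 2} pairs")
    ```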

  16. Siblings of children with life-limiting conditions: psychological adjustment and sibling relationships.

    Science.gov (United States)

    Fullerton, J M; Totsika, V; Hain, R; Hastings, R P

    2017-05-01

    This study explored psychological adjustment and sibling relationships of siblings of children with life-limiting conditions (LLCs), expanding on previous research by defining LLCs using a systematic classification of these conditions. Thirty-nine siblings participated, aged 3-16 years. Parents completed measures of siblings' emotional and behavioural difficulties, quality of life, sibling relationships and the impact on families and siblings. Sibling and family adjustment and relationships were compared with population norms, where available, and with a matched comparison group of siblings of children with autistic spectrum disorder (ASD), as a comparable 'high risk' group. LLC siblings presented significantly higher levels of emotional and behavioural difficulties, and lower quality of life, than population norms. Their difficulties were at levels comparable to those of siblings of children with ASD. A wider impact on the family was confirmed. Family socio-economic position, time since diagnosis, employment and access to hospice care were factors associated with better psychological adjustment. Using a systematic classification of LLCs, the study supported earlier findings of increased levels of psychological difficulties in siblings of children with an LLC. The evidence (i) highlights the need to provide support to these siblings and their families, and (ii) suggests that intervention approaches could be drawn from the ASD field.

  17. Marital adjustment of patients with substance dependence, schizophrenia and bipolar affective disorder

    Directory of Open Access Journals (Sweden)

    Shital S Muke

    2014-01-01

    Background: Marital adjustment is considered a part of social well-being. A disturbed marital relationship can directly affect disease adjustment and the way patients face disease outcomes and complications. It may adversely affect physical health, mental health, quality of life and even the economic status of individuals. Aim: The aim of this study was to compare marital adjustment among patients with substance dependence, schizophrenia and bipolar affective disorder. Materials and Methods: The sample consisted of 30 patients each with substance dependence, bipolar affective disorder and schizophrenia, diagnosed as per the International Classification of Diseases-10 diagnostic criteria for research, with a minimum illness duration of 1 year; they were evaluated using the marital adjustment questionnaire. The data were analyzed using parametric and non-parametric statistics. Results: The prevalence of poor marital adjustment in patients with schizophrenia, bipolar affective disorder and substance dependence was 60%, 70% and 50%, respectively. There was a significant difference in overall marital adjustment between substance dependence and bipolar affective disorder patients. There was no significant difference in overall marital adjustment between patients with substance dependence and schizophrenia, or between patients with schizophrenia and bipolar affective disorder. On the marital adjustment domains, schizophrenia patients had significantly poorer sexual adjustment than substance dependence patients, while bipolar affective disorder patients had significantly poorer sexual and social adjustment than substance dependence patients. Conclusion: Patients with substance dependence have significantly better overall marital adjustment than bipolar affective disorder patients. Patients with substance dependence have significantly better social and sexual adjustment than patients with bipolar affective disorder, as well as significantly better sexual adjustment than patients with schizophrenia.

  18. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the image I_AC^μb reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function, followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
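
    A minimal numerical sketch of the scatter-estimation step as the abstract describes it is given below: the attenuation-corrected image is convolved with a scatter function and scaled by a scatter fraction before subtraction. The Gaussian kernel and the constant fraction are placeholders; the actual method uses a fitted scatter function and an image-based scatter fraction function.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ibsc_correct(img_ac: np.ndarray, kernel_sigma_px: float = 4.0,
                     scatter_fraction: float = 0.3) -> np.ndarray:
        # Estimate the scatter component by blurring the attenuation-corrected
        # image, scale it, and subtract; clip so counts stay non-negative.
        scatter_est = scatter_fraction * gaussian_filter(img_ac, kernel_sigma_px)
        return np.clip(img_ac - scatter_est, 0.0, None)

    # Toy "brain slice": a bright disc on a dim background.
    yy, xx = np.mgrid[:128, :128]
    img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2, 100.0, 5.0)
    corrected = ibsc_correct(img)
    print(f"total counts before: {img.sum():.0f}, after: {corrected.sum():.0f}")
    ```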

  19. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the image I_AC^μb reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function, followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  20. Method for a national tariff comparison for natural gas, electricity and heat. Set-up and presentation

    International Nuclear Information System (INIS)

    1998-05-01

    Several groups, within distribution companies and outside them, need information and data on energy tariffs. In the opinion of the ad hoc working group, a comparison of tariffs on the basis of standard cases is the most practical method to meet the information demand of all parties involved. Such standard cases are formulated and presented for the prices of electricity, natural gas and heat, including the consumption parameters applied. A comparison of such tariffs must be made periodically.
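
    A standard-case comparison of this kind reduces to computing annual bills for fixed consumption profiles under each tariff, as in the toy sketch below; the tariff structures and figures are invented for illustration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Tariff:
        name: str
        standing_charge: float      # currency units per year
        unit_price: float           # per kWh (electricity) or per m3 (gas)

    def annual_cost(tariff: Tariff, consumption: float) -> float:
        return tariff.standing_charge + tariff.unit_price * consumption

    # Standard cases: fixed yearly consumption profiles (invented figures).
    STANDARD_CASES = {"small household": 2_500, "large household": 4_500}  # kWh/yr
    tariffs = [Tariff("Utility A", 60.0, 0.22), Tariff("Utility B", 95.0, 0.20)]

    for case, kwh in STANDARD_CASES.items():
        costs = {t.name: round(annual_cost(t, kwh), 2) for t in tariffs}
        print(case, costs)
    ```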