WorldWideScience

Sample records for model testing systematic

  1. Testing Scientific Software: A Systematic Literature Review

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798

  2. Testing Scientific Software: A Systematic Literature Review.

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.

  3. Testing flow diversion in animal models: a systematic review.

    Science.gov (United States)

    Fahed, Robert; Raymond, Jean; Ducroux, Célina; Gentric, Jean-Christophe; Salazkin, Igor; Ziegler, Daniela; Gevry, Guylaine; Darsaut, Tim E

    2016-04-01

    Flow diversion (FD) is increasingly used to treat intracranial aneurysms. We sought to systematically review published studies to assess the quality of reporting and summarize the results of FD in various animal models. Databases were searched to retrieve all animal studies on FD from 2000 to 2015. Extracted data included species and aneurysm models, aneurysm and neck dimensions, type of flow diverter, occlusion rates, and complications. Articles were evaluated using a checklist derived from the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Forty-two articles reporting the results of FD in nine different aneurysm models were included. The rabbit elastase-induced aneurysm model was the most commonly used, with 3-month occlusion rates of 73.5% (95% CI [61.9-82.6%]). FD of surgical sidewall aneurysms, constructed in rabbits or canines, resulted in high occlusion rates (100% [65.5-100%]). FD resulted in modest occlusion rates (15.4% [8.9-25.1%]) when tested in six complex canine aneurysm models designed to reproduce more difficult clinical contexts (large necks, bifurcation, or fusiform aneurysms). Adverse events, including branch occlusion, were rarely reported. There were no hemorrhagic complications. Articles complied with 20.8 ± 3.9 of 41 ARRIVE items; only a small number used randomization (3/42 articles [7.1%]) or a control group (13/42 articles [30.9%]). Preclinical studies on FD have shown varied results. Occlusion of elastase-induced aneurysms was common after FD. The model is not challenging, but it is standardized across many laboratories. Failures of FD can be reproduced in less standardized but more challenging surgical canine constructions. The quality of reporting could be improved.

  4. Systematic vacuum study of the ITER model cryopump by test particle Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xueli; Haas, Horst; Day, Christian [Institute for Technical Physics, Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)

    2011-07-01

    The primary pumping systems on the ITER torus are based on eight tailor-made cryogenic pumps because no standard commercial vacuum pump can meet the ITER working criteria. This kind of cryopump provides high pumping speed, especially for light gases, by cryosorption on activated charcoal at 4.5 K. In this paper we present systematic Monte Carlo simulation results for a reduced-scale model pump, obtained with ProVac3D, a new Test Particle Monte Carlo simulation program developed at KIT. The simulation model includes the most important mechanical structures: the sixteen cryogenic panels working at 4.5 K, the 80 K radiation shield envelope with baffles, the pump housing, the inlet valve, and the TIMO (Test facility for the ITER Model Pump) test facility. Three typical gas species (deuterium, protium, and helium) are simulated and their pumping characteristics obtained. The results are in good agreement with the experimental data up to a gas throughput of 1000 sccm, which marks the limit of free molecular flow. This shows that ProVac3D is a useful tool in the design of the ITER prototype cryopump. In addition, the capture factors at different critical positions are calculated; they can be used as important input parameters for a follow-up Direct Simulation Monte Carlo (DSMC) simulation at higher gas throughput.
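
    The test-particle scheme described here is easy to illustrate in miniature. The sketch below is not ProVac3D; it is a toy free-molecular Monte Carlo in which the cylindrical duct, the cold end plate, and the sticking probability on the 4.5 K surface are all invented assumptions. It traces molecules from the inlet, re-emits them diffusely (cosine law) at each wall hit, cryosorbs them with a fixed probability, and reports the fraction captured, i.e. a capture factor of the kind used as input to a follow-up DSMC stage.

        import numpy as np

        rng = np.random.default_rng(0)

        R, L = 0.1, 0.4   # duct radius and length in metres (invented geometry)
        STICK = 0.6       # sticking probability on the cold surfaces (assumed)
        N = 20000         # number of test particles

        def cosine_dir(normal):
            # Sample a diffuse (cosine-law) emission direction about `normal`.
            t = np.array([1.0, 0.0, 0.0])
            if abs(normal[0]) > 0.9:
                t = np.array([0.0, 1.0, 0.0])
            u = np.cross(normal, t); u /= np.linalg.norm(u)
            v = np.cross(normal, u)
            r1, phi = rng.random(), 2.0 * np.pi * rng.random()
            s = np.sqrt(r1)
            return np.sqrt(1.0 - r1) * normal + s * np.cos(phi) * u + s * np.sin(phi) * v

        captured = 0
        for _ in range(N):
            p = np.zeros(3)                              # start on the inlet plane
            d = cosine_dir(np.array([0.0, 0.0, 1.0]))
            while True:
                # Flight distances to the cylinder wall, the cold end plate (z = L),
                # and the inlet plane (z = 0, escape).
                a = d[0] ** 2 + d[1] ** 2
                b = 2.0 * (p[0] * d[0] + p[1] * d[1])
                c = p[0] ** 2 + p[1] ** 2 - R ** 2
                t_wall = (-b + np.sqrt(max(b * b - 4 * a * c, 0.0))) / (2 * a) if a > 0 else np.inf
                t_end = (L - p[2]) / d[2] if d[2] > 0 else np.inf
                t_in = -p[2] / d[2] if d[2] < 0 else np.inf
                t = min(t_wall, t_end, t_in)
                p = p + t * d
                if t == t_in:
                    break                                # escaped back through the inlet
                if rng.random() < STICK:
                    captured += 1                        # cryosorbed on a cold surface
                    break
                if t == t_end:                           # diffuse re-emission
                    d = cosine_dir(np.array([0.0, 0.0, -1.0]))
                else:
                    d = cosine_dir(np.array([-p[0], -p[1], 0.0]) / np.hypot(p[0], p[1]))

        print(f"capture factor ~ {captured / N:.3f}")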

  5. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
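
    The core idea (test statistics evaluated against a permutation null) can be sketched without the boosting machinery. The toy below is a simplified analogue of the proposed procedure, not the authors' boosted GAMLSS; the two devices, the sample sizes and the Gaussian readings are invented. It permutation-tests a device effect on location (systematic bias) and on scale (random measurement error) separately.

        import numpy as np

        rng = np.random.default_rng(1)

        # Stand-in data: skin-pigmentation readings from two devices (invented).
        device_a = rng.normal(50.0, 4.0, 200)
        device_b = rng.normal(51.5, 6.0, 200)  # shifted mean (bias), larger spread (random error)

        def perm_test(x, y, stat, n_perm=5000):
            # Two-sided permutation p-value for the two-sample statistic `stat`.
            observed = stat(x, y)
            pooled = np.concatenate([x, y])
            count = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                px, py = pooled[:len(x)], pooled[len(x):]
                if abs(stat(px, py)) >= abs(observed):
                    count += 1
            return (count + 1) / (n_perm + 1)

        loc_stat = lambda x, y: np.mean(x) - np.mean(y)          # systematic bias
        scale_stat = lambda x, y: np.log(np.var(x) / np.var(y))  # random-error difference

        print("location p =", perm_test(device_a, device_b, loc_stat))
        print("scale    p =", perm_test(device_a, device_b, scale_stat))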

  6. Evidence used in model-based economic evaluations for evaluating pharmacogenetic and pharmacogenomic tests: a systematic review protocol.

    Science.gov (United States)

    Peters, Jaime L; Cooper, Chris; Buchanan, James

    2015-11-11

    Decision models can be used to conduct economic evaluations of new pharmacogenetic and pharmacogenomic tests to ensure they offer value for money to healthcare systems. These models require a great deal of evidence, yet research suggests the evidence used is diverse and of uncertain quality. By conducting a systematic review, we aim to investigate the test-related evidence used to inform decision models developed for the economic evaluation of genetic tests. We will search electronic databases including MEDLINE, EMBASE and NHS EEDs to identify model-based economic evaluations of pharmacogenetic and pharmacogenomic tests. The search will not be limited by language or date. Title and abstract screening will be conducted independently by 2 reviewers, with screening of full texts and data extraction conducted by 1 reviewer, and checked by another. Characteristics of the decision problem, the decision model and the test evidence used to inform the model will be extracted. Specifically, we will identify the reported evidence sources for the test-related evidence used, describe the study design and how the evidence was identified. A checklist developed specifically for decision analytic models will be used to critically appraise the models described in these studies. Variations in the test evidence used in the decision models will be explored across the included studies, and we will identify gaps in the evidence in terms of both quantity and quality. The findings of this work will be disseminated via a peer-reviewed journal publication and at national and international conferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    Science.gov (United States)

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
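
    In outline, with notation of my own (the paper's exact parameterization may differ), a cohort study i informs prevalence pi_i as well as accuracy, a case-control study j informs accuracy only, and the composite likelihood simply multiplies these binomial components rather than maximizing the full joint likelihood:

        L_i^{\mathrm{cohort}}
          = \binom{n_i}{m_i}\,\pi_i^{m_i}(1-\pi_i)^{n_i-m_i}
            \cdot \mathrm{Se}_i^{x_i}(1-\mathrm{Se}_i)^{m_i-x_i}
            \cdot \mathrm{Sp}_i^{y_i}(1-\mathrm{Sp}_i)^{(n_i-m_i)-y_i}

        L_j^{\mathrm{cc}}
          = \mathrm{Se}_j^{x_j}(1-\mathrm{Se}_j)^{m_j-x_j}
            \cdot \mathrm{Sp}_j^{y_j}(1-\mathrm{Sp}_j)^{u_j-y_j}

        L_{\mathrm{comp}}(\theta) = \prod_i L_i^{\mathrm{cohort}} \prod_j L_j^{\mathrm{cc}}

    Here m_i of the n_i cohort subjects are diseased, x counts true positives among the diseased, and y counts true negatives among the n_i - m_i non-diseased subjects (cohort) or the u_j controls (case-control). The prevalence terms pi_i, carried only by the cohort studies, are what make population-averaged predictive values estimable, e.g. PPV = \pi\,\mathrm{Se} / (\pi\,\mathrm{Se} + (1-\pi)(1-\mathrm{Sp})).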

  8. Systematic reviews of diagnostic test accuracy

    DEFF Research Database (Denmark)

    Leeflang, Mariska M G; Deeks, Jonathan J; Gatsonis, Constantine

    2008-01-01

    More and more systematic reviews of diagnostic test accuracy studies are being published, but they can be methodologically challenging. In this paper, the authors present some of the recent developments in the methodology for conducting systematic reviews of diagnostic test accuracy studies....... Restrictive electronic search filters are discouraged, as is the use of summary quality scores. Methods for meta-analysis should take into account the paired nature of the estimates and their dependence on threshold. Authors of these reviews are advised to use the hierarchical summary receiver...

  9. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated especially to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  10. EMG Biofeedback Training Versus Systematic Desensitization for Test Anxiety Reduction

    Science.gov (United States)

    Romano, John L.; Cabianca, William A.

    1978-01-01

    Biofeedback training to reduce test anxiety among university students was investigated. Biofeedback training with systematic desensitization was compared to an automated systematic desensitization program not using EMG feedback. Biofeedback training is a useful technique for reducing test anxiety, but not necessarily more effective than systematic…

  11. Hypnosis Versus Systematic Desensitization in the Treatment of Test Anxiety

    Science.gov (United States)

    Melnick, Joseph; Russell, Ronald W.

    1976-01-01

    This study compared the effectiveness of systematic desensitization and the directed experience hypnotic technique in reducing self-reported test anxiety and increasing the academic performance of test-anxious undergraduates (N=36). The results are discussed as evidence for systematic desensitization as the more effective treatment in reducing…

  12. Model Checking and Model-based Testing in the Railway Domain

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    This chapter describes some approaches and emerging trends for verification and model-based testing of railway control systems. We describe state-of-the-art methods and associated tools for verifying interlocking systems and their configuration data, using bounded model checking and k...... with good test strength are explained. Interlocking systems represent just one class of many others, where concrete system instances are created from generic representations, using configuration data for determining the behaviour of the instances. We explain how the systematic transition from generic...... to concrete instances in the development path is complemented by associated transitions in the verification and testing paths....
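
    To give a concrete flavour of the bounded model checking mentioned above (a toy sketch: the two-signal interlocking, its command set and the safety invariant are invented, and real tools work symbolically with SAT/SMT solvers rather than by enumeration), one can exhaustively explore all command sequences up to a bound k and check an invariant on every reachable state:

        from itertools import product

        def step(state, cmd):
            # Toy two-signal interlocking: a signal may only turn green
            # while the other one shows red.
            s1, s2 = state
            if cmd == "g1" and s2 == "red":
                return ("green", s2)
            if cmd == "g2" and s1 == "red":
                return (s1, "green")
            if cmd == "r1":
                return ("red", s2)
            if cmd == "r2":
                return (s1, "red")
            return state

        def safe(state):
            return state != ("green", "green")   # safety invariant: never both green

        def bmc(init, k):
            # Check the invariant on every execution of length <= k.
            for n in range(1, k + 1):
                for cmds in product(["g1", "g2", "r1", "r2"], repeat=n):
                    state = init
                    for c in cmds:
                        state = step(state, c)
                        if not safe(state):
                            return cmds          # counterexample trace
            return None

        print(bmc(("red", "red"), k=6) or "no violation within bound")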

  13. Effectiveness of Structured Psychodrama and Systematic Desensitization in Reducing Test Anxiety.

    Science.gov (United States)

    Kipper, David A.; Giladi, Daniel

    1978-01-01

    Students with examination anxiety took part in study of effectiveness of two kinds of treatment, structured psychodrama and systematic desensitization, in reducing test anxiety. Results showed that subjects in both treatment groups significantly reduced test-anxiety scores. Structured psychodrama is as effective as systematic desensitization in…

  14. Recommendations for reporting of systematic reviews and meta-analyses of diagnostic test accuracy: a systematic review

    NARCIS (Netherlands)

    McGrath, Trevor A.; Alabousi, Mostafa; Skidmore, Becky; Korevaar, Daniël A.; Bossuyt, Patrick M. M.; Moher, David; Thombs, Brett; McInnes, Matthew D. F.

    2017-01-01

    This study aims to perform a systematic review of existing guidance on quality of reporting and methodology for systematic reviews of diagnostic test accuracy (DTA) in order to compile a list of potential items that might be included in a reporting guideline for such reviews: Preferred Reporting Items

  15. Absorbing systematic effects to obtain a better background model in a search for new physics

    International Nuclear Information System (INIS)

    Caron, S; Horner, S; Sundermann, J E; Cowan, G; Gross, E

    2009-01-01

    This paper presents a novel approach to estimate the Standard Model backgrounds based on modifying Monte Carlo predictions within their systematic uncertainties. The improved background model is obtained by altering the original predictions with successively more complex correction functions in signal-free control selections. Statistical tests indicate when sufficient compatibility with data is reached. In this way, systematic effects are absorbed into the new background model. The same correction is then applied on the Monte Carlo prediction in the signal region. Comparing this method to other background estimation techniques shows improvements with respect to statistical and systematic uncertainties. The proposed method can also be applied in other fields beyond high energy physics.
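
    A toy version of the idea (not the analysis code of the paper: the binned control region, the Poisson data with an unmodelled slope, and the 0.05 compatibility threshold are all invented) fits successively more complex polynomial corrections to the data/MC ratio in a signal-free selection and stops once a goodness-of-fit test signals sufficient compatibility:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Toy control region: flat MC prediction, data with an unmodelled slope.
        edges = np.linspace(0.0, 1.0, 21)
        x = 0.5 * (edges[:-1] + edges[1:])
        mc = np.full(20, 500.0)                    # nominal Monte Carlo prediction per bin
        data = rng.poisson(mc * (1.0 + 0.3 * x))   # data with a systematic mis-modelling

        for order in range(0, 4):
            # Fit a polynomial correction to data/MC in the signal-free control selection.
            coeffs = np.polyfit(x, data / mc, order, w=mc / np.sqrt(np.maximum(data, 1)))
            corrected = mc * np.polyval(coeffs, x)
            chi2 = np.sum((data - corrected) ** 2 / corrected)
            ndf = len(x) - (order + 1)
            pval = stats.chi2.sf(chi2, ndf)
            print(f"order {order}: chi2/ndf = {chi2:.1f}/{ndf}, p = {pval:.3f}")
            if pval > 0.05:                        # sufficient compatibility reached
                break
        # `coeffs` would then be applied to the Monte Carlo prediction in the signal region.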

  16. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. 
A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. 
J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  17. The air forces on a systematic series of biplane and triplane cellule models

    Science.gov (United States)

    Munk, Max M

    1927-01-01

    The air forces on a systematic series of biplane and triplane cellule models are the subject of this report. The tests consist of the determination of the lift, drag, and moment of each individual airfoil in each cellule, mostly with the same wing section. The magnitude of the gap and of the stagger is systematically varied; not, however, the decalage, which is zero throughout the tests. Certain check tests with a second wing section make the tests more complete and the conclusions more convincing. The results give evidence that the present army and navy specifications for the relative lifts of biplanes are good. They furnish material for improving such specifications for the relative lifts of triplanes. A larger number of factors can now be prescribed to take care of different cases.

  18. Personal utility in genomic testing: a systematic literature review.

    Science.gov (United States)

    Kohler, Jennefer N; Turbitt, Erin; Biesecker, Barbara B

    2017-06-01

    Researchers and clinicians refer to outcomes of genomic testing that extend beyond clinical utility as 'personal utility'. No systematic delineation of personal utility exists, making it challenging to appreciate its scope. Identifying empirical elements of personal utility reported in the literature offers an inventory that can be subsequently ranked for its relative value by those who have undergone genomic testing. A systematic review was conducted of the peer-reviewed literature reporting non-health-related outcomes of genomic testing from 1 January 2003 to 5 August 2016. Inclusion criteria specified English language, date of publication, and presence of empirical evidence. Identified outcomes were iteratively coded into unique domains. The search returned 551 abstracts from which 31 studies met the inclusion criteria. Study populations and type of genomic testing varied. Coding resulted in 15 distinct elements of personal utility, organized into three domains related to personal outcomes: affective, cognitive, and behavioral; and one domain related to social outcomes. The domains of personal utility may inform pre-test counseling by helping patients anticipate potential value of test results beyond clinical utility. Identified elements may also inform investigations into the prevalence and importance of personal utility to future test users.

  19. Clinical tests to diagnose lumbar spondylolysis and spondylolisthesis: A systematic review.

    Science.gov (United States)

    Alqarni, Abdullah M; Schneiders, Anthony G; Cook, Chad E; Hendrick, Paul A

    2015-08-01

    The aim of this paper was to systematically review the diagnostic ability of clinical tests to detect lumbar spondylolysis and spondylolisthesis. A systematic literature search of six databases, with no language restrictions, from 1950 to 2014 was concluded on February 1, 2014. Clinical tests were required to be compared against imaging reference standards and report, or allow computation of, common diagnostic values. The systematic search yielded a total of 5164 articles with 57 retained for full-text examination, from which 4 met the full inclusion criteria for the review. Study heterogeneity precluded a meta-analysis of included studies. Fifteen different clinical tests were evaluated for their ability to diagnose lumbar spondylolisthesis and one test for its ability to diagnose lumbar spondylolysis. The one-legged hyperextension test demonstrated low to moderate sensitivity (50%-73%) and low specificity (17%-32%) to diagnose lumbar spondylolysis, while the lumbar spinous process palpation test was the optimal diagnostic test for lumbar spondylolisthesis, returning high specificity (87%-100%) and moderate to high sensitivity (60%-88%) values. Lumbar spondylolysis and spondylolisthesis are identifiable causes of LBP in athletes. There appears to be utility to lumbar spinous process palpation for the diagnosis of lumbar spondylolisthesis; however, the one-legged hyperextension test has virtually no value in diagnosing patients with spondylolysis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Cognitive Modification and Systematic Desensitization with Test Anxious High School Students.

    Science.gov (United States)

    Leal, Lois L.; And Others

    1981-01-01

    Compares the relative effectiveness of cognitive modification and systematic desensitization with test anxious high school students (N=30). The systematic desensitization treatment appeared to be significantly more effective on the performance measure while cognitive modification was more effective on one of the self-report measures. (Author/JAC)

  1. Development and pilot test of a process to identify research needs from a systematic review.

    Science.gov (United States)

    Saldanha, Ian J; Wilson, Lisa M; Bennett, Wendy L; Nicholson, Wanda K; Robinson, Karen A

    2013-05-01

    To ensure appropriate allocation of research funds, we need methods for identifying high-priority research needs. We developed and pilot tested a process to identify needs for primary clinical research using a systematic review in gestational diabetes mellitus. We conducted eight steps: abstract research gaps from a systematic review using the Population, Intervention, Comparison, Outcomes, and Settings (PICOS) framework; solicit feedback from the review authors; translate gaps into researchable questions using the PICOS framework; solicit feedback from multidisciplinary stakeholders at our institution; establish consensus among multidisciplinary external stakeholders on the importance of the research questions using the Delphi method; prioritize outcomes; develop conceptual models to highlight research needs; and evaluate the process. We identified 19 research questions. During the Delphi method, external stakeholders established consensus for 16 of these 19 questions (15 with "high" and 1 with "medium" clinical benefit/importance). We pilot tested an eight-step process to identify clinically important research needs. Before wider application of this process, it should be tested using systematic reviews of other diseases. Further evaluation should include assessment of the usefulness of the research needs generated using this process for primary researchers and funders. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. A Model for Quantifying Sources of Variation in Test-day Milk Yield ...

    African Journals Online (AJOL)

    A cow's test-day milk yield is influenced by several systematic environmental effects, which have to be removed when estimating the genetic potential of an animal. The present study quantified the variation due to test date and month of test in test-day lactation yield records using full and reduced models. The data consisted ...

  3. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
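
    The flavour of stochastic parameter estimation can be shown with a minimal joint state-parameter filter (this is not the exact SPEKF algorithm, which exploits exact formulas for the mean and covariance of its special test models; the scalar dynamics, noise levels and priors below are invented). The unknown multiplicative parameter a is appended to the state vector and learned by an extended Kalman filter from noisy observations of u alone:

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy forced, damped scalar dynamics with an unknown multiplicative parameter.
        a_true, f, q, r = 0.9, 1.0, 0.05, 0.5
        n_steps = 400

        u = 0.0
        z = np.array([0.0, 0.5])      # filter state: [u_estimate, a_estimate]
        P = np.diag([1.0, 1.0])
        Q = np.diag([q, 1e-4])        # small random walk on a lets it be learned
        H = np.array([[1.0, 0.0]])    # we observe u only

        for k in range(n_steps):
            u = a_true * u + f + rng.normal(0, np.sqrt(q))   # nature run
            obs = u + rng.normal(0, np.sqrt(r))              # noisy observation
            # Forecast with the augmented (nonlinear) model.
            z_f = np.array([z[1] * z[0] + f, z[1]])
            F = np.array([[z[1], z[0]], [0.0, 1.0]])         # Jacobian of the forecast
            P_f = F @ P @ F.T + Q
            # Analysis: standard Kalman update.
            S = H @ P_f @ H.T + r
            K = P_f @ H.T / S
            z = z_f + (K * (obs - z_f[0])).ravel()
            P = (np.eye(2) - K @ H) @ P_f

        print(f"estimated a = {z[1]:.3f} (true {a_true})")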

  4. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
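
    As an illustration of the fine-grained style being advocated, the sketch below unit-tests a small, self-contained physics kernel (a hypothetical Tetens-type saturation vapor pressure routine; the reference value and tolerances are coarse, and the test names assume a pytest-style runner) against a known value and a physical property, with no full-model run required:

        import numpy as np

        def saturation_vapor_pressure(temp_k):
            # Tetens-type approximation over water, in Pa (illustrative kernel).
            t_c = temp_k - 273.15
            return 610.78 * np.exp(17.27 * t_c / (t_c + 237.3))

        def test_reference_value():
            # Tabulated value: about 2339 Pa at 20 degrees C (coarse tolerance).
            assert abs(saturation_vapor_pressure(293.15) - 2339.0) < 25.0

        def test_monotonicity():
            # Physical property: saturation pressure increases with temperature.
            temps = np.linspace(230.0, 310.0, 50)
            p = saturation_vapor_pressure(temps)
            assert np.all(np.diff(p) > 0)

        if __name__ == "__main__":
            test_reference_value()
            test_monotonicity()
            print("all tests passed")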

  5. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    Science.gov (United States)

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability including time interval, sample size, and statistical methods used in patient-reported outcome measures in older people and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and the sample size of about 5 times the number of items. Particularly, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in
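
    For reference, the most common of the statistics discussed above can be computed directly. The sketch below implements ICC(2,1) in the Shrout and Fleiss convention (two-way random effects, absolute agreement, single measurement); the 40 subjects, two sessions and noise levels are invented stand-ins for a test-retest design of the kind reviewed:

        import numpy as np

        def icc_2_1(scores):
            # scores: (n_subjects, k_sessions). ICC(2,1): two-way random effects,
            # absolute agreement, single measurement (Shrout & Fleiss).
            n, k = scores.shape
            grand = scores.mean()
            row_means = scores.mean(axis=1)
            col_means = scores.mean(axis=0)
            ss_rows = k * np.sum((row_means - grand) ** 2)
            ss_cols = n * np.sum((col_means - grand) ** 2)
            ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
            ms_r = ss_rows / (n - 1)                   # between-subjects mean square
            ms_c = ss_cols / (k - 1)                   # between-sessions mean square
            ms_e = ss_err / ((n - 1) * (k - 1))        # residual mean square
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        rng = np.random.default_rng(4)
        true_score = rng.normal(30, 5, size=(40, 1))           # 40 older adults
        retest = true_score + rng.normal(0, 2, size=(40, 2))   # 2 sessions, ~13 days apart
        print(f"ICC(2,1) = {icc_2_1(retest):.2f}")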

  6. Systematic review, meta-analysis and economic modelling of molecular diagnostic tests for antibiotic resistance in tuberculosis.

    Science.gov (United States)

    Drobniewski, Francis; Cooke, Mary; Jordan, Jake; Casali, Nicola; Mugwagwa, Tendai; Broda, Agnieszka; Townsend, Catherine; Sivaramakrishnan, Anand; Green, Nathan; Jit, Mark; Lipman, Marc; Lord, Joanne; White, Peter J; Abubakar, Ibrahim

    2015-05-01

    Drug-resistant tuberculosis (TB), especially multidrug-resistant (MDR, resistance to rifampicin and isoniazid) disease, is associated with a worse patient outcome. Drug resistance diagnosed using microbiological culture takes days to weeks, as TB bacteria grow slowly. Rapid molecular tests for drug resistance detection (1 day) are commercially available and may promote faster initiation of appropriate treatment. To (1) conduct a systematic review of evidence regarding diagnostic accuracy of molecular genetic tests for drug resistance, (2) conduct a health-economic evaluation of screening and diagnostic strategies, including comparison of alternative models of service provision and assessment of the value of targeting rapid testing at high-risk subgroups, and (3) construct a transmission-dynamic mathematical model that translates the estimates of diagnostic accuracy into estimates of clinical impact. A standardised search strategy identified relevant studies from EMBASE, PubMed, MEDLINE, Bioscience Information Service (BIOSIS), System for Information on Grey Literature in Europe Social Policy & Practice (SIGLE) and Web of Science, published between 1 January 2000 and 15 August 2013. Additional 'grey' sources were included. Quality was assessed using quality assessment of diagnostic accuracy studies version 2 (QUADAS-2). For each diagnostic strategy and population subgroup, a care pathway was constructed to specify which medical treatments and health services individuals would receive from presentation to the point where they either did or did not complete TB treatment successfully. A total cost was estimated from a health service perspective for each care pathway, and the health impact was estimated in terms of the mean discounted quality-adjusted life-years (QALYs) lost as a result of disease and treatment. Costs and QALYs were both discounted at 3.5% per year. An integrated transmission-dynamic and economic model was used to evaluate the cost-effectiveness of
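
    The discounting step mentioned in this record is standard and easy to make concrete (the five-year cost and QALY streams below are invented placeholders, not values from the study):

        # Discounting at 3.5% per year, as in the evaluation described above.
        def discount(values, rate=0.035):
            # Year 0 is undiscounted; year t is divided by (1 + rate)^t.
            return sum(v / (1 + rate) ** t for t, v in enumerate(values))

        annual_qalys = [0.8, 0.8, 0.7, 0.7, 0.6]        # hypothetical per-year QALYs
        annual_costs = [12000, 3000, 3000, 3000, 3000]  # hypothetical per-year costs (GBP)
        print(f"discounted QALYs = {discount(annual_qalys):.2f}")
        print(f"discounted cost  = {discount(annual_costs):.0f}")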

  7. Systematic review, meta-analysis and economic modelling of molecular diagnostic tests for antibiotic resistance in tuberculosis.

    Science.gov (United States)

    Drobniewski, Francis; Cooke, Mary; Jordan, Jake; Casali, Nicola; Mugwagwa, Tendai; Broda, Agnieszka; Townsend, Catherine; Sivaramakrishnan, Anand; Green, Nathan; Jit, Mark; Lipman, Marc; Lord, Joanne; White, Peter J; Abubakar, Ibrahim

    2015-01-01

    BACKGROUND Drug-resistant tuberculosis (TB), especially multidrug-resistant (MDR, resistance to rifampicin and isoniazid) disease, is associated with a worse patient outcome. Drug resistance diagnosed using microbiological culture takes days to weeks, as TB bacteria grow slowly. Rapid molecular tests for drug resistance detection (1 day) are commercially available and may promote faster initiation of appropriate treatment. OBJECTIVES To (1) conduct a systematic review of evidence regarding diagnostic accuracy of molecular genetic tests for drug resistance, (2) conduct a health-economic evaluation of screening and diagnostic strategies, including comparison of alternative models of service provision and assessment of the value of targeting rapid testing at high-risk subgroups, and (3) construct a transmission-dynamic mathematical model that translates the estimates of diagnostic accuracy into estimates of clinical impact. REVIEW METHODS AND DATA SOURCES A standardised search strategy identified relevant studies from EMBASE, PubMed, MEDLINE, Bioscience Information Service (BIOSIS), System for Information on Grey Literature in Europe Social Policy & Practice (SIGLE) and Web of Science, published between 1 January 2000 and 15 August 2013. Additional 'grey' sources were included. Quality was assessed using quality assessment of diagnostic accuracy studies version 2 (QUADAS-2). For each diagnostic strategy and population subgroup, a care pathway was constructed to specify which medical treatments and health services individuals would receive from presentation to the point where they either did or did not complete TB treatment successfully. A total cost was estimated from a health service perspective for each care pathway, and the health impact was estimated in terms of the mean discounted quality-adjusted life-years (QALYs) lost as a result of disease and treatment. Costs and QALYs were both discounted at 3.5% per year. An integrated transmission-dynamic and

  8. Systematic reviews of diagnostic tests in endocrinology: an audit of methods, reporting, and performance.

    Science.gov (United States)

    Spencer-Bonilla, Gabriela; Singh Ospina, Naykky; Rodriguez-Gutierrez, Rene; Brito, Juan P; Iñiguez-Ariza, Nicole; Tamhane, Shrikant; Erwin, Patricia J; Murad, M Hassan; Montori, Victor M

    2017-07-01

    Systematic reviews provide clinicians and policymakers with estimates of diagnostic test accuracy and their usefulness in clinical practice. We identified all available systematic reviews of diagnosis in endocrinology, summarized the diagnostic accuracy of the tests included, and assessed the credibility and clinical usefulness of the methods and reporting. We searched Ovid MEDLINE, EMBASE, and Cochrane CENTRAL from inception to December 2015 for systematic reviews and meta-analyses reporting accuracy measures of diagnostic tests in endocrinology. Experienced reviewers independently screened for eligible studies and collected data. We summarized the results, methods, and reporting of the reviews. We performed subgroup analyses to categorize diagnostic tests as most useful based on their accuracy. We identified 84 systematic reviews; half of the tests included were classified as helpful when positive, one-fourth as helpful when negative. Most authors adequately reported how studies were identified and selected and how their trustworthiness (risk of bias) was judged. Only one in three reviews, however, reported an overall judgment about trustworthiness and one in five reported using adequate meta-analytic methods. One in four reported contacting authors for further information and about half included only patients with diagnostic uncertainty. Up to half of the diagnostic endocrine tests in which the likelihood ratio was calculated or provided are likely to be helpful in practice when positive, as are one-quarter when negative. Most diagnostic systematic reviews in endocrinology lack methodological rigor, protection against bias, and offer limited credibility. Substantial efforts, therefore, seem necessary to improve the quality of diagnostic systematic reviews in endocrinology.
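
    The classification of a test as helpful when positive or negative is usually based on likelihood ratios, which follow directly from sensitivity and specificity. The sketch below uses an invented test with 90% sensitivity and 92% specificity and the common LR+ >= 10 / LR- <= 0.1 rules of thumb:

        def likelihood_ratios(sensitivity, specificity):
            lr_pos = sensitivity / (1.0 - specificity)  # how much a positive result raises the odds
            lr_neg = (1.0 - sensitivity) / specificity  # how much a negative result lowers the odds
            return lr_pos, lr_neg

        # Invented example: a test with 90% sensitivity and 92% specificity.
        lr_pos, lr_neg = likelihood_ratios(0.90, 0.92)
        print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
        # Rule of thumb: LR+ >= 10 (here 11.2) markedly raises post-test probability;
        # LR- <= 0.1 (here 0.11) markedly lowers it.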

  9. Composite Material Testing Data Reduction to Adjust for the Systematic 6-DOF Testing Machine Aberrations

    Science.gov (United States)

    Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson

    2012-01-01

    This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3 and developed at the Naval...

  10. Systematic Unit Testing in a Read-eval-print Loop

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2010-01-01

    The process of collecting the expressions and their results imposes only a little extra work on the programmer. The use of the tool provides for the creation of test repositories, and it is intended to catalyze a much more systematic approach to unit testing in a read-eval-print loop. In the paper we also discuss...... how to use a test repository for other purposes than testing. As a concrete contribution we show how to use test cases as examples in library interface documentation. It is hypothesized (but not yet validated) that the tool will motivate the Lisp programmer to take the transition from casual
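
    The tool described targets Lisp, but the idea transfers. The sketch below is a minimal Python analogue (the names and behaviour are my invention, not the paper's tool): expressions evaluated interactively are echoed as in a read-eval-print loop and collected, with their results, into a repository that can later be replayed as a regression suite:

        repository = []   # collected (expression, expected-result) pairs

        def probe(expr, env=None):
            # Evaluate an expression as in a read-eval-print loop, show the
            # result, and record the pair in the test repository.
            result = eval(expr, env or {})
            print(f"{expr}  =>  {result!r}")
            repository.append((expr, result))
            return result

        def replay(env=None):
            # Re-run every recorded expression; compare against the stored result.
            runs = [(e, want, eval(e, env or {})) for e, want in repository]
            failures = [(e, want, got) for e, want, got in runs if got != want]
            for expr, want, got in failures:
                print(f"FAIL {expr}: expected {want!r}, got {got!r}")
            print(f"{len(runs) - len(failures)}/{len(runs)} passed")

        # Interactive use, then regression replay later:
        probe("sorted([3, 1, 2])")
        probe("sum(range(10))")
        replay()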

  11. HIV Testing and Counseling Among Female Sex Workers : A Systematic Literature Review

    NARCIS (Netherlands)

    Tokar, Anna; Broerse, Jacqueline E.W.; Blanchard, James; Roura, Maria

    2018-01-01

    HIV testing uptake continues to be low among Female Sex Workers (FSWs). We synthesize evidence on barriers and facilitators to HIV testing among FSWs as well as frequencies of testing, willingness to test, and return rates to collect results. We systematically searched the MEDLINE/PubMed, EMBASE,

  12. Measurement properties of the craniocervical flexion test: a systematic review protocol.

    Science.gov (United States)

    Araujo, Francisco Xavier de; Ferreira, Giovanni Esteves; Scholl Schell, Maurício; Castro, Marcelo Peduzzi de; Silva, Marcelo Faria; Ribeiro, Daniel Cury

    2018-02-22

    Neck pain is the leading cause of years lived with disability worldwide and it accounts for high economic and societal burden. Altered activation of the neck muscles is a common musculoskeletal impairment presented by patients with neck pain. The craniocervical flexion test with pressure biofeedback unit has been widely used in clinical practice to assess function of deep neck flexor muscles. This systematic review will assess the measurement properties of the craniocervical flexion test for assessing deep cervical flexor muscles. This is a protocol for a systematic review that will follow the Preferred Reporting Items for Systematic Review and Meta-Analysis statement. MEDLINE (via PubMed), EMBASE, PEDro, Cochrane Central Register of Controlled Trials (CENTRAL), Scopus and Science Direct will be systematically searched from inception. Studies of any design that have investigated and reported at least one measurement property of the craniocervical flexion test for assessing the deep cervical flexor muscles will be included. All measurement properties will be considered as outcomes. Two reviewers will independently rate the risk of bias of individual studies using the updated COnsensus-based Standards for the selection of health Measurement Instruments risk of bias checklist. A structured narrative synthesis will be used for data analysis. Quantitative findings for each measurement property will be summarised. The overall rating for a measurement property will be classified as 'positive', 'indeterminate' or 'negative'. The overall rating will be accompanied with a level of evidence. Ethical approval and patient consent are not required since this is a systematic review based on published studies. Findings will be submitted to a peer-reviewed journal for publication. CRD42017062175. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Hydrocarbon Fuel Thermal Performance Modeling based on Systematic Measurement and Comprehensive Chromatographic Analysis

    Science.gov (United States)

    2016-07-31

    … vital importance for hydrocarbon-fueled propulsion systems: fuel thermal performance as indicated by physical and chemical effects of cooling passage … analysis. The selection and acquisition of a set of chemically diverse fuels is pivotal for a successful outcome since test method validation and…

  14. Systematic model development for partial nitrification of landfill leachate in a SBR

    DEFF Research Database (Denmark)

    Ganigue, R.; Volcke, E.I.P.; Puig, S.

    2010-01-01

    … Following a systematic procedure, the model was successfully constructed, calibrated and validated using data from short-term (one cycle) operation of the PN-SBR. The evaluation of the model revealed a good fit to the main physical-chemical measurements (ammonium, nitrite, nitrate and inorganic carbon), confirmed by statistical tests. Good model fits were also obtained for pH, despite a slight bias in pH prediction, probably caused by the high salinity of the leachate. Future work will be addressed to the model-based evaluation of the interaction of different factors (aeration, stripping, pH, inhibitions, among others) and their impact on the process performance.

  15. Social Media Interventions to Promote HIV Testing, Linkage, Adherence, and Retention: Systematic Review and Meta-Analysis

    Science.gov (United States)

    Gupta, Somya; Wang, Jiangtao; Hightow-Weidman, Lisa B; Muessig, Kathryn E; Tang, Weiming; Pan, Stephen; Pendse, Razia; Tucker, Joseph D

    2017-01-01

    Background Social media is increasingly used to deliver HIV interventions for key populations worldwide. However, little is known about the specific uses and effects of social media on human immunodeficiency virus (HIV) interventions. Objective This systematic review examines the effectiveness of social media interventions to promote HIV testing, linkage, adherence, and retention among key populations. Methods We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and Cochrane guidelines for this review and registered it on the International Prospective Register of Systematic Reviews, PROSPERO. We systematically searched six databases and three conference websites using search terms related to HIV, social media, and key populations. We included studies where (1) the intervention was created or implemented on social media platforms, (2) study population included men who have sex with men (MSM), transgender individuals, people who inject drugs (PWID), and/or sex workers, and (3) outcomes included promoting HIV testing, linkage, adherence, and/or retention. Meta-analyses were conducted by Review Manager, version 5.3. Pooled relative risk (RR) and 95% confidence intervals were calculated by random-effects models. Results Among 981 manuscripts identified, 26 studies met the inclusion criteria. We found 18 studies from high-income countries, 8 in middle-income countries, and 0 in low-income countries. Eight were randomized controlled trials, and 18 were observational studies. All studies (n=26) included MSM; five studies also included transgender individuals. The focus of 21 studies was HIV testing, four on HIV testing and linkage to care, and one on antiretroviral therapy adherence. Social media interventions were used to do the following: build online interactive communities to encourage HIV testing/adherence (10 studies), provide HIV testing services (9 studies), disseminate HIV information (9 studies), and develop

  16. Treatment of Test Anxiety by Cue-Controlled Relaxation and Systematic Desensitization

    Science.gov (United States)

    Russell, Richard K.; And Others

    1976-01-01

    Test-anxious subjects (N=19) participated in an outcome study comparing systematic desensitization, cue-controlled relaxation, and no treatment. The treatment groups demonstrated significant improvement on the self-report measures of test and state anxiety but not on the behavioral indices. The potential advantages of this technique over…

  17. Comparison of Three Methods of Reducing Test Anxiety: Systematic Desensitization, Implosive Therapy, and Study Counseling

    Science.gov (United States)

    Cornish, Richard D.; Dilley, Josiah S.

    1973-01-01

    Systematic desensitization, implosive therapy, and study counseling have all been effective in reducing test anxiety. In addition, systematic desensitization has been compared to study counseling for effectiveness. This study compares all three methods and suggests that systematic desensitization is more effective than the others, and that implosive…

  18. Testing the Structure of Hydrological Models using Genetic Programming

    Science.gov (United States)

    Selle, B.; Muttil, N.

    2009-04-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.
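    The approach can be made concrete with a small sketch. The following is a minimal, self-contained Genetic Programming loop, not the authors' code; the operator set, the candidate drivers, and the synthetic "deep percolation" response are all illustrative assumptions. It evolves expression trees against response data, showing how a simple, interpretable structure can emerge repeatedly across runs.

```python
# A minimal Genetic Programming sketch (not the study's code). Expression
# trees built from a small operator set are evolved to fit response data;
# the surviving structure hints at the dominant process.
import random

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}
VARS = ["ponding_time", "watertable_depth"]  # hypothetical drivers

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:  # terminal: variable or constant
        return random.choice(VARS + [round(random.uniform(-2, 2), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env[tree] if isinstance(tree, str) else tree

def fitness(tree, data):  # mean squared error against observed response
    return sum((evaluate(tree, env) - y) ** 2 for env, y in data) / len(data)

def mutate(tree, depth=2):  # replace a random subtree
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(depth)
    op, left, right = tree
    return (op, mutate(left, depth), mutate(right, depth))

# Synthetic data with one dominant process: response = 0.5 * ponding_time
data = [({"ponding_time": t, "watertable_depth": d}, 0.5 * t)
        for t in range(1, 6) for d in range(1, 4)]

population = [random_tree() for _ in range(200)]
for _ in range(30):  # evolve: keep the fittest, refill with mutants
    population.sort(key=lambda tr: fitness(tr, data))
    population = population[:50] + [mutate(random.choice(population[:50]))
                                    for _ in range(150)]

best = min(population, key=lambda tr: fitness(tr, data))
print(best, fitness(best, data))  # runs tend toward a 0.5 * ponding_time form
```

    A real application would add crossover and a parsimony penalty on tree size, and would use the measured lysimeter data in place of the synthetic response.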

  19. Systematization of Angra-1 operation attendance - Maintenance and periodic testings

    International Nuclear Information System (INIS)

    Furieri, E.B.; Carvalho Bruno, N. de; Salaverry, N.A.

    1988-01-01

    An analysis of maintenance, its types, and its functions for the safety of nuclear power plants is presented. Programs and present trends in reactor maintenance, as well as the maintenance program and periodic tests of Angra I, are analysed. The need for safety analysis and for a systematization of maintenance attendance is discussed, along with periodic testing and the follow-up of international experience. (M.C.K.) [pt]

  20. Social Media Interventions to Promote HIV Testing, Linkage, Adherence, and Retention: Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Cao, Bolin; Gupta, Somya; Wang, Jiangtao; Hightow-Weidman, Lisa B; Muessig, Kathryn E; Tang, Weiming; Pan, Stephen; Pendse, Razia; Tucker, Joseph D

    2017-11-24

    Social media is increasingly used to deliver HIV interventions for key populations worldwide. However, little is known about the specific uses and effects of social media on human immunodeficiency virus (HIV) interventions. This systematic review examines the effectiveness of social media interventions to promote HIV testing, linkage, adherence, and retention among key populations. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and Cochrane guidelines for this review and registered it on the International Prospective Register of Systematic Reviews, PROSPERO. We systematically searched six databases and three conference websites using search terms related to HIV, social media, and key populations. We included studies where (1) the intervention was created or implemented on social media platforms, (2) study population included men who have sex with men (MSM), transgender individuals, people who inject drugs (PWID), and/or sex workers, and (3) outcomes included promoting HIV testing, linkage, adherence, and/or retention. Meta-analyses were conducted by Review Manager, version 5.3. Pooled relative risk (RR) and 95% confidence intervals were calculated by random-effects models. Among 981 manuscripts identified, 26 studies met the inclusion criteria. We found 18 studies from high-income countries, 8 in middle-income countries, and 0 in low-income countries. Eight were randomized controlled trials, and 18 were observational studies. All studies (n=26) included MSM; five studies also included transgender individuals. The focus of 21 studies was HIV testing, four on HIV testing and linkage to care, and one on antiretroviral therapy adherence. Social media interventions were used to do the following: build online interactive communities to encourage HIV testing/adherence (10 studies), provide HIV testing services (9 studies), disseminate HIV information (9 studies), and develop intervention materials (1 study). Of the

  1. Systematic evaluation of non-animal test methods for skin sensitisation safety assessment.

    Science.gov (United States)

    Reisinger, Kerstin; Hoffmann, Sebastian; Alépée, Nathalie; Ashikaga, Takao; Barroso, Joao; Elcombe, Cliff; Gellatly, Nicola; Galbiati, Valentina; Gibbs, Susan; Groux, Hervé; Hibatallah, Jalila; Keller, Donald; Kern, Petra; Klaric, Martina; Kolle, Susanne; Kuehnl, Jochen; Lambrechts, Nathalie; Lindstedt, Malin; Millet, Marion; Martinozzi-Teissier, Silvia; Natsch, Andreas; Petersohn, Dirk; Pike, Ian; Sakaguchi, Hitoshi; Schepky, Andreas; Tailhardat, Magalie; Templier, Marie; van Vliet, Erwin; Maxwell, Gavin

    2015-02-01

    The need for non-animal data to assess skin sensitisation properties of substances, especially cosmetics ingredients, has spawned the development of many in vitro methods. As it is widely believed that no single method can provide a solution, the Cosmetics Europe Skin Tolerance Task Force has defined a three-phase framework for the development of a non-animal testing strategy for skin sensitisation potency prediction. The results of the first phase – systematic evaluation of 16 test methods – are presented here. This evaluation involved generation of data on a common set of ten substances in all methods and systematic collation of information, including the level of standardisation, existing test data, potential for throughput, transferability and accessibility, in cooperation with the test method developers. A workshop was held with the test method developers to review the outcome of this evaluation and to discuss the results. The evaluation informed the prioritisation of test methods for the next phase of the non-animal testing strategy development framework. Ultimately, the testing strategy – combined with bioavailability and skin metabolism data and exposure consideration – is envisaged to allow establishment of a data integration approach for skin sensitisation safety assessment of cosmetic ingredients.

  2. In-vitro orthodontic bond strength testing : A systematic review and meta-analysis

    NARCIS (Netherlands)

    Finnema, K.J.; Ozcan, M.; Post, W.J.; Ren, Y.J.; Dijkstra, P.U.

    INTRODUCTION: The aims of this study were to systematically review the available literature regarding in-vitro orthodontic shear bond strength testing and to analyze the influence of test conditions on bond strength. METHODS: Our data sources were Embase and Medline. Relevant studies were selected

  3. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and in some cases, for example those caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with a small base length. The small base length magnifies small systematic errors in object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCPs), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS…
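    The leveling step described here is, at its core, a plane fit: estimate tilt coefficients from the height discrepancies at ground control points (or against a reference DSM such as SRTM) and subtract the fitted plane from the DHM. A minimal sketch, assuming NumPy and hypothetical input arrays (gcp_xy, gcp_dz, dhm):

```python
# A minimal sketch of tilt (leveling) correction for a digital height model:
# fit a plane to the height discrepancies at ground control points and
# subtract it from the DHM. All arrays and values below are hypothetical.
import numpy as np

def fit_tilt(gcp_xy, gcp_dz):
    """Least-squares plane dz = a + b*x + c*y through GCP discrepancies."""
    x, y = gcp_xy[:, 0], gcp_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, gcp_dz, rcond=None)
    return coeffs  # (a, b, c)

def remove_tilt(dhm, x_grid, y_grid, coeffs):
    a, b, c = coeffs
    return dhm - (a + b * x_grid + c * y_grid)

# Example: a synthetic 100 x 100 model with a 0.1 m per-pixel tilt in x
x_grid, y_grid = np.meshgrid(np.arange(100.0), np.arange(100.0))
dhm = np.random.rand(100, 100) + 0.1 * x_grid
gcp_xy = np.array([[10.0, 10.0], [90.0, 20.0], [50.0, 80.0], [20.0, 60.0]])
gcp_dz = 0.1 * gcp_xy[:, 0]          # discrepancy DHM minus reference
leveled = remove_tilt(dhm, x_grid, y_grid, fit_tilt(gcp_xy, gcp_dz))
```

    As the abstract notes, this only removes the tilt; low-frequency deformations need a reference surface with dense coverage rather than a handful of GCPs.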

  4. Systematic Digital Forensic Investigation Model

    OpenAIRE

    2011-01-01

    Law practitioners are in an uninterrupted battle with criminals in the application of digital/computer technologies, and require the development of a proper methodology to systematically search digital devices for significant evidence. Computer fraud and digital crimes are growing day by day and unfortunately less than two percent of the reported cases result in conviction. This paper explores the development of the digital forensics process model, compares digital forensic methodologies, and fina...

  5. Systematic Testing should not be a Topic in the Computer Science Curriculum!

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

    … of high quality. We point out that we, as teachers, are partly to blame that many software products are of low quality. We describe a set of teaching guidelines that conveys our main pedagogical point to the students: that systematic testing is important, rewarding, and fun, and that testing should…

  6. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    Science.gov (United States)

    McInnes, Matthew D F; Moher, David; Thombs, Brett D; McGrath, Trevor A; Bossuyt, Patrick M; Clifford, Tammy; Cohen, Jérémie F; Deeks, Jonathan J; Gatsonis, Constantine; Hooft, Lotty; Hunt, Harriet A; Hyde, Christopher J; Korevaar, Daniël A; Leeflang, Mariska M G; Macaskill, Petra; Reitsma, Johannes B; Rodin, Rachel; Rutjes, Anne W S; Salameh, Jean-Paul; Stevens, Adrienne; Takwoingi, Yemisi; Tonelli, Marcello; Weeks, Laura; Whiting, Penny; Willis, Brian H

    2018-01-23

    Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. The 27-item

  7. Disentangling dark energy and cosmic tests of gravity from weak lensing systematics

    Science.gov (United States)

    Laszlo, Istvan; Bean, Rachel; Kirk, Donnacha; Bridle, Sarah

    2012-06-01

    We consider the impact of key astrophysical and measurement systematics on constraints on dark energy and modifications to gravity on cosmic scales. We focus on upcoming photometric ‘stage III’ and ‘stage IV’ large-scale structure surveys such as the Dark Energy Survey (DES), the Subaru Measurement of Images and Redshifts survey, the Euclid survey, the Large Synoptic Survey Telescope (LSST) and Wide Field Infra-Red Space Telescope (WFIRST). We illustrate the different redshift dependencies of gravity modifications compared to intrinsic alignments, the main astrophysical systematic. The way in which systematic uncertainties, such as galaxy bias and intrinsic alignments, are modelled can change dark energy equation-of-state parameter and modified gravity figures of merit by a factor of 4. The inclusion of cross-correlations of cosmic shear and galaxy position measurements helps reduce the loss of constraining power from the lensing shear surveys. When forecasts for Planck cosmic microwave background and stage IV surveys are combined, constraints on the dark energy equation-of-state parameter and modified gravity model are recovered, relative to those from shear data with no systematic uncertainties, provided fewer than 36 free parameters in total are used to describe the galaxy bias and intrinsic alignment models as a function of scale and redshift. While some uncertainty in the intrinsic alignment (IA) model can be tolerated, it is going to be important to be able to parametrize IAs well in order to realize the full potential of upcoming surveys. To facilitate future investigations, we also provide a fitting function for the matter power spectrum arising from the phenomenological modified gravity model we consider.

  8. User testing of an adaptation of fishbone diagrams to depict results of systematic reviews.

    Science.gov (United States)

    Gartlehner, Gerald; Schultes, Marie-Therese; Titscher, Viktoria; Morgan, Laura C; Bobashev, Georgiy V; Williams, Peyton; West, Suzanne L

    2017-12-12

    Summary of findings tables in systematic reviews are highly informative but require epidemiological training to be interpreted correctly. The usage of fishbone diagrams as graphical displays could offer researchers an effective approach to simplify content for readers with limited epidemiological training. In this paper we demonstrate how fishbone diagrams can be applied to systematic reviews and present the results of an initial user testing. Findings from two systematic reviews were graphically depicted in the form of the fishbone diagram. To test the utility of fishbone diagrams compared with summary of findings tables, we developed and pilot-tested an online survey using Qualtrics. Respondents were randomized to the fishbone diagram or a summary of findings table presenting the same body of evidence. They answered questions in both open-ended and closed-answer formats; all responses were anonymous. Measures of interest focused on first and second impressions, the ability to find and interpret critical information, as well as user experience with both displays. We asked respondents about the perceived utility of fishbone diagrams compared to summary of findings tables. We analyzed quantitative data by conducting t-tests and comparing descriptive statistics. Based on real world systematic reviews, we provide two different fishbone diagrams to show how they might be used to display complex information in a clear and succinct manner. User testing on 77 students with basic epidemiological training revealed that participants preferred summary of findings tables over fishbone diagrams. Significantly more participants liked the summary of findings table than the fishbone diagram (71.8% vs. 44.8%; p < .01). User testing, however, did not support the utility of such graphical displays.

  9. Systematic review of model-based cervical screening evaluations.

    Science.gov (United States)

    Mendes, Diana; Bains, Iren; Vanni, Tazio; Jit, Mark

    2015-05-01

    Optimising population-based cervical screening policies is becoming more complex due to the expanding range of screening technologies available and the interplay with vaccine-induced changes in epidemiology. Mathematical models are increasingly being applied to assess the impact of cervical cancer screening strategies. We systematically reviewed MEDLINE®, Embase, Web of Science®, EconLit, Health Economic Evaluation Database, and The Cochrane Library databases in order to identify the mathematical models of human papillomavirus (HPV) infection and cervical cancer progression used to assess the effectiveness and/or cost-effectiveness of cervical cancer screening strategies. Key model features and conclusions relevant to decision-making were extracted. We found 153 articles meeting our eligibility criteria published up to May 2013. Most studies (72/153) evaluated the introduction of a new screening technology, with particular focus on the comparison of HPV DNA testing and cytology (n = 58). Twenty-eight of forty of these analyses supported HPV DNA primary screening implementation. A few studies analysed more recent technologies - rapid HPV DNA testing (n = 3), HPV DNA self-sampling (n = 4), and genotyping (n = 1) - and were also supportive of their introduction. However, no study was found on emerging molecular markers and their potential utility in future screening programmes. Most evaluations (113/153) were based on models simulating aggregate groups of women at risk of cervical cancer over time without accounting for HPV infection transmission. Calibration to country-specific outcome data is becoming more common, but has not yet become standard practice. Models of cervical screening are increasingly used, and allow extrapolation of trial data to project the population-level health and economic impact of different screening policies. However, post-vaccination analyses have rarely incorporated transmission dynamics. Model calibration to country…

  10. Group Systematic Desensitization Versus Covert Positive Reinforcement in the Reduction of Test Anxiety

    Science.gov (United States)

    Kostka, Marion P.; Galassi, John P.

    1974-01-01

    The study compared modified versions of systematic desensitization and covert positive reinforcement to a no-treatment control condition in the reduction of test anxiety. On an anagrams performance test, the covert reinforcement and control groups were superior to the desensitization group. (Author)

  11. Testing the structure of a hydrological model using Genetic Programming

    Science.gov (United States)

    Selle, Benny; Muttil, Nitin

    2011-01-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.

  12. Systematic testing of flood adaptation options in urban areas through simulations

    Science.gov (United States)

    Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten

    2016-04-01

    While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures such as modifications of the sewer network over local retention of rainwater and the modification of surface flow paths to policy measures such as restrictions on urban development in flood prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase of rain intensities and moderate urban growth, but their performance strongly varies, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels for the drivers of flood risk where the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures. Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into…
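    The notion of an adaptation tipping point can be illustrated with a toy scan over one driver of flood risk. Everything below is invented for illustration; in the study itself the role of the risk function is played by coupled DAnCE4Water and MIKE FLOOD simulations.

```python
# A toy "adaptation tipping point" search: scan a driver of flood risk
# (here a rainfall intensification factor) and record the level at which
# each mitigation option stops keeping risk below the acceptable threshold.
# The risk function, measures, and numbers are all invented.

def flood_risk(rain_factor, retention=0.0, sewer_upgrade=0.0):
    # stand-in for an expensive 1D-2D hydraulic simulation
    return rain_factor * (1.0 - retention) * (1.0 - sewer_upgrade)

MEASURES = {
    "do nothing":      dict(),
    "local retention": dict(retention=0.35),
    "sewer upgrade":   dict(sewer_upgrade=0.25),
    "combined":        dict(retention=0.35, sewer_upgrade=0.25),
}
ACCEPTABLE_RISK = 1.2

for name, kwargs in MEASURES.items():
    tipping_point = None
    for step in range(100, 301):          # rain factor 1.00 .. 3.00
        factor = step / 100.0
        if flood_risk(factor, **kwargs) > ACCEPTABLE_RISK:
            tipping_point = factor
            break
    print(f"{name:16s} tipping point at rain factor {tipping_point}")
```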

  13. Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.

    Science.gov (United States)

    Shao, Lijing

    2014-03-21

    The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of minimal standard model extension with the state-of-the-art pulsar observations. No deviation from GR was detected. The limits of LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved by significant factors of tens to hundreds compared with existing ones. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  14. Model- and calibration-independent test of cosmic acceleration

    International Nuclear Information System (INIS)

    Seikel, Marina; Schwarz, Dominik J.

    2009-01-01

    We present a calibration-independent test of the accelerated expansion of the universe using supernova type Ia data. The test is also model-independent in the sense that no assumptions about the content of the universe or about the parameterization of the deceleration parameter are made and that it does not assume any dynamical equations of motion. Yet, the test assumes the universe and the distribution of supernovae to be statistically homogeneous and isotropic. A significant reduction of systematic effects, as compared to our previous, calibration-dependent test, is achieved. Accelerated expansion is detected at a significant level (4.3σ in the 2007 Gold sample, 7.2σ in the 2008 Union sample) if the universe is spatially flat. This result depends, however, crucially on supernovae with a redshift smaller than 0.1, for which the assumption of statistical isotropy and homogeneity is less well established.
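    The logic of such a kinematic test can be sketched with the standard bound for non-accelerating expansion. The derivation below is a plausible reconstruction of the kind of inequality these tests rely on, not necessarily the authors' exact statistic.

```latex
% If the expansion never accelerated, the deceleration parameter obeys
% q(z) >= 0, which in a spatially flat FRW universe bounds the Hubble rate
% and hence the luminosity distance:
\begin{align}
  q(z) \ge 0 \;\Longleftrightarrow\;
  \frac{\mathrm{d}\ln H}{\mathrm{d}\ln(1+z)} \ge 1
  \;\Longrightarrow\; H(z) \ge H_0\,(1+z),\\
  d_L(z) = (1+z)\int_0^z \frac{c\,\mathrm{d}z'}{H(z')}
  \;\le\; \frac{c}{H_0}\,(1+z)\ln(1+z).
\end{align}
% Supernovae observed to be fainter (more distant) than this bound allows
% therefore imply a phase of accelerated expansion, without assuming any
% dynamical equations of motion.
```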

  15. User testing of an adaptation of fishbone diagrams to depict results of systematic reviews

    Directory of Open Access Journals (Sweden)

    Gerald Gartlehner

    2017-12-01

    Full Text Available Abstract Background Summary of findings tables in systematic reviews are highly informative but require epidemiological training to be interpreted correctly. The usage of fishbone diagrams as graphical displays could offer researchers an effective approach to simplify content for readers with limited epidemiological training. In this paper we demonstrate how fishbone diagrams can be applied to systematic reviews and present the results of an initial user testing. Methods Findings from two systematic reviews were graphically depicted in the form of the fishbone diagram. To test the utility of fishbone diagrams compared with summary of findings tables, we developed and pilot-tested an online survey using Qualtrics. Respondents were randomized to the fishbone diagram or a summary of findings table presenting the same body of evidence. They answered questions in both open-ended and closed-answer formats; all responses were anonymous. Measures of interest focused on first and second impressions, the ability to find and interpret critical information, as well as user experience with both displays. We asked respondents about the perceived utility of fishbone diagrams compared to summary of findings tables. We analyzed quantitative data by conducting t-tests and comparing descriptive statistics. Results Based on real world systematic reviews, we provide two different fishbone diagrams to show how they might be used to display complex information in a clear and succinct manner. User testing on 77 students with basic epidemiological training revealed that participants preferred summary of findings tables over fishbone diagrams. Significantly more participants liked the summary of findings table than the fishbone diagram (71.8% vs. 44.8%; p < .01); significantly more participants found the fishbone diagram confusing (63.2% vs. 35.9%, p < .05) or indicated that it was difficult to find information (65.8% vs. 45%; p < .01). However, more than half…

  16. Diagnostic accuracy of scapular physical examination tests for shoulder disorders: a systematic review.

    Science.gov (United States)

    Wright, Alexis A; Wassinger, Craig A; Frank, Mason; Michener, Lori A; Hegedus, Eric J

    2013-09-01

    To systematically review and critique the evidence regarding the diagnostic accuracy of physical examination tests for the scapula in patients with shoulder disorders. A systematic, computerised literature search of PubMED, EMBASE, CINAHL and the Cochrane Library databases (from database inception through January 2012) using keywords related to diagnostic accuracy of physical examination tests of the scapula. The Quality Assessment of Diagnostic Accuracy Studies tool was used to critique the quality of each paper. Eight articles met the inclusion criteria; three were considered to be of high quality. Of the three high-quality studies, two were in reference to a 'diagnosis' of shoulder pain. Only one high-quality article referenced specific shoulder pathology of acromioclavicular dislocation with reported sensitivity of 71% and 41% for the scapular dyskinesis and SICK scapula test, respectively. Overall, no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.

  17. Methods Used in Economic Evaluations of Chronic Kidney Disease Testing — A Systematic Review

    Science.gov (United States)

    Sutton, Andrew J.; Breheny, Katie; Deeks, Jon; Khunti, Kamlesh; Sharpe, Claire; Ottridge, Ryan S.; Stevens, Paul E.; Cockwell, Paul; Kalra, Philp A.; Lamb, Edmund J.

    2015-01-01

    Background The prevalence of chronic kidney disease (CKD) is high in general populations around the world. Targeted testing and screening for CKD are often conducted to help identify individuals that may benefit from treatment to ameliorate or prevent their disease progression. Aims This systematic review examines the methods used in economic evaluations of testing and screening in CKD, with a particular focus on whether test accuracy has been considered, and how analysis has incorporated issues that may be important to the patient, such as the impact of testing on quality of life and the costs they incur. Methods Articles that described model-based economic evaluations of patient testing interventions focused on CKD were identified through the searching of electronic databases and the hand searching of the bibliographies of the included studies. Results The initial electronic searches identified 2,671 papers of which 21 were included in the final review. Eighteen studies focused on proteinuria, three evaluated glomerular filtration rate testing and one included both tests. The full impact of inaccurate test results was frequently not considered in economic evaluations in this setting as a societal perspective was rarely adopted. The impact of false positive tests on patients in terms of the costs incurred in re-attending for repeat testing, and the anxiety associated with a positive test was almost always overlooked. In one study where the impact of a false positive test on patient quality of life was examined in sensitivity analysis, it had a significant impact on the conclusions drawn from the model. Conclusion Future economic evaluations of kidney function testing should examine testing and monitoring pathways from the perspective of patients, to ensure that issues that are important to patients, such as the possibility of inaccurate test results, are properly considered in the analysis. PMID:26465773
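    The review's central point, that omitting patient-side consequences of false positives can change a model's conclusions, can be illustrated with a toy expected-cost calculation. All numbers below are invented for illustration.

```python
# A minimal sketch: once the patient perspective is adopted, false-positive
# results carry costs (repeat-test visits, anxiety-related quality-of-life
# loss) that change the expected value of screening. All inputs are invented.

prevalence   = 0.10    # CKD prevalence in the tested population (assumed)
sensitivity  = 0.90    # of the proteinuria test (assumed)
specificity  = 0.80
cost_test    = 15.0    # per test, health-system perspective
cost_retest  = 60.0    # patient re-attendance after a positive result
qaly_loss_fp = 0.01    # transient anxiety from a false positive (assumed)
wtp_per_qaly = 20000.0

true_pos  = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

# Expected cost per person screened, with and without patient-side effects
cost_narrow   = cost_test + (true_pos + false_pos) * cost_retest
cost_societal = cost_narrow + false_pos * qaly_loss_fp * wtp_per_qaly

print(f"false-positive rate per screen: {false_pos:.3f}")
print(f"expected cost, test-only view:  {cost_narrow:.2f}")
print(f"expected cost incl. FP burden:  {cost_societal:.2f}")
```

    With these assumed inputs the false-positive burden more than doubles the expected cost per person screened, which is exactly the kind of sensitivity the review found could overturn a model's conclusions.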

  18. A SYSTEMATIC STUDY OF SOFTWARE QUALITY MODELS

    OpenAIRE

    Dr.Vilas. M. Thakare; Ashwin B. Tomar

    2011-01-01

    This paper aims to provide a basis for software quality model research, through a systematic study ofpapers. It identifies nearly seventy software quality research papers from journals and classifies paper asper research topic, estimation approach, study context and data set. The paper results combined withother knowledge provides support for recommendations in future software quality model research, toincrease the area of search for relevant studies, carefully select the papers within a set ...

  19. Causal judgment from contingency information: a systematic test of the pCI rule.

    Science.gov (United States)

    White, Peter A

    2004-04-01

    Contingency information is information about the occurrence or nonoccurrence of an effect when a possible cause is present or absent. Under the evidential evaluation model, instances of contingency information are transformed into evidence and causal judgment is based on the proportion of relevant instances evaluated as confirmatory for the candidate cause. In this article, two experiments are reported that were designed to test systematic manipulations of the proportion of confirming instances in relation to other variables: the proportion of instances on which the candidate cause is present, the proportion of instances in which the effect occurs when the cause is present, and the objective contingency. Results showed that both unweighted and weighted versions of the proportion-of-confirmatory-instances rule successfully predicted the main features of the results, with the weighted version proving more successful. Other models, including the power PC theory, failed to predict the results.
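    The unweighted rule, and a weighted variant, are easy to state in code. In the standard 2x2 notation, cell a (cause present, effect occurs) and cell d (cause absent, no effect) are confirmatory, while b and c are disconfirmatory. The sketch below is a plain restatement of that rule; the particular weights and counts chosen are assumptions for illustration, not values estimated in the article.

```python
# The proportion-of-confirmatory-instances (pCI) rule over a 2x2 table:
#   a = cause present, effect occurs   (confirmatory)
#   b = cause present, no effect       (disconfirmatory)
#   c = cause absent,  effect occurs   (disconfirmatory)
#   d = cause absent,  no effect       (confirmatory)

def pci(a, b, c, d, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted proportion of instances evaluated as confirmatory."""
    wa, wb, wc, wd = weights
    confirmatory = wa * a + wd * d
    total = wa * a + wb * b + wc * c + wd * d
    return confirmatory / total

# Unweighted rule: judgment rises with a and d, falls with b and c.
print(pci(a=12, b=4, c=8, d=6))                                # 0.60
# A weighted variant giving cause-absent instances less impact (assumed).
print(pci(a=12, b=4, c=8, d=6, weights=(1.0, 1.0, 0.5, 0.5)))  # ~0.65
```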

  20. Diagnostic accuracy of physical examination tests of the ankle/foot complex: a systematic review.

    Science.gov (United States)

    Schwieterman, Braun; Haas, Deniele; Columber, Kirby; Knupp, Darren; Cook, Chad

    2013-08-01

    Orthopedic special tests of the ankle/foot complex are routinely used during the physical examination process in order to help diagnose ankle/lower leg pathologies. The purpose of this systematic review was to investigate the diagnostic accuracy of ankle/lower leg special tests. A search of the current literature was conducted using PubMed, CINAHL, SPORTDiscus, ProQuest Nursing and Allied Health Sources, Scopus, and Cochrane Library. Studies were eligible if they included the following: 1) a diagnostic clinical test of musculoskeletal pathology in the ankle/foot complex, 2) description of the clinical test or tests, 3) a report of the diagnostic accuracy of the clinical test (e.g. sensitivity and specificity), and 4) an acceptable reference standard for comparison. The quality of included studies was determined by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Nine diagnostic accuracy studies met the inclusion criteria for this systematic review; analyzing a total of 16 special tests of the ankle/foot complex. After assessment using the QUADAS-2, only one study had low risk of bias and low concerns regarding applicability. Most ankle/lower leg orthopedic special tests are confirmatory in nature and are best utilized at the end of the physical examination. Most of the studies included in this systematic review demonstrate notable biases, which suggest that results and recommendations in this review should be taken as a guide rather than an outright standard. There is need for future research with more stringent study design criteria so that more accurate diagnostic power of ankle/lower leg special tests can be determined. 3a.

  1. Modelling the transuranic contamination in soils by using a generic model and systematic sampling

    International Nuclear Information System (INIS)

    Breitenecker, Katharina; Brandl, Alexander; Bock, Helmut; Villa, Mario

    2008-01-01

    Full text: In the course of the decommissioning of the former ASTRA Research Reactor, the Seibersdorf site is to be surveyed for possible contamination by radioactive materials, including transuranium elements. To limit costs due to systematic sampling and time-consuming laboratory analyses, a mathematical model was established that describes the migration of transuranium elements and includes the local topography of the area where deposition has occurred. The project basis is to find a mathematical function that determines the contamination by modelling the pathways of transuranium elements. The model approach chosen is cellular automata (CA). For this purpose, a hypothetical activity of transuranium elements is released on the ground in the centre of a simulated area. Under the assumption that migration of these elements only takes place by diffusion, transport and sorption, their equations are modelled in the CA model by a simple discretization of the existing problem. To include local topography, most of the simulated area consists of a green corridor, where migration proceeds quite slowly; streets, where the migrational behaviour is different, and migration velocities in ditches are also modelled. The migration of three different plutonium isotopes (238Pu, 239+240Pu, 241Pu), the migration of one americium isotope (241Am), the radioactive decay of 241Pu via 241Am to 237Np, and the radioactive decay of 238Pu to 234U were considered in this model. Due to the special modelling approach of CA, the physical necessity of conservation of the amount of substance is always fulfilled. The entire system was implemented in MATLAB. Systematic sampling of a featured test site, followed by detailed laboratory analyses, was done to compare the underlying CA model to real data. On this account a nuclide vector with 241Am as the reference nuclide was established. As long as the initial parameters (e.g. meteorological data) are well known, the model describes the…
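    A minimal sketch of the cellular-automata idea in this record is given below: a grid of cells holds the activity of a released nuclide, and each step applies a simple discretisation of diffusion plus first-order decay into a daughter product. Real migration parameters, sorption, and the topography classes (green corridor, streets, ditches) are omitted; all values are illustrative, and the sketch is in Python rather than the MATLAB used in the study.

```python
# Toy CA: diffusion of a released nuclide plus first-order decay into a
# daughter (e.g. Pu-241 decaying towards Am-241). All parameters invented.
import numpy as np

N_STEPS, SIZE = 500, 101
D = 0.20        # dimensionless diffusion number (stability needs D < 0.25)
LAMBDA = 1e-4   # decay constant per time step (illustrative)

pu = np.zeros((SIZE, SIZE))
am = np.zeros((SIZE, SIZE))
pu[SIZE // 2, SIZE // 2] = 1.0      # hypothetical release at the centre

for _ in range(N_STEPS):
    # 4-neighbour diffusion: each cell exchanges a fraction D per neighbour
    neighbours = (np.roll(pu, 1, 0) + np.roll(pu, -1, 0) +
                  np.roll(pu, 1, 1) + np.roll(pu, -1, 1))
    pu = pu + D * (neighbours - 4 * pu)
    decayed = LAMBDA * pu           # amount leaving the parent nuclide
    pu -= decayed
    am += decayed                   # ingrowth of the daughter

# as the abstract notes, the CA update conserves the amount of substance
print("amount conserved:", np.isclose((pu + am).sum(), 1.0))
```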

  2. Systematic review of studies on cost-effectiveness of cystic fibrosis carrier testing

    Directory of Open Access Journals (Sweden)

    Ernesto Andrade-Cerquera

    2016-10-01

    Full Text Available Introduction: Cystic fibrosis is considered the most common autosomal disease with multisystem complications in non-Hispanic white population. Objective: To review the available evidence on cost-effectiveness of the cystic fibrosis carrier testing compared to no intervention. Materials and methods: The databases of MEDLINE, Embase, NHS, EBM Reviews - Cochrane Database of Systematic Reviews, LILACS, Health Technology Assessment, Genetests.org, Genetsickkids.org and Web of Science were used to conduct a systematic review of the cost-effectiveness of performing the genetic test in cystic fibrosis patients. Cost-effectiveness studies were included without language or date of publication restrictions. Results: Only 13 studies were relevant for full review. Prenatal, preconception and mixed screening strategies were found. Health perspective was the most used; the discount rate applied was heterogeneous between 3.5% and 5%; the main analysis unit was the cost per detected carrier couple, followed by cost per averted birth with cystic fibrosis. It was evident that the most cost-effective strategy was preconception screening associated with prenatal test. Conclusions: A marked heterogeneity in the methodology was found, which led to incomparable results and to conclude that there are different approaches to this genetic test.

  3. Systematic identification of crystallization kinetics within a generic modelling framework

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli Bin; Meisler, Kresten Troelstrup; Gernaey, Krist

    2012-01-01

    A systematic approach to the development of constitutive models within a generic modelling framework has been developed for use in design, analysis and simulation of crystallization operations. The framework contains a tool for model identification connected with a generic crystallizer modelling tool-box, a tool…

  4. Accuracy of monofilament testing to diagnose peripheral neuropathy: a systematic review

    NARCIS (Netherlands)

    Dros, Jacquelien; Wewerinke, Astrid; Bindels, Patrick J.; van Weert, Henk C.

    2009-01-01

    We wanted to summarize evidence about the diagnostic accuracy of the 5.07/10-g monofilament test in peripheral neuropathy. We conducted a systematic review of studies in which the accuracy of the 5.07/10-g monofilament was evaluated to detect peripheral neuropathy of any cause using nerve conduction

  5. Accuracy of Monofilament Testing to Diagnose Peripheral Neuropathy: A Systematic Review

    NARCIS (Netherlands)

    Dros, J.; Wewerinke, A.; Bindels, P.J.; van Weert, H.C.

    2009-01-01

    PURPOSE We wanted to summarize evidence about the diagnostic accuracy of the 5.07/10-g monofilament test in peripheral neuropathy. METHODS We conducted a systematic review of studies in which the accuracy of the 5.07/10-g monofilament was evaluated to detect peripheral neuropathy of any cause using

  6. Systematic literature review and meta-analysis of diagnostic test accuracy in Alzheimer's disease and other dementia using autopsy as standard of truth.

    Science.gov (United States)

    Cure, Sandrine; Abrams, Keith; Belger, Mark; Dell'agnello, Grazzia; Happich, Michael

    2014-01-01

    Early diagnosis of Alzheimer's disease (AD) is crucial to implement the latest treatment strategies and management of AD symptoms. Diagnostic procedures play a major role in this detection process but evidence on their respective accuracy is still limited. To conduct a systematic literature review on the sensitivity and specificity of different test modalities to identify AD patients and perform meta-analyses on the test accuracy values of studies focusing on autopsy-confirmation as the standard of truth. The systematic review identified all English papers published between 1984 and 2011 on diagnostic imaging tests and cerebrospinal fluid biomarkers including results on the newest technologies currently investigated in this area. Meta-analyses using bivariate fixed and random-effect models and a hierarchical summary receiver operating characteristic (HSROC) random-effect model were applied. Out of the 1,189 records, 20 publications were identified to report the accuracy of diagnostic tests in distinguishing autopsy-confirmed AD patients from other dementia types and healthy controls. Looking at all tests and comparator populations together, sensitivity was calculated at 85.4% (95% confidence interval [CI]: 80.9%-90.0%) and specificity at 77.7% (95% CI: 70.2%-85.1%). The area under the HSROC curve was 0.88. Sensitivity and specificity values were higher for imaging procedures, and slightly lower for CSF biomarkers. Test-specific random-effect models could not be calculated due to the small number of studies. The review and meta-analysis point to a slight advantage of imaging procedures in correctly detecting AD patients but also highlight the limited evidence on autopsy-confirmations and heterogeneity in study designs.

  7. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  8. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answer could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  9. Physical examination tests for the diagnosis of posterior cruciate ligament rupture: a systematic review.

    Science.gov (United States)

    Kopkow, Christian; Freiberg, Alice; Kirschner, Stephan; Seidler, Andreas; Schmitt, Jochen

    2013-11-01

    Systematic literature review. To summarize and evaluate research on the accuracy of physical examination tests for diagnosis of posterior cruciate ligament (PCL) tear. Rupture of the PCL is a severe knee injury that can lead to delayed rehabilitation, instability, or chronic knee pathologies. To our knowledge, there is currently no systematic review of studies on the diagnostic accuracy of clinical examination tests to evaluate the integrity of the PCL. A comprehensive systematic literature search was conducted in MEDLINE from 1946, Embase from 1974, and the Allied and Complementary Medicine Database from 1985 until April 30, 2012. Studies were considered eligible if they compared the results of physical examination tests performed in the context of a PCL physical examination to those of a reference standard (arthroscopy, arthrotomy, magnetic resonance imaging). Methodological quality assessment was performed by 2 independent reviewers using the revised version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The search strategy revealed 1307 articles, of which 11 met the inclusion criteria for this review. In these studies, 11 different physical examination tests were identified. Due to differences in study types, different patient populations, and methodological quality, meta-analysis was not indicated. Presently, most physical examination tests have not been evaluated sufficiently enough to be confident in their ability to either confirm or rule out a PCL tear. The diagnostic accuracy of physical examination tests to assess the integrity of the PCL is largely unknown. There is a strong need for further research in this area. Level of Evidence Diagnosis, level 3a.

  10. Systematic review and modelling of the cost-effectiveness of cardiac magnetic resonance imaging compared with current existing testing pathways in ischaemic cardiomyopathy.

    Science.gov (United States)

    Campbell, Fiona; Thokala, Praveen; Uttley, Lesley C; Sutton, Anthea; Sutton, Alex J; Al-Mohammad, Abdallah; Thomas, Steven M

    2014-09-01

    Cardiac magnetic resonance imaging (CMR) is increasingly used to assess patients for myocardial viability prior to revascularisation. This is important to ensure that only those likely to benefit are subjected to the risk of revascularisation. To assess current evidence on the accuracy and cost-effectiveness of CMR to test patients prior to revascularisation in ischaemic cardiomyopathy; to develop an economic model to assess cost-effectiveness for different imaging strategies; and to identify areas for further primary research. Initial searches were conducted in March 2011 in the following databases with dates: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to March 2011); Bioscience Information Service (BIOSIS) Previews via Web of Science (1969 to March 2011); EMBASE via Ovid (1974 to March 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to March 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to March 2011); Database of Abstracts of Reviews of Effects via The Cochrane Library (1994 to March 2011); NHS Economic Evaluation Database via The Cochrane Library (1968 to March 2011); Health Technology Assessment Database via The Cochrane Library (1989 to March 2011); and the Science Citation Index via Web of Science (1900 to March 2011). Additional searches were conducted from October to November 2011 in the following databases with dates: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to November 2011); BIOSIS Previews via Web of Science (1969 to October 2011); EMBASE via Ovid (1974 to November 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to November 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to November 2011); Database of Abstracts of Reviews of Effects via The Cochrane…

  11. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part II. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it is required to simultaneously analyze a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies.
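    To make the logit-scale idea concrete, the sketch below pools sensitivity and specificity by inverse-variance weighting of their logits. This is a deliberate simplification: the bivariate model recommended in the article additionally models the (typically negative) between-study correlation of sensitivity and specificity, which requires specialist fitting routines. The study counts are invented.

```python
# Simplified univariate pooling of logit sensitivity and logit specificity;
# the full bivariate model would jointly fit both with their correlation.
import math

# (true positives, false negatives, false positives, true negatives),
# invented example data for three studies
studies = [(45, 5, 10, 90), (30, 10, 8, 72), (60, 15, 20, 105)]

def pooled_logit(pairs):
    num = den = 0.0
    for events, nonevents in pairs:
        # 0.5 continuity correction guards against zero cells
        logit = math.log((events + 0.5) / (nonevents + 0.5))
        var = 1.0 / (events + 0.5) + 1.0 / (nonevents + 0.5)
        num += logit / var
        den += 1.0 / var
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

sens = pooled_logit([(tp, fn) for tp, fn, fp, tn in studies])
spec = pooled_logit([(tn, fp) for tp, fn, fp, tn in studies])
print(f"pooled sensitivity {sens:.3f}, pooled specificity {spec:.3f}")
```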

  12. Thermal sensation models: a systematic comparison.

    Science.gov (United States)

    Koelblen, B; Psikuta, A; Bogdan, A; Annaheim, S; Rossi, R M

    2017-05-01

    Thermal sensation models, capable of predicting human perception of thermal surroundings, are commonly used to assess given indoor conditions. These models differ in many aspects, such as the number and type of input conditions, the range of conditions in which the models can be applied, and the complexity of equations. Moreover, the models are associated with various thermal sensation scales. In this study, a systematic comparison of seven existing thermal sensation models has been performed with regard to exposures including various air temperatures, clothing thermal insulation, and metabolic rate values after a careful investigation of the models' range of applicability. Thermo-physiological data needed as input for some of the models were obtained from a mathematical model for human physiological responses. The comparison showed differences between models' predictions for the analyzed conditions, mostly higher than typical intersubject differences in votes. Therefore, it can be concluded that the choice of model strongly influences the assessment of indoor spaces. The issue of comparing different thermal sensation scales has also been discussed. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Testing the effectiveness of simplified search strategies for updating systematic reviews.

    Science.gov (United States)

    Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana

    2017-08-01

    The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated: sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened, resulting in the final inclusion of 163 randomized controlled trials. Compared with the reference search, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80%, while precision and NNR were 5.4 times better compared with the reference search. The SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic review, particularly considering budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable when updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.
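
    To make the reported measures concrete, here is a minimal sketch (hypothetical counts, not the paper's data) of how sensitivity, precision and NNR are computed for one review update:

```python
# Hypothetical counts for one review update, for illustration only
total_relevant = 20        # includable trials found by the comprehensive reference search
sss_hits = 450             # total citations returned by the simplified search strategy
sss_relevant = 18          # includable trials among those hits

sensitivity = sss_relevant / total_relevant   # share of includable trials the SSS finds
precision = sss_relevant / sss_hits           # share of SSS hits that are includable
nnr = 1 / precision                           # citations screened per includable trial

print(f"sensitivity = {sensitivity:.1%}, precision = {precision:.1%}, NNR = {nnr:.0f}")
```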

  14. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems, where the original UNIFAC model is used. Using the proposed method to estimate the interaction parameters from VLE data only, a better phase equilibria prediction for both VLE and SLE was obtained. The results were validated and compared with the original model performance...
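
    For background (the standard UNIFAC formulation, not taken from the abstract): the activity coefficient splits into a combinatorial and a residual part, and the estimated group-interaction parameters enter through the residual term:

```latex
\ln \gamma_i \;=\; \ln \gamma_i^{\mathrm{C}} + \ln \gamma_i^{\mathrm{R}},
\qquad
\ln \gamma_i^{\mathrm{R}} \;=\; \sum_{k} \nu_k^{(i)} \bigl( \ln \Gamma_k - \ln \Gamma_k^{(i)} \bigr)
```

    where ν_k^(i) is the number of groups of type k in molecule i, Γ_k is the residual activity coefficient of group k in the mixture, and Γ_k^(i) is the same quantity in pure component i.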

  15. The reliability of physical examination tests for the diagnosis of anterior cruciate ligament rupture--A systematic review.

    Science.gov (United States)

    Lange, Toni; Freiberg, Alice; Dröge, Patrik; Lützner, Jörg; Schmitt, Jochen; Kopkow, Christian

    2015-06-01

    Systematic literature review. Despite their frequent application in routine care, a systematic review on the reliability of clinical examination tests to evaluate the integrity of the ACL is missing. To summarize and evaluate intra- and interrater reliability research on physical examination tests used for the diagnosis of ACL tears. A comprehensive systematic literature search was conducted in MEDLINE, EMBASE and AMED until May 30th 2013. Studies were included if they assessed the intra- and/or interrater reliability of physical examination tests for the integrity of the ACL. Methodological quality was evaluated with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. 110 hits were obtained, of which seven articles finally met the inclusion criteria. These studies examined the reliability of four physical examination tests. Intrarater reliability was assessed in three studies and ranged from fair to almost perfect (Cohen's k = 0.22-1.00). Interrater reliability was assessed in all included studies and ranged from slight to almost perfect (Cohen's k = 0.02-0.81). The Lachman test is the physical test with the highest intrarater reliability (Cohen's k = 1.00), and the Lachman test performed in the prone position is the test with the highest interrater reliability (Cohen's k = 0.81). The included studies were partly of low methodological quality. A meta-analysis could not be performed due to heterogeneity in the study populations, reliability measures and methodological quality of the included studies. Systematic investigations of the reliability of physical examination tests to assess the integrity of the ACL are scarce and of varying methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
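
    For reference, Cohen's k compares observed agreement between two raters with the agreement expected by chance. A minimal sketch with hypothetical ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(counts_a) | set(counts_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical Lachman test ratings by two examiners (1 = rupture, 0 = intact)
examiner_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
examiner_2 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
print(f"kappa = {cohens_kappa(examiner_1, examiner_2):.2f}")  # ~0.58
```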

  16. FROM ATOMISTIC TO SYSTEMATIC COARSE-GRAINED MODELS FOR MOLECULAR SYSTEMS

    KAUST Repository

    Harmandaris, Vagelis; Kalligiannaki, Evangelia; Katsoulakis, Markos; Plechac, Petr

    2017-01-01

    The development of systematic (rigorous) coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of methods for obtaining optimal parametrized coarse-grained models, starting from

  17. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, not without its own issues, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
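
    As an illustration of the approach (not a model from the reviewed studies), the sketch below implements a toy routine-activity-style ABM: offenders, guardians and fixed targets move on a grid, and a "crime" is recorded whenever an offender meets a target with no guardian present. All parameters are hypothetical.

```python
import random
from collections import Counter

random.seed(1)
GRID, STEPS = 20, 1000

def walk(pos):
    """Move one cell in a random direction on a wrapping grid."""
    x, y = pos
    return ((x + random.choice((-1, 0, 1))) % GRID,
            (y + random.choice((-1, 0, 1))) % GRID)

offenders = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(15)]
guardians = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(10)]
targets = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(40)}

crimes = Counter()
for _ in range(STEPS):
    offenders = [walk(o) for o in offenders]
    guardians = [walk(g) for g in guardians]
    guarded = set(guardians)
    for o in offenders:
        # routine activity theory: motivated offender + suitable target
        # in the absence of a capable guardian -> crime event
        if o in targets and o not in guarded:
            crimes[o] += 1

print("emergent hot spots:", crimes.most_common(3))
```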

  18. Internet-Based Direct-to-Consumer Genetic Testing: A Systematic Review.

    Science.gov (United States)

    Covolo, Loredana; Rubinelli, Sara; Ceretti, Elisabetta; Gelatti, Umberto

    2015-12-14

    Direct-to-consumer genetic tests (DTC-GT) are easily purchased through the Internet, independent of a physician referral or approval for testing, allowing the retrieval of genetic information outside the clinical context. There is a broad debate about the tests' validity, their impact on individuals, and what people know and perceive about them. The aim of this review was to collect evidence on DTC-GT from a comprehensive perspective that unravels the complexity of the phenomenon. A systematic search was carried out through PubMed, Web of Knowledge, and Embase, in addition to Google Scholar, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist with the key term "Direct-to-consumer genetic test." In the final sample, 118 articles were identified. Articles were summarized in five categories according to their focus on (1) knowledge of, attitude toward use of, and perception of DTC-GT (n=37), (2) the impact of genetic risk information on users (n=37), (3) the opinion of health professionals (n=20), (4) the content of websites selling DTC-GT (n=16), and (5) the scientific evidence and clinical utility of the tests (n=14). Most of the articles analyzed the attitude, knowledge, and perception of DTC-GT, highlighting an interest in using DTC-GT, along with the need for a health care professional to help interpret the results. The articles investigating the content analysis of the websites selling these tests are in agreement that the information provided by the companies about genetic testing is not completely comprehensive for the consumer. Given that risk information can modify consumers' health behavior, there are surprisingly few studies carried out on actual consumers, and they do not confirm the overall concerns on the possible impact of DTC-GT. Data from studies that investigate the quality of the tests offered confirm that they are not informative, have little predictive power, and do not measure genetic risk.

  19. Bayesian Network Models in Cyber Security: A Systematic Review

    OpenAIRE

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security, especially due to their capability to overcome data limitations. This is also evidenced by the growth of BN model development in cyber security. However, a comprehensive comparison and analysis of these models is missing. In this paper, we conduct a systematic review of the scientific literature and identify 17 standard BN models in cyber security. We analyse these models based on 9 different criteri...
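
    To illustrate why BNs cope well with sparse data, the sketch below computes the posterior probability of an attack given an alert for a minimal two-node network; the conditional probabilities are hypothetical placeholders, not values from the reviewed models.

```python
# Hypothetical conditional probabilities for a two-node network: attack -> alert
p_attack = 0.01                  # prior probability of an ongoing attack
p_alert_if_attack = 0.90         # detector sensitivity
p_alert_if_benign = 0.05         # false-alarm rate

# Bayes' rule: posterior probability of an attack once an alert fires
p_alert = p_alert_if_attack * p_attack + p_alert_if_benign * (1 - p_attack)
p_attack_if_alert = p_alert_if_attack * p_attack / p_alert
print(f"P(attack | alert) = {p_attack_if_alert:.3f}")   # ~0.154
```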

  20. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  1. Explanatory item response modelling of an abstract reasoning assessment: A case for modern test design

    OpenAIRE

    Helland, Fredrik

    2016-01-01

    Assessment is an integral part of society and education, and for this reason it is important to know what you measure. This thesis is about explanatory item response modelling of an abstract reasoning assessment, with the objective to create a modern test design framework for automatic generation of valid and precalibrated items of abstract reasoning. Modern test design aims to strengthen the connections between the different components of a test, with a stress on strong theory, systematic it...
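
    Item response modelling of this kind typically builds on the Rasch (one-parameter logistic) model, in which the probability of a correct response depends only on the difference between person ability and item difficulty. A minimal sketch with hypothetical values:

```python
import math

def rasch_probability(theta, difficulty):
    """Rasch (1PL) probability that a person of ability theta solves an item."""
    return 1 / (1 + math.exp(-(theta - difficulty)))

# Hypothetical item whose difficulty was predicted from its design features
print(f"P(correct) = {rasch_probability(theta=0.5, difficulty=-0.2):.2f}")  # ~0.67
```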

  2. Cue-Controlled Relaxation and Systematic Desensitization versus Nonspecific Factors in Treating Test Anxiety.

    Science.gov (United States)

    Russell, Richard K.; Lent, Robert W.

    1982-01-01

    Compared the efficacy of two behavioral anxiety reduction techniques against "subconscious reconditioning," an empirically derived placebo method. Examination of within-group changes showed systematic desensitization produced significant reductions in test and trait anxiety, and remaining treatments and the placebo demonstrated…

  3. Should we assess climate model predictions in light of severe tests?

    Science.gov (United States)

    Katzav, Joel

    2011-06-01

    According to Austro-British philosopher Karl Popper, a system of theoretical claims is scientific only if it is methodologically falsifiable, i.e., only if systematic attempts to falsify or severely test the system are being carried out [Popper, 2005, pp. 20, 62]. He holds that a test of a theoretical system is severe if and only if it is a test of the applicability of the system to a case in which the system's failure is likely in light of background knowledge, i.e., in light of scientific assumptions other than those of the system being tested [Popper, 2002, p. 150]. Popper counts the 1919 tests of general relativity's then unlikely predictions of the deflection of light in the Sun's gravitational field as severe. An implication of Popper's above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP.

  4. Accuracy of clinical tests in the diagnosis of anterior cruciate ligament injury: A systematic review

    NARCIS (Netherlands)

    M.S. Swain (Michael S.); N. Henschke (Nicholas); S.J. Kamper (Steven); A.S. Downie (Aron S.); B.W. Koes (Bart); C. Maher (Chris)

    2014-01-01

    Background: Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury, but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury. Methods: Study Design: Systematic

  5. Hospitality and Tourism Online Review Research: A Systematic Analysis and Heuristic-Systematic Model

    Directory of Open Access Journals (Sweden)

    Sunyoung Hlee

    2018-04-01

    With the tremendous growth and potential of online consumer reviews, online reviews of hospitality and tourism are now playing a significant role in consumer attitudes and buying behaviors. This study reviewed and analyzed hospitality- and tourism-related articles published in academic journals. A systematic approach was used to analyze 55 research articles published between January 2008 and December 2017. This study presents a brief synthesis of the research by investigating content-related characteristics of hospitality and tourism online reviews (HTORs) in different market segments. Two research questions were addressed. Building upon our literature analysis, we used the heuristic-systematic model (HSM) to summarize and classify the characteristics affecting consumer perception in previous HTOR studies. We believe that the framework helps researchers to identify research topics in the extended HTOR literature and to point out possible directions for future studies.

  6. Background model systematics for the Fermi GeV excess

    Energy Technology Data Exchange (ETDEWEB)

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power law with break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b b̄ final states a dark matter mass of m_χ = 49 (+6.4/−5.4) GeV.
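
    For reference, the generic broken power-law form fitted to such a spectrum can be written as follows (standard parameterization, not quoted from the paper; γ1 and γ2 are the spectral indices below and above the break):

```latex
\frac{\mathrm{d}N}{\mathrm{d}E} \;\propto\;
\begin{cases}
(E/E_{\mathrm{break}})^{-\gamma_1}, & E \le E_{\mathrm{break}}\\
(E/E_{\mathrm{break}})^{-\gamma_2}, & E > E_{\mathrm{break}}
\end{cases}
```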

  7. The Six-Minute Walk Test in Chronic Pediatric Conditions: A Systematic Review of Measurement Properties

    NARCIS (Netherlands)

    Bart Bartels; Janke de Groot; Caroline Terwee

    2013-01-01

    Background The Six-Minute Walk Test (6MWT) is increasingly being used as a functional outcome measure for chronic pediatric conditions. Knowledge about its measurement properties is needed to determine whether it is an appropriate test to use. Purpose The purpose of this study was to systematically

  8. Internet-Based Direct-to-Consumer Genetic Testing: A Systematic Review

    Science.gov (United States)

    Rubinelli, Sara; Ceretti, Elisabetta; Gelatti, Umberto

    2015-01-01

    Background Direct-to-consumer genetic tests (DTC-GT) are easily purchased through the Internet, independent of a physician referral or approval for testing, allowing the retrieval of genetic information outside the clinical context. There is a broad debate about the testing validity, their impact on individuals, and what people know and perceive about them. Objective The aim of this review was to collect evidence on DTC-GT from a comprehensive perspective that unravels the complexity of the phenomenon. Methods A systematic search was carried out through PubMed, Web of Knowledge, and Embase, in addition to Google Scholar according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist with the key term “Direct-to-consumer genetic test.” Results In the final sample, 118 articles were identified. Articles were summarized in five categories according to their focus on (1) knowledge of, attitude toward use of, and perception of DTC-GT (n=37), (2) the impact of genetic risk information on users (n=37), (3) the opinion of health professionals (n=20), (4) the content of websites selling DTC-GT (n=16), and (5) the scientific evidence and clinical utility of the tests (n=14). Most of the articles analyzed the attitude, knowledge, and perception of DTC-GT, highlighting an interest in using DTC-GT, along with the need for a health care professional to help interpret the results. The articles investigating the content analysis of the websites selling these tests are in agreement that the information provided by the companies about genetic testing is not completely comprehensive for the consumer. Given that risk information can modify consumers’ health behavior, there are surprisingly few studies carried out on actual consumers and they do not confirm the overall concerns on the possible impact of DTC-GT. Data from studies that investigate the quality of the tests offered confirm that they are not informative, have little predictive

  9. Systematic experimental based modeling of a rotary piezoelectric ultrasonic motor

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit is suggested...

  10. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    Science.gov (United States)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded on small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.
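
    For reference, the projected correlation function used as a constraint here is conventionally defined by integrating the correlation function along the line of sight (standard definition, not specific to this paper):

```latex
w_p(r_p) \;=\; 2 \int_{0}^{\pi_{\max}} \xi(r_p, \pi)\, \mathrm{d}\pi
```

    where r_p and π are the transverse and line-of-sight pair separations and π_max is a finite integration limit chosen to suppress redshift-space distortions.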

  11. Physical examination tests for screening and diagnosis of cervicogenic headache: A systematic review.

    Science.gov (United States)

    Rubio-Ochoa, J; Benítez-Martínez, J; Lluch, E; Santacruz-Zaragozá, S; Gómez-Contreras, P; Cook, C E

    2016-02-01

    It has been suggested that differential diagnosis of headaches should consist of a robust subjective examination and a detailed physical examination of the cervical spine. Cervicogenic headache (CGH) is a form of headache that involves referred pain from the neck. To our knowledge, no studies have summarized the reliability and diagnostic accuracy of physical examination tests for CGH. The aim of this study was to summarize the reliability and diagnostic accuracy of physical examination tests used to diagnose CGH. A systematic review following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines was performed in four electronic databases (MEDLINE, Web of Science, Embase and Scopus). Full-text reports concerning physical tests for the diagnosis of CGH that reported clinimetric properties for the assessment of CGH were included and screened for methodological quality. Quality Appraisal for Reliability Studies (QAREL) and Quality Assessment of Studies of Diagnostic Accuracy (QUADAS-2) scores were completed to assess article quality. Eight articles were retrieved for quality assessment and data extraction. Studies investigating the diagnostic reliability of physical examination tests for CGH scored lower on methodological quality (higher risk of bias) than those of diagnostic accuracy. There is sufficient evidence showing high levels of reliability and diagnostic accuracy of the selected physical examination tests for the diagnosis of CGH. The cervical flexion-rotation test (CFRT) exhibited both the highest reliability and the strongest diagnostic accuracy for the diagnosis of CGH. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Caffeine challenge test and panic disorder: a systematic literature review.

    Science.gov (United States)

    Vilarim, Marina Machado; Rocha Araujo, Daniele Marano; Nardi, Antonio Egidio

    2011-08-01

    This systematic review aimed to examine the results of studies that have investigated the induction of panic attacks and/or the anxiogenic effect of the caffeine challenge test in patients with panic disorder. The literature search was performed in PubMed, Biblioteca Virtual em Saúde and the ISI Web of Knowledge. The words used for the search were caffeine, caffeine challenge test, panic disorder, panic attacks and anxiety disorder. In total, we selected eight randomized, double-blind studies where caffeine was administered orally, and none of them controlled for confounding factors in the analysis. The percentage of loss during follow-up ranged between 14.3% and 73.1%. The eight studies all showed a positive association between caffeine and anxiogenic effects and/or panic disorder.

  13. Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.

    Science.gov (United States)

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew

    2017-09-01

    Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. The articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on numbers of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggest an increased interest in simulation modelling in healthcare.

  14. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    Directory of Open Access Journals (Sweden)

    Morris Denise

    2007-09-01

    Background: The first step of handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far, the possibility of and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after-treatment (8 codes). The objective was to evaluate, in an international study, the usefulness, applicability and sufficiency of a simple model for the systematic registration of clinical HP procedures in daily practice. Methods: The multicentre project was carried out in 19 departments/hospitals in 6 countries in a clinical setup. The study consisted of three parts in accordance with the objectives. A: Individual test. 20 consecutive medical records from each participating department/hospital were coded by the (coding) specialists at the local department/hospital, exclusively (n = 5,529 of 5,700 possible tests in total). B: Common test. 14 standardized medical records were coded by all the specialists from 17 departments/hospitals, who returned 3,046 of 3,570 tests. C: Specialist evaluation. The specialists from the 19 departments/hospitals evaluated whether the codes were useful, applicable and sufficient for the registration in their own department/hospital (239 of 285). Results: A: In 97 to 100% of the local patient pathways the specialists were able to evaluate whether there was documentation of HP activities in the medical record to be coded. B: Inter-rater reliability on the use of the codes was 93% (57 to 100%) and 71% (31

  15. Hall Thruster Thermal Modeling and Test Data Correlation

    Science.gov (United States)

    Myers, James; Kamhawi, Hani; Yim, John; Clayman, Lauren

    2016-01-01

    The life of Hall effect thrusters is primarily limited by plasma erosion and thermal related failures. NASA Glenn Research Center (GRC), in cooperation with the Jet Propulsion Laboratory (JPL), has recently completed development of a Hall thruster with specific emphasis on mitigating these limitations. Extending the operational life of Hall thrusters makes them more suitable for some of NASA's longer duration interplanetary missions. This paper documents the thermal model development, refinement and correlation of results with thruster test data. Correlation was achieved by minimizing uncertainties in model input and recognizing the relevant parameters for effective model tuning. Throughout the thruster design phase the model was used to evaluate design options and systematically reduce component temperatures. Hall thrusters are inherently complex assemblies of high temperature components relying on internal conduction and external radiation for heat dispersion and rejection. System solutions are necessary in most cases to fully assess the benefits and/or consequences of any potential design change. Thermal model correlation is critical, since thruster operational parameters can push some components/materials beyond their temperature limits. This thruster incorporates a state-of-the-art magnetic shielding system to reduce plasma erosion and, to a lesser extent, power/heat deposition. Additionally, a comprehensive thermal design strategy was employed to reduce temperatures of critical thruster components (primarily the magnet coils and the discharge channel). Long term wear testing is currently underway to assess the effectiveness of these systems and consequently thruster longevity.

  16. USING OF BYOD MODEL FOR TESTING OF EDUCATIONAL ACHIEVEMENTS ON THE BASIS OF GOOGLE SEARCH SERVICES

    Directory of Open Access Journals (Sweden)

    Tetiana Bondarenko

    2016-04-01

    A technology for using learners' own mobile devices to test educational achievements, based on the BYOD model, is offered in this article. The proposed technology is built on Google cloud services and provides comprehensive support for a testing system: creating appropriate forms, storing the results in cloud storage, processing test results, and managing the testing system through Google Calendar. A number of cloud-based software products that allow using the BYOD model for testing educational achievements are described, and their strengths and weaknesses are identified. The article also describes the stages of the process of testing students' academic achievements on the basis of Google services using the BYOD model. The proposed approaches extend the space and time available for testing, make the test procedure more flexible and systematic, and add elements of a computer game to the testing procedure. The BYOD model opens up broad prospects for the implementation of ICT in all forms of the learning process, and particularly in the testing of educational achievements, in view of the limited computing resources in education.
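
    The abstract does not give the paper's exact pipeline; as a hedged sketch of the results-processing step, the Python below scores a hypothetical CSV export of a Google Forms response sheet (the file name, column names and answer key are assumptions for illustration):

```python
import csv

# Hypothetical answer key and CSV export of a Google Forms response sheet;
# the file name and column names below are assumptions for illustration
ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}

with open("responses.csv", newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle):
        score = sum(row.get(question) == answer
                    for question, answer in ANSWER_KEY.items())
        print(f"{row.get('Email Address', 'unknown')}: {score}/{len(ANSWER_KEY)}")
```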

  17. Barriers to workplace HIV testing in South Africa: a systematic review of the literature.

    Science.gov (United States)

    Weihs, Martin; Meyer-Weitz, Anna

    2016-01-01

    Low workplace HIV testing uptake makes effective management of HIV and AIDS difficult for South African organisations. Identifying barriers to workplace HIV testing is therefore crucial to inform urgently needed interventions aimed at increasing workplace HIV testing. This study reviewed the literature on workplace HIV testing barriers in South Africa. PubMed, ScienceDirect, PsycInfo and SA Publications were systematically searched. Studies needed to include measures to assess perceived or real barriers to participating in HIV Counselling and Testing (HCT) at the workplace, or to discuss perceived or real barriers to HIV testing at the workplace based on collected data; to provide qualitative or quantitative evidence related to the research topic; and to refer to workplaces in South Africa. Barriers were defined as any factor at an economic, social, personal, environmental or organisational level preventing employees from participating in workplace HIV testing. Four peer-reviewed studies were included, two with quantitative and two with qualitative study designs. The overarching barriers across the studies were fear of compromised confidentiality, of being stigmatised or discriminated against in the event of testing HIV positive or being observed participating in HIV testing, and a low personal risk perception. Furthermore, it appeared that awareness of an HIV-positive status hindered HIV testing at the workplace. Further research evidence on South African workplace barriers to HIV testing will enhance related interventions. This systematic review found only very little and contextualised evidence about workplace HCT barriers in South Africa, making it difficult to generalise and not really sufficient to inform new interventions aimed at increasing workplace HCT uptake.

  18. HIV Testing and Counseling Among Female Sex Workers: A Systematic Literature Review.

    Science.gov (United States)

    Tokar, Anna; Broerse, Jacqueline E W; Blanchard, James; Roura, Maria

    2018-02-20

    HIV testing uptake continues to be low among female sex workers (FSWs). We synthesize evidence on barriers and facilitators to HIV testing among FSWs, as well as frequencies of testing, willingness to test, and return rates to collect results. We systematically searched the MEDLINE/PubMed, EMBASE and SCOPUS databases for articles published in English between January 2000 and November 2017. Out of 5036 references screened, we retained 36 papers. The two barriers to HIV testing most commonly reported were financial and time costs (including low income, transportation costs, time constraints, and formal/informal payments), as well as the stigma and discrimination ascribed to HIV-positive people and sex workers. Social support facilitated testing, with consistently higher uptake amongst married FSWs and women who were encouraged to test by peers and managers. The consistent finding that social support facilitated HIV testing calls for its inclusion in current HIV testing strategies addressed to FSWs.

  19. Maturity Models in Supply Chain Sustainability: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Elisabete Correia

    2017-01-01

    A systematic literature review of supply chain maturity models with sustainability concerns is presented. The objective is to give insights into methodological issues related to maturity models, namely the research objectives; the research methods used to develop, validate and test them; the scope; and the main characteristics associated with their design. The literature review was performed based on journal articles and conference papers from 2000 to 2015 using the SCOPUS, Emerald Insight, EBSCO and Web of Science databases. Most of the analysed papers have as their main objective the development of maturity models and their validation. The case study is the methodology most widely used by researchers to develop and validate maturity models. From the sustainability perspective, the scope of the analysed maturity models is the Triple Bottom Line (TBL) and the environmental dimension, focusing on a specific process (eco-design and new product development) and without a broad SC perspective. The dominant characteristics associated with the design of the maturity models are maturity grids and a continuous representation. In addition, the results do not allow identifying a trend towards a specific number of maturity levels. The comprehensive review, analysis, and synthesis of the maturity model literature represent an important contribution to the organization of this research area, making it possible to clarify some of the confusion that exists about concepts, approaches and components of maturity models in sustainability. Various aspects associated with the maturity models (i.e., research objectives, research methods, scope and characteristics of the design of models) are explored to contribute to the evolution and significance of this multidimensional area.

  20. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
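
    A minimal sketch of the core likelihood computation (assuming, as in the RELM framework, independent Poisson-distributed counts per space-magnitude bin; the rates and counts below are hypothetical):

```python
import math

def joint_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed counts given forecast rates per bin."""
    return sum(-rate + n * math.log(rate) - math.lgamma(n + 1)
               for rate, n in zip(forecast_rates, observed_counts))

# Hypothetical space-magnitude bins: forecast rates vs. observed event counts
rates = [0.5, 1.2, 0.1, 2.0]
observed = [1, 0, 0, 3]
print(f"log-likelihood = {joint_log_likelihood(rates, observed):.3f}")
```

    Competing forecasts can then be compared by the difference of their joint log-likelihoods over the same catalog, which is the basis of the pairwise relative-consistency comparison described above.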

  1. Systematic Review of Health Economic Evaluations of Diagnostic Tests in Brazil: How accurate are the results?

    Science.gov (United States)

    Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho

    2017-08-01

    The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.

  2. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    Science.gov (United States)

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix, evaluating model components and methodology. Conclusion When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes, test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (HCV and HIV at the same time) might prove important for decision makers. PMID:26689908

  3. A systematic fault tree analysis based on multi-level flow modeling

    International Nuclear Information System (INIS)

    Gofuku, Akio; Ohara, Ai

    2010-01-01

    The fault tree analysis (FTA) is widely applied for the safety evaluation of large-scale and mission-critical systems. However, because the potential of FTA strongly depends on the skill of the analysts, problems have been pointed out regarding (1) education and training, (2) unreliable quality, (3) the necessity of expert knowledge, and (4) updating FTA results after the reconstruction of a target system. To address these problems, many techniques to systematize FTA activities by applying computer technologies have been proposed. However, these techniques only use structural information of a target system and do not use functional information, which is one of the important properties of an artifact. The principle of FTA is to trace comprehensively cause-effect relations from a top undesirable effect to anomaly causes. The tracing is similar to the causality estimation technique that the authors proposed to find plausible counter actions to prevent or to mitigate the undesirable behavior of plants based on a model built with a functional modeling technique, Multilevel Flow Modeling (MFM). The authors have extended this systematic technique to construct a fault tree (FT). This paper presents an algorithm for the systematic construction of FTs based on MFM models and demonstrates the applicability of the extended technique by the FT construction result of a cooling plant of nitric acid. (author)
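
    For background, the quantitative core of FTA is the propagation of basic-event probabilities through AND/OR gates. A minimal sketch assuming independent basic events and a hypothetical gate structure (not the nitric acid plant tree from the paper):

```python
from math import prod

def and_gate(probabilities):
    """Probability that all independent basic events occur."""
    return prod(probabilities)

def or_gate(probabilities):
    """Probability that at least one independent basic event occurs."""
    return 1 - prod(1 - p for p in probabilities)

# Hypothetical cooling-failure tree: top = pump failure AND (valve stuck OR sensor fault)
p_top = and_gate([0.01, or_gate([0.02, 0.05])])
print(f"top event probability ~ {p_top:.5f}")
```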

  4. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    The use of counts of unmarked migrating animals to monitor long-term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced the accuracy and precision of population trends by simulating migration count data with a known, constant rate of population change and by allowing the daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. Rather, to improve inference drawn about long-term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling to reduce the probability that individuals will
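
    A toy version of the compounding effect described here can be simulated in a few lines: if mean stopover duration (and hence the expected number of daily detections per individual) drifts upward while the true population is constant, a log-linear trend fit to season totals recovers a spurious positive trend. All numbers are hypothetical.

```python
import math, random

random.seed(42)
YEARS, TRUE_POP = 20, 1000   # population is constant: the true trend is zero

counts = []
for year in range(YEARS):
    mean_stay = 2.0 + 0.05 * year        # mean stopover duration drifts upward
    # each individual contributes ~mean_stay daily detections per season,
    # so season totals grow even though the population does not
    expected = TRUE_POP * mean_stay
    counts.append(random.gauss(expected, math.sqrt(expected)))

# Least-squares slope of log(count) on year ~ estimated annual rate of change
mean_x = (YEARS - 1) / 2
mean_y = sum(math.log(c) for c in counts) / YEARS
slope = (sum((x - mean_x) * (math.log(c) - mean_y) for x, c in enumerate(counts))
         / sum((x - mean_x) ** 2 for x in range(YEARS)))
print(f"estimated trend: {slope:+.2%} per year, despite a constant population")
```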

  5. Impact of systematic HIV testing on case finding and retention in care at a primary care clinic in South Africa.

    Science.gov (United States)

    Clouse, Kate; Hanrahan, Colleen F; Bassett, Jean; Fox, Matthew P; Sanne, Ian; Van Rie, Annelies

    2014-12-01

    Systematic, opt-out HIV counselling and testing (HCT) may diagnose individuals at lower levels of immunodeficiency but may impact loss to follow-up (LTFU) if healthier people are less motivated to engage and remain in HIV care. We explored LTFU and patient clinical outcomes under two different HIV testing strategies. We compared patient characteristics and retention in care between adults newly diagnosed with HIV by either voluntary counselling and testing (VCT) plus targeted provider-initiated counselling and testing (PITC) or systematic HCT at a primary care clinic in Johannesburg, South Africa. One thousand one hundred and forty-four adults were newly diagnosed by VCT/PITC and 1124 by systematic HCT. Two-thirds of diagnoses were in women. Median CD4 count at HIV diagnosis (251 vs. 264 cells/μl, P = 0.19) and proportion of individuals eligible for antiretroviral therapy (ART) (67.2% vs. 66.7%, P = 0.80) did not differ by HCT strategy. Within 1 year of HIV diagnosis, half were LTFU: 50.5% under VCT/PITC and 49.6% under systematic HCT (P = 0.64). The overall hazard of LTFU was not affected by testing policy (aHR 0.98, 95%CI: 0.87-1.10). Independent of HCT strategy, males, younger adults and those ineligible for ART were at higher risk of LTFU. Implementation of systematic HCT did not increase baseline CD4 count. Overall retention in the first year after HIV diagnosis was low (37.9%), especially among those ineligible for ART, but did not differ by testing strategy. Expansion of HIV testing should coincide with effective strategies to increase retention in care, especially among those not yet eligible for ART at initial diagnosis. © 2014 John Wiley & Sons Ltd.

  6. Systematic test on fast time resolution parallel plate avalanche counter

    International Nuclear Information System (INIS)

    Chen Yu; Li Guangwu; Gu Xianbao; Chen Yanchao; Zhang Gang; Zhang Wenhui; Yan Guohong

    2011-01-01

    Systematic tests of each detection unit of the parallel plate avalanche counter (PPAC) used in the fission multi-parameter measurement were performed with a 241Am α source to obtain the time resolution and position resolution. The detectors operate at 600 Pa of flowing isobutane with −600 V on the cathode. The time resolution was obtained by the TOF method and the position resolution by the delay-line method. The time resolution of the detection units is better than 400 ps, and the position resolution is 6 mm. The results show that the demands of the measurement are fully met. (authors)

  7. Economic evaluation of medical tests at the early phases of development: a systematic review of empirical studies.

    Science.gov (United States)

    Frempong, Samuel N; Sutton, Andrew J; Davenport, Clare; Barton, Pelham

    2018-02-01

    There is little specific guidance on the implementation of cost-effectiveness modelling at the early stage of test development. The aim of this study was to review the literature in this field to examine the methodologies and tools that have been employed to date. Areas Covered: A systematic review to identify relevant studies in established literature databases. Five studies were identified and included for narrative synthesis. These studies revealed that there is no consistent approach in this growing field. The perspective of patients and the potential for value of information (VOI) to provide information on the value of future research are often overlooked. Test accuracy is an essential consideration, with most studies having described and included all possible test results in their analysis, and conducted extensive sensitivity analyses on important parameters. Headroom analysis was considered in some instances, but at the early development stage (not the concept stage). Expert commentary: The techniques available to modellers that can demonstrate the value of conducting further research and product development (i.e. VOI analysis, headroom analysis) should be better utilized. There is a need for concerted efforts to develop rigorous methodology in this growing field to maximize the value and quality of such analysis.
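
    Of the tools mentioned, headroom analysis is the simplest to illustrate: it caps the price of a test by the monetized value of its expected health gain plus any downstream savings. A minimal sketch with hypothetical inputs:

```python
# Hypothetical early-stage inputs for a new diagnostic test, for illustration only
qaly_gain_per_patient = 0.02      # plausible health gain from earlier diagnosis
threshold_per_qaly = 20_000       # willingness-to-pay threshold (GBP per QALY)
downstream_cost_saving = 150      # avoided downstream costs per patient tested

# Headroom: the maximum price at which the test could still be cost-effective
headroom = qaly_gain_per_patient * threshold_per_qaly + downstream_cost_saving
print(f"maximum viable price per test ~ {headroom:.0f} GBP")
```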

  8. Systematic model building with flavor symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Plentinger, Florian

    2009-12-19

    The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, which form has their mass hierarchy, why is the mixing in the quark and lepton sectors so different or what is the structure of the Higgs sector. In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility, are Grand Unified Theories such as SU(5) or SO(10). GUTs are vertical symmetries since they unify the SM particles into multiplets and usually predict new particles which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, also horizontal symmetries, i.e., flavor symmetries, acting on the generation space of the SM particles, are promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both, theorists and experimentalists. These extensions of the SM can be also combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications on the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton flavor violating rare decays μ → eγ, τ → μγ, and τ → eγ which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries. Moreover, our request is to develop and perform a systematic model building approach with flavor symmetries and

  9. Systematic model building with flavor symmetries

    International Nuclear Information System (INIS)

    Plentinger, Florian

    2009-01-01

    The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, which form has their mass hierarchy, why is the mixing in the quark and lepton sectors so different or what is the structure of the Higgs sector. In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility, are Grand Unified Theories such as SU(5) or SO(10). GUTs are vertical symmetries since they unify the SM particles into multiplets and usually predict new particles which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, also horizontal symmetries, i.e., flavor symmetries, acting on the generation space of the SM particles, are promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both, theorists and experimentalists. These extensions of the SM can be also combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications on the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton flavor violating rare decays μ → eγ, τ → μγ, and τ → eγ which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries. Moreover, our request is to develop and perform a systematic model building approach with flavor symmetries and to search for phenomenological

  10. Screening strategies for atrial fibrillation: a systematic review and cost-effectiveness analysis.

    Science.gov (United States)

    Welton, Nicky J; McAleenan, Alexandra; Thom, Howard Hz; Davies, Philippa; Hollingworth, Will; Higgins, Julian Pt; Okoli, George; Sterne, Jonathan Ac; Feder, Gene; Eaton, Diane; Hingorani, Aroon; Fawsitt, Christopher; Lobban, Trudie; Bryden, Peter; Richards, Alison; Sofat, Reecha

    2017-05-01

    Atrial fibrillation (AF) is a common cardiac arrhythmia that increases the risk of thromboembolic events. Anticoagulation therapy to prevent AF-related stroke has been shown to be cost-effective. A national screening programme for AF may prevent AF-related events, but would involve a substantial investment of NHS resources. To conduct a systematic review of the diagnostic test accuracy (DTA) of screening tests for AF, update a systematic review of comparative studies evaluating screening strategies for AF, develop an economic model to compare the cost-effectiveness of different screening strategies and review observational studies of AF screening to provide inputs to the model. Systematic review, meta-analysis and cost-effectiveness analysis. Primary care. Adults. Screening strategies, defined by screening test, age at initial and final screens, screening interval and format of screening {systematic opportunistic screening [individuals offered screening if they consult with their general practitioner (GP)] or systematic population screening (when all eligible individuals are invited to screening)}. Sensitivity, specificity and diagnostic odds ratios; the odds ratio of detecting new AF cases compared with no screening; and the mean incremental net benefit compared with no screening. Two reviewers screened the search results, extracted data and assessed the risk of bias. A DTA meta-analysis was performed, and a decision tree and Markov model were used to evaluate the cost-effectiveness of the screening strategies. Diagnostic test accuracy depended on the screening test and how it was interpreted. In general, the screening tests identified in our review had high sensitivity (> 0.9). Systematic population and systematic opportunistic screening strategies were found to be similarly effective, with an estimated 170 individuals needed to be screened to detect one additional AF case compared with no screening. Systematic opportunistic screening was more likely to be cost
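
    The review's summary quantities reduce to simple arithmetic, sketched below in Python. The number needed to screen (170) is taken from the abstract; the cost, QALY and willingness-to-pay figures are illustrative placeholders, not values from the report.

        # Number needed to screen and incremental net benefit, the two summary
        # statistics used above. Monetary/QALY inputs are invented placeholders.
        def number_needed_to_screen(extra_cases: float, people_screened: float) -> float:
            """People screened per additional AF case detected vs. no screening."""
            return people_screened / extra_cases

        def incremental_net_benefit(delta_qaly: float, delta_cost: float,
                                    wtp_per_qaly: float = 20_000.0) -> float:
            """INB = lambda * delta_QALY - delta_cost; positive favours screening."""
            return wtp_per_qaly * delta_qaly - delta_cost

        # e.g. 100 extra cases detected after screening 17,000 people reproduces
        # the review's figure of 170 screened per additional case:
        print(number_needed_to_screen(100, 17_000))                         # 170.0
        print(incremental_net_benefit(delta_qaly=0.010, delta_cost=150.0))  # 50.0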

  11. Nitrate supplementation improves physical performance specifically in non-athletes during prolonged open-ended tests: a systematic review and meta-analysis.

    Science.gov (United States)

    Campos, Helton O; Drummond, Lucas R; Rodrigues, Quezia T; Machado, Frederico S M; Pires, Washington; Wanner, Samuel P; Coimbra, Cândido C

    2018-03-01

    Nitrate (NO3−) is an ergogenic nutritional supplement that is widely used to improve physical performance. However, the effectiveness of NO3− supplementation has not been systematically investigated in individuals with different physical fitness levels. The present study analysed whether different fitness levels (non-athletes v. athletes or classification of performance levels), duration of the test used to measure performance (short v. long duration) and the test protocol (time trials v. open-ended tests v. graded-exercise tests) influence the effects of NO3− supplementation on performance. This systematic review and meta-analysis was conducted and reported according to the guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A systematic search of electronic databases, including PubMed, Web of Science, SPORTDiscus and ProQuest, was performed in August 2017. On the basis of the search and inclusion criteria, fifty-four and fifty-three placebo-controlled studies evaluating the effects of NO3− supplementation on performance in humans were included in the systematic review and meta-analysis, respectively. NO3− supplementation was ergogenic in non-athletes (mean effect size (ES) 0·25; 95 % CI 0·11, 0·38), particularly in evaluations of performance using long-duration open-ended tests (ES 0·47; 95 % CI 0·23, 0·71). In contrast, NO3− supplementation did not enhance the performance of athletes (ES 0·04; 95 % CI -0·05, 0·15). After objectively classifying the participants into different performance levels, the frequency of trials showing ergogenic effects in individuals classified at lower levels was higher than that in individuals classified at higher levels. Thus, the present study indicates that dietary NO3− supplementation improves physical performance in non-athletes, particularly during long-duration open-ended tests.
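
    The pooled effect sizes with 95% confidence intervals quoted above are typically obtained with DerSimonian-Laird random-effects pooling. A minimal Python sketch follows; the per-study effects and variances are invented for illustration.

        # Sketch of DerSimonian-Laird random-effects pooling, the standard way to
        # obtain a mean effect size (ES) with a 95% CI. Inputs are invented.
        import math

        def random_effects_pool(effects, variances):
            w = [1.0 / v for v in variances]                 # fixed-effect weights
            fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
            q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
            c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
            tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
            w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
            pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
            se = math.sqrt(1.0 / sum(w_star))
            return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

        es, ci = random_effects_pool([0.40, 0.10, 0.30, 0.20],
                                     [0.04, 0.02, 0.05, 0.03])
        print(f"pooled ES = {es:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")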

  12. The use of standardised short-term and working memory tests in aphasia research: a systematic review.

    Science.gov (United States)

    Murray, Laura; Salis, Christos; Martin, Nadine; Dralle, Jenny

    2018-04-01

    Impairments of short-term and working memory (STM, WM), both verbal and non-verbal, are ubiquitous in aphasia. Increasing interest in assessing STM and WM in aphasia research and clinical practice as well as a growing evidence base of STM/WM treatments for aphasia warrant an understanding of the range of standardised STM/WM measures that have been utilised in aphasia. To date, however, no previous systematic review has focused on aphasia. Accordingly, the goals of this systematic review were: (1) to identify standardised tests of STM and WM utilised in the aphasia literature, (2) to evaluate critically the psychometric strength of these tests, and (3) to appraise critically the quality of the investigations utilising these tests. Results revealed that a very limited number of standardised tests, in the verbal and non-verbal domains, had robust psychometric properties. Standardisation samples to elicit normative data were often small, and most measures exhibited poor validity and reliability properties. Studies using these tests inconsistently documented demographic and aphasia variables essential to interpreting STM/WM test outcomes. In light of these findings, recommendations are provided to foster, in the future, consistency across aphasia studies and confidence in STM/WM tests as assessment and treatment outcome measures.

  13. Information Processing and Risk Perception: An Adaptation of the Heuristic-Systematic Model.

    Science.gov (United States)

    Trumbo, Craig W.

    2002-01-01

    Describes heuristic-systematic information-processing model and risk perception--the two major conceptual areas of the analysis. Discusses the proposed model, describing the context of the data collections (public health communication involving cancer epidemiology) and providing the results of a set of three replications using the proposed model.…

  14. Verification of experimental modal modeling using HDR (Heissdampfreaktor) dynamic test data

    International Nuclear Information System (INIS)

    Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.

    1983-01-01

    Experimental modal modeling involves the determination of the modal parameters of the model of a structure from recorded input-output data from dynamic tests. Though commercial modal analysis algorithms are widely used in many industries, their ability to identify a set of reliable modal parameters of an as-built nuclear power plant structure has not been systematically verified. This paper describes the effort to verify MODAL-PLUS, a widely used modal analysis code, using recorded data from the dynamic tests performed on the reactor building of the Heissdampfreaktor, situated near Frankfurt, Federal Republic of Germany. In the series of dynamic tests on HDR in 1979, the reactor building was subjected to forced vibrations from different types and levels of dynamic excitations. Two sets of HDR containment building input-output data were chosen for MODAL-PLUS analyses. To reduce the influence of nonlinear behavior on the results, these sets were chosen so that the levels of excitation were relatively low and about the same in the two sets. The attempted verification was only partially successful, in that only one modal model, with a limited range of validity, could be synthesized and in that the goodness of fit could be verified only in this limited range.

  15. Real-time PCR Machine System Modeling and a Systematic Approach for the Robust Design of a Real-time PCR-on-a-Chip System

    Directory of Open Access Journals (Sweden)

    Da-Sheng Lee

    2010-01-01

    Full Text Available Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, this proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design.
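
    Accelerated life testing of this kind usually rests on an acceleration model such as Arrhenius. The Python sketch below shows the idea; the activation energy and temperatures are assumptions for illustration, not parameters from the paper.

        # Arrhenius acceleration factor used in accelerated life testing to
        # extrapolate stressed-test hours to use-condition lifetime.
        import math

        K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

        def acceleration_factor(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
            t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
            return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

        # Illustrative: Ea = 0.7 eV, use at 25 C, stress at 85 C.
        af = acceleration_factor(ea_ev=0.7, t_use_c=25.0, t_stress_c=85.0)
        hours_at_stress = 1_000.0
        print(f"1 stressed hour ~ {af:.0f} use hours; "
              f"{hours_at_stress:.0f} h at 85 C ~ {af * hours_at_stress / 8760:.1f} years in use")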

  16. Simulation models in population breast cancer screening: A systematic review.

    Science.gov (United States)

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    2015-08-01

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed which incorporated model type; input parameters; modeling approach; transparency of input data sources/assumptions, sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized control trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model) with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) as compared with the 10% MR (95% CI −2 to 21%) from optimal RCTs. Only recently have potential harms due to regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Physical examination tests for the diagnosis of femoroacetabular impingement. A systematic review.

    Science.gov (United States)

    Pacheco-Carrillo, Aitana; Medina-Porqueres, Ivan

    2016-09-01

    Numerous clinical tests have been proposed to diagnose FAI, but little is known about their diagnostic accuracy. To summarize and evaluate research on the accuracy of physical examination tests for diagnosis of FAI. A search of the PubMed, SPORTDiscus and CINAHL databases was performed. Studies were considered eligible if they compared the results of physical examination tests to those of a reference standard. Methodological quality and internal validity assessment was performed by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The systematic search strategy revealed 298 potential articles, five of which met the inclusion criteria. After assessment using the QUADAS score, four of the five articles were of high quality. Clinical tests included were the Impingement sign, IROP test (Internal Rotation Over Pressure), FABER test (Flexion-Abduction-External Rotation), Stinchfield/RSRL (Resisted Straight Leg Raise) test, Scour test, Maximal squat test, and the Anterior Impingement test. The IROP test, impingement sign, and FABER test showed the most sensitive values for identifying FAI. The diagnostic accuracy of physical examination tests to assess FAI is limited due to their heterogeneity. There is a strong need for sound research of high methodological quality in this area. Copyright © 2016 Elsevier Ltd. All rights reserved.
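
    The accuracy statistics underlying such reviews come from a 2x2 table of index-test results against the reference standard. A minimal Python sketch, with invented counts:

        # Sensitivity, specificity, likelihood ratios and diagnostic odds ratio
        # from a 2x2 table (index test vs. reference standard). Counts are invented.
        def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            lr_pos = sens / (1 - spec)          # positive likelihood ratio
            lr_neg = (1 - sens) / spec          # negative likelihood ratio
            dor = lr_pos / lr_neg               # diagnostic odds ratio
            return sens, spec, lr_pos, lr_neg, dor

        sens, spec, lrp, lrn, dor = diagnostic_accuracy(tp=45, fp=20, fn=5, tn=30)
        print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
              f"LR+={lrp:.2f} LR-={lrn:.2f} DOR={dor:.1f}")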

  18. Facilitators and barriers for HIV-testing in Zambia: A systematic review of multi-level factors.

    Science.gov (United States)

    Qiao, Shan; Zhang, Yao; Li, Xiaoming; Menon, J Anitha

    2018-01-01

    It was estimated that 1.2 million people were living with HIV/AIDS in Zambia by 2015. Zambia has developed and implemented diverse programs to reduce the prevalence in the country. HIV-testing is a critical step in HIV treatment and prevention, especially among key populations. However, there has been no systematic review so far to demonstrate the trend of HIV-testing studies in Zambia since the 1990s or to synthesize the key factors associated with HIV-testing practices in the country. Therefore, this study conducted a systematic review, searching all English literature published prior to November 2016 in six electronic databases, and retrieved 32 articles that met our inclusion criteria. The results indicated that higher education was a common facilitator of HIV testing, while misconceptions about HIV testing and the fear of negative consequences were the major barriers to using the testing services. Other factors, such as demographic characteristics, marital dynamics, partner relationships, and relationship with the health care services, also greatly affect participants' decision making. The findings indicated that 1) individualized strategies and comprehensive services are needed for diverse key populations; 2) capacity building for healthcare providers is critical for effectively implementing the task-shifting strategy; 3) HIV testing services need to adapt to the social context of Zambia, where HIV-related stigma and discrimination are still persistent and overwhelming; and 4) family-based education and intervention should involve improving gender equity.

  19. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered as a 2-step process. First, a very quick stage (10⁻²², 10⁻²⁹ s) which corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Secondly, a slower stage (10⁻¹⁸, 10⁻¹⁹ s) during which the nucleus is expected to de-excite completely. This de-excitation is performed by evaporation of light particles (n, p, d, t, ³He, ⁴He) or/and fission or/and fragmentation. The HETC code has been designed to simulate spallation reactions; this simulation is based on the 2-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model); the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for the comparison of relevant experimental data with results of calculation is presented and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation level of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high energy light particles is under-estimated. (A.C.)

  20. Toward a systematic exploration of nano-bio interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Xue; Liu, Fang; Liu, Yin; Li, Cong; Wang, Shenqing [School of Chemistry and Chemical Engineering, Shandong University, Jinan (China); Zhou, Hongyu [School of Environmental Science and Technology, Shandong University, Jinan (China); Wang, Wenyi; Zhu, Hao [Department of Chemistry, Rutgers University, Camden, NJ (United States); The Rutgers Center for Computational and Integrative Biology, Rutgers University, Camden, NJ (United States); Winkler, David A., E-mail: d.winkler@latrobe.edu.au [CSIRO Manufacturing, Bag 10, Clayton South MDC 3169 (Australia); Monash Institute of Pharmaceutical Sciences, 392 Royal Parade, Parkville 3052 (Australia); Latrobe Institute for Molecular Science, Bundoora 3083 (Australia); School of Chemical and Physical Sciences, Flinders University, Bedford Park 5042 (Australia); Yan, Bing, E-mail: drbingyan@yahoo.com [School of Chemistry and Chemical Engineering, Shandong University, Jinan (China); School of Environmental Science and Technology, Shandong University, Jinan (China)

    2017-05-15

    Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design with the modelling of the results in mind is a proven and efficient way of exploring large materials spaces. This approach, coupled with high speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate greatly increased focus on systematic modification of physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability. - Highlights: • Nanomaterials studies make non-systematic alterations to nanoparticle properties. • Vast nanomaterials property spaces require systematic studies of nano-bio interactions. • Experimental design and modelling are efficient ways of exploring materials spaces. • We advocate systematic modification and computational analysis to probe nano-bio interactions.

  1. Toward a systematic exploration of nano-bio interactions

    International Nuclear Information System (INIS)

    Bai, Xue; Liu, Fang; Liu, Yin; Li, Cong; Wang, Shenqing; Zhou, Hongyu; Wang, Wenyi; Zhu, Hao; Winkler, David A.; Yan, Bing

    2017-01-01

    Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design with the modelling of the results in mind is a proven and efficient way of exploring large materials spaces. This approach, coupled with high speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate greatly increased focus on systematic modification of physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability. - Highlights: • Nanomaterials studies make non-systematic alterations to nanoparticle properties. • Vast nanomaterials property spaces require systematic studies of nano-bio interactions. • Experimental design and modelling are efficient ways of exploring materials spaces. • We advocate systematic modification and computational analysis to probe nano-bio interactions.

  2. Understanding in vivo modelling of depression in non-human animals: a systematic review protocol

    DEFF Research Database (Denmark)

    Bannach-Brown, Alexandra; Liao, Jing; Wegener, Gregers

    2016-01-01

    The aim of this study is to systematically collect all published preclinical non-human animal literature on depression to provide an unbiased overview of existing knowledge. A systematic search will be carried out in PubMed and Embase. Studies will be included if they use non-human animal experimental model(s) to induce or mimic a depressive-like phenotype. Data that will be extracted include the model or method of induction; species and gender of the animals used; the behavioural, anatomical, electrophysiological, neurochemical or genetic outcome measure(s) used; and risk of bias. ... meta-analysis of the preclinical studies modelling depression-like behaviours and phenotypes in animals.

  3. Reliability of physical functioning tests in patients with low back pain: a systematic review.

    Science.gov (United States)

    Denteneer, Lenie; Van Daele, Ulrike; Truijen, Steven; De Hertogh, Willem; Meirte, Jill; Stassijns, Gaetane

    2018-01-01

    The aim of this study was to provide a comprehensive overview of physical functioning tests in patients with low back pain (LBP) and to investigate their reliability. A systematic computerized search was completed in four different databases on June 24, 2017: PubMed, Web of Science, Embase, and MEDLINE. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed during all stages of this review. Clinical studies that investigate the reliability of physical functioning tests in patients with LBP were eligible. The methodological quality of the included studies was assessed with the use of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. To come to final conclusions on the reliability of the identified clinical tests, the current review assessed three factors, namely, outcome assessment, methodological quality, and consistency of description. A total of 20 studies were found eligible and 38 clinical tests were identified. Good overall test-retest reliability was concluded for the extensor endurance test (intraclass correlation coefficient [ICC]=0.93-0.97), the flexor endurance test (ICC=0.90-0.97), the 5-minute walking test (ICC=0.89-0.99), the 50-ft walking test (ICC=0.76-0.96), the shuttle walk test (ICC=0.92-0.99), the sit-to-stand test (ICC=0.91-0.99), and the loaded forward reach test (ICC=0.74-0.98). For inter-rater reliability, only one test, namely, the Biering-Sörensen test (ICC=0.88-0.99), could be concluded to have overall good inter-rater reliability. None of the identified clinical tests could be concluded to have good intra-rater reliability. Further investigation should focus on a better overall study methodology and the use of identical protocols for the description of clinical tests. The assessment of reliability is only a first step in the recommendation process for the use of clinical tests. In future research, the identified clinical tests in the
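
    The ICC figures quoted above are commonly computed as ICC(2,1) (two-way random effects, absolute agreement, single measure) from ANOVA mean squares; this is one common definition, and others exist. A minimal Python sketch with an invented ratings matrix:

        # ICC(2,1) from a subjects-by-raters matrix, via two-way ANOVA mean squares.
        # The ratings below (rows = subjects, columns = raters/sessions) are invented.
        import numpy as np

        def icc_2_1(x: np.ndarray) -> float:
            n, k = x.shape
            grand = x.mean()
            ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
            ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (
                ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        ratings = np.array([[12.0, 11.5], [18.0, 17.0], [25.0, 26.0], [30.0, 29.0]])
        print(round(icc_2_1(ratings), 3))   # ~0.99 for this well-agreeing toy data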

  4. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multi-year Kepler photometry. We explore the internal systematics of the stellar properties, that is, those associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in the solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction methods by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
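
    For reference, Ball & Gizon's two-term correction is commonly written as δν = (a₋₁(ν/ν_ac)⁻¹ + a₃(ν/ν_ac)³)/I, with I the normalised mode inertia. The Python sketch below evaluates it with placeholder coefficients, cutoff frequency and inertias, purely for illustration.

        # Ball & Gizon two-term surface correction, as commonly written in the
        # literature. Coefficients, nu_ac and mode inertias here are placeholders.
        import numpy as np

        def bg_two_term(nu, inertia, a_m1, a_3, nu_ac):
            x = nu / nu_ac
            return (a_m1 / x + a_3 * x ** 3) / inertia

        nu = np.array([2000.0, 2500.0, 3000.0])    # invented mode frequencies (muHz)
        inertia = np.array([1.0, 0.8, 0.6])        # invented normalised mode inertias
        corrected = nu + bg_two_term(nu, inertia, a_m1=-0.2, a_3=-4.5, nu_ac=5000.0)
        print(np.round(corrected, 2))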

  5. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.
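
    The structure of such a parametrized bias model can be sketched as subtracting modelled systematic terms from each raw measurement. The Python sketch below is a simplified stand-in: the individual bias sub-models and all numbers are placeholders, not the paper's equations.

        # Correcting one raw SSR measurement (range, azimuth, flight level) for
        # modelled systematic terms. All sub-models and values are placeholders.
        def corrected_measurement(rho_m, theta_m, fl_m,
                                  clock_bias_m, tropo_delay_m,
                                  antenna_offset_deg, encoder_bias_deg,
                                  pressure_hpa, temp_k):
            rho = rho_m - clock_bias_m - tropo_delay_m    # range: clock + refraction terms
            theta = (theta_m - antenna_offset_deg - encoder_bias_deg) % 360.0  # azimuth
            # altitude: crude pressure/temperature adjustment of the reported level
            fl = fl_m * (temp_k / 288.15) * (1013.25 / pressure_hpa) ** 0.19
            return rho, theta, fl

        print(corrected_measurement(74_200.0, 123.40, 10_000.0,
                                    clock_bias_m=120.0, tropo_delay_m=35.0,
                                    antenna_offset_deg=0.08, encoder_bias_deg=0.02,
                                    pressure_hpa=1005.0, temp_k=281.0))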

  6. 'When measurements mean action' decision models for portal image review to eliminate systematic set-up errors

    International Nuclear Information System (INIS)

    Wratten, C.R.; Denham, J.W.; O'Brien, P.; Hamilton, C.S.; Kron, T.; London Regional Cancer Centre, London, Ontario

    2004-01-01

    The aim of the present paper is to evaluate how the use of decision models in the review of portal images can eliminate systematic set-up errors during conformal therapy. Sixteen patients undergoing four-field irradiation of prostate cancer have had daily portal images obtained during the first two treatment weeks and weekly thereafter. The magnitude of random and systematic variations has been calculated by comparison of the portal image with the reference simulator images using the two-dimensional decision model embodied in Hotelling's evaluation process (HEP). Random day-to-day set-up variation was small in this group of patients. Systematic errors were, however, common. In 15 of 16 patients, one or more errors of >2 mm were diagnosed at some stage during treatment. Sixteen of the 23 errors were between 2 and 4 mm. Although there were examples of oversensitivity of the HEP in three cases, and one instance of undersensitivity, the HEP proved highly sensitive to the small (2-4 mm) systematic errors that must be eliminated during high precision radiotherapy. The HEP has proven valuable in diagnosing very small (<4 mm) systematic errors. Using one-dimensional decision models, HEP can eliminate the majority of systematic errors during the first 2 treatment weeks. Copyright (2004) Blackwell Science Pty Ltd
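
    A two-dimensional Hotelling T² decision rule of the kind referred to above tests whether the mean portal-image displacement differs from zero. A minimal Python sketch (SciPy supplies the F distribution; the daily shift data are invented):

        # Hotelling T^2 test on 2-D set-up shifts: is the mean displacement zero?
        # The daily displacement samples (mm) below are invented.
        import numpy as np
        from scipy import stats

        def hotelling_t2_pvalue(x: np.ndarray):
            n, p = x.shape
            diff = x.mean(axis=0)                       # mean displacement vector
            s_inv = np.linalg.inv(np.cov(x, rowvar=False))
            t2 = n * diff @ s_inv @ diff
            f_stat = (n - p) / (p * (n - 1)) * t2       # T^2 -> F transformation
            return t2, stats.f.sf(f_stat, p, n - p)

        shifts = np.array([[2.5, 1.0], [3.1, 0.4], [2.2, 1.5], [2.8, 0.9],
                           [3.4, 0.2], [2.0, 1.1], [2.9, 0.7], [3.2, 1.3]])
        t2, p = hotelling_t2_pvalue(shifts)
        print(f"T2={t2:.1f}, p={p:.4f} -> systematic set-up error if p < 0.05")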

  7. The application of the heuristic-systematic processing model to treatment decision making about prostate cancer.

    Science.gov (United States)

    Steginga, Suzanne K; Occhipinti, Stefano

    2004-01-01

    The study investigated the utility of the Heuristic-Systematic Processing Model as a framework for the investigation of patient decision making. A total of 111 men recently diagnosed with localized prostate cancer were assessed using Verbal Protocol Analysis and self-report measures. Study variables included men's use of nonsystematic and systematic information processing, desire for involvement in decision making, and the individual differences of health locus of control, tolerance of ambiguity, and decision-related uncertainty. Most men (68%) preferred that decision making be shared equally between them and their doctor. Men's use of the expert opinion heuristic was related to men's verbal reports of decisional uncertainty and having a positive orientation to their doctor and medical care; a desire for greater involvement in decision making was predicted by a high internal locus of health control. Trends were observed for systematic information processing to increase when the heuristic strategy used was negatively affect-laden and when men were uncertain about the probabilities for cure and side effects. There was a trend for decreased systematic processing when the expert opinion heuristic was used. Findings were consistent with the Heuristic-Systematic Processing Model and suggest that this model has utility for future research in applied decision making about health.

  8. Testing a new flux rope model using the HELCATS CME catalogue

    Science.gov (United States)

    Rouillard, Alexis Paul; Lavarra, Michael

    2017-04-01

    We present a magnetically-driven flux rope model that computes the forces acting on a twisted magnetic flux rope from the Sun to 1 AU. This model assumes a more realistic flux rope geometry than previously assumed by these types of models. The balance of forces is computed in an analogous manner to the well-known Chen flux-rope model. The 3-D vector components of the magnetic field measured by a probe flying through the flux rope can be extracted for any flux rope orientation imposed near the Sun. We test this model through a parametric study and a systematic comparison of the model with the HELCATS catalogues (imagery and in situ). We also report on our investigations of other physical mechanisms, such as the shift of flux-surfaces associated with the magnetic forces acting to accelerate the flux rope from the lower to the upper corona. Finally, we present an evaluation of this model for space-weather predictions. This work was partly funded by the HELCATS project under the FP7 EU contract number 606692.

  9. Ecological validity of cost-effectiveness models of universal HPV vaccination: A systematic literature review.

    Science.gov (United States)

    Favato, Giampiero; Easton, Tania; Vecchiato, Riccardo; Noikokyris, Emmanouil

    2017-05-09

    The protective (herd) effect of the selective vaccination of pubertal girls against human papillomavirus (HPV) implies a high probability that one of the two partners involved in intercourse is immunised, hence preventing the other from this sexually transmitted infection. The dynamic transmission models used to inform immunisation policy should include consideration of sexual behaviours and population mixing in order to demonstrate an ecological validity, whereby the scenarios modelled remain faithful to the real-life social and cultural context. The primary aim of this review is to test the ecological validity of the universal HPV vaccination cost-effectiveness modelling available in the published literature. The research protocol related to this systematic review has been registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42016034145). Eight published economic evaluations were reviewed. None of the studies showed due consideration of the complexities of human sexual behaviour and the impact this may have on the transmission of HPV. Our findings indicate that all the included models might be affected by a different degree of ecological bias, which implies an inability to reflect natural demographic and behavioural trends in their outcomes and, consequently, to accurately inform public healthcare policy. In particular, ecological bias has the effect of over-estimating the preference-based outcomes of selective immunisation. A relatively small (15-20%) over-estimation of quality-adjusted life years (QALYs) gained with selective immunisation programmes could induce a significant error in the estimate of cost-effectiveness of universal immunisation, by inflating its incremental cost-effectiveness ratio (ICER) beyond the acceptability threshold. The results modelled here demonstrate the limitations of the cost-effectiveness studies for HPV vaccination, and highlight the concern that public healthcare policy might have been
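
    The ICER argument can be made concrete with simple arithmetic, sketched below in Python. All monetary and QALY figures are illustrative placeholders; only the 15-20% over-estimation range comes from the abstract, and applying it directly to the incremental QALYs is a simplification.

        # ICER = delta_cost / delta_QALY, compared against an acceptability threshold.
        # All numbers are invented placeholders for illustration.
        def icer(delta_cost: float, delta_qaly: float) -> float:
            return delta_cost / delta_qaly

        threshold = 30_000.0        # acceptability threshold per QALY (illustrative)
        extra_cost = 50.0           # universal vs. selective, per person (invented)
        qaly_gain = 0.0020          # incremental QALYs, unbiased case (invented)

        print(icer(extra_cost, qaly_gain))          # 25000.0 -> under threshold, accepted
        # A 20% over-estimate of the selective programme's QALYs shrinks the
        # incremental gain of universal vaccination, inflating the ICER:
        print(icer(extra_cost, qaly_gain * 0.80))   # 31250.0 -> above threshold, rejected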

  10. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  11. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can ...
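
    The external bias description can be illustrated with an autoregressive bias process added to a deterministic model output. The Python sketch below is a toy version with invented parameters, not the study's calibrated model.

        # External bias description (EBD) sketch: deterministic runoff output plus
        # an AR(1) systematic deviation plus white observation noise. Values invented.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100
        model_output = 5.0 + np.sin(np.linspace(0, 6, n))   # stand-in for simulated runoff

        phi, sigma_b, sigma_e = 0.9, 0.15, 0.05             # bias persistence / noise levels
        bias = np.zeros(n)
        for t in range(1, n):                               # AR(1) systematic deviation
            bias[t] = phi * bias[t - 1] + rng.normal(0.0, sigma_b)

        observed = model_output + bias + rng.normal(0.0, sigma_e, n)
        # Predictive uncertainty now separates systematic (bias) from random error:
        print(f"bias std ~ {bias.std():.2f}, residual std ~ {sigma_e}")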

  12. A systematic approach for development of a PWR cladding corrosion model

    International Nuclear Information System (INIS)

    Quecedo, M.; Serna, J.J.; Weiner, R.A.; Kersting, P.J.

    2001-01-01

    A new model for the in-reactor corrosion of Improved (low-tin) Zircaloy-4 cladding irradiated in commercial pressurized water reactors (PWRs) is described. The model is based on an extensive database of PWR fuel cladding corrosion data from fuel irradiated in commercial reactors, with a range of fuel duty and coolant chemistry control strategies which bracket current PWR fuel management practices. The fuel thermal duty with these current fuel management practices is characterized by a significant amount of sub-cooled nucleate boiling (SNB) during the fuel's residence in-core, and the cladding corrosion model is very sensitive to the coolant heat transfer models used to calculate the coolant temperature at the oxide surface. The systematic approach to developing the new corrosion model therefore began with a review and evaluation of several alternative models for the forced convection and SNB coolant heat transfer. The heat transfer literature is not sufficient to determine which of these heat transfer models is most appropriate for PWR fuel rod operating conditions, and the selection of the coolant heat transfer model used in the new cladding corrosion model has been coupled with a statistical analysis of the in-reactor corrosion enhancement factors and their impact on obtaining the best fit to the cladding corrosion data. The in-reactor corrosion enhancement factors considered in this statistical analysis are based on a review of the current literature for PWR cladding corrosion phenomenology and models. Fuel operating condition factors which this literature review indicated could have a significant effect on the cladding corrosion performance were also evaluated in detail in developing the corrosion model. An iterative least squares fitting procedure was used to obtain the model coefficients and select the coolant heat transfer models and in-reactor corrosion enhancement factors. This statistical procedure was completed with an exhaustive analysis of the model

  13. Modelling of the spallation reaction: analysis and testing of nuclear models

    International Nuclear Information System (INIS)

    Toccoli, C.

    2000-01-01

    The spallation reaction is considered as a 2-step process. First, a very quick stage (10⁻²², 10⁻²⁹ s) which corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Secondly, a slower stage (10⁻¹⁸, 10⁻¹⁹ s) during which the nucleus is expected to de-excite completely. This de-excitation is performed by evaporation of light particles (n, p, d, t, ³He, ⁴He) or/and fission or/and fragmentation. The HETC code has been designed to simulate spallation reactions; this simulation is based on the 2-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model); the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for the comparison of relevant experimental data with results of calculation is presented and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation level of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high energy light particles is under-estimated. (A.C.)

  14. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  15. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies conducted with Brazilian stock market data are concentrated only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology, considering two steps – time-series and cross-section regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model as compared to the 3-factor model, and the superiority of the 3-factor model as compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market for explaining expected returns. These findings raise some questions, caused mainly by the originality of the methodology in the local market and by the fact that this subject is still incipient and polemic in the Brazilian academic environment.
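
    The two-step procedure of Fama and MacBeth (1973) mentioned above can be sketched compactly in Python; the returns and factors below are simulated purely for illustration.

        # Fama-MacBeth two-step sketch: (1) time-series regressions give factor
        # betas per asset; (2) monthly cross-sectional regressions of returns on
        # betas are averaged to estimate risk premia. Data are simulated.
        import numpy as np

        rng = np.random.default_rng(0)
        T, N, K = 120, 25, 3                       # months, assets, factors
        factors = rng.normal(0, 0.05, (T, K))
        betas_true = rng.normal(1, 0.3, (N, K))
        returns = factors @ betas_true.T + rng.normal(0, 0.02, (T, N))

        # Step 1: time-series regressions (with intercept) -> estimated betas
        X = np.column_stack([np.ones(T), factors])
        betas_hat = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T   # N x K

        # Step 2: cross-sectional regression each month, then average the slopes
        Z = np.column_stack([np.ones(N), betas_hat])
        lambdas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0]
                            for t in range(T)])
        premia = lambdas.mean(axis=0)
        se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)                  # FM standard errors
        print("risk premia:", np.round(premia[1:], 4),
              " t-stats:", np.round(premia[1:] / se[1:], 2))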

  16. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    Directory of Open Access Journals (Sweden)

    Ángel J. Jarama

    2017-09-01

    Full Text Available In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  17. Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities

    Science.gov (United States)

    Richter, Hanz

    2004-01-01

    A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate if a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion and involve trial-and-error procedures that may involve activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows to set system parameters off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.

  18. A systematic review of the diagnostic performance of orthopedic physical examination tests of the hip.

    Science.gov (United States)

    Rahman, Labib Ataur; Adie, Sam; Naylor, Justine Maree; Mittal, Rajat; So, Sarah; Harris, Ian Andrew

    2013-08-30

    Previous reviews of the diagnostic performances of physical tests of the hip in orthopedics have drawn limited conclusions because of the low to moderate quality of primary studies published in the literature. This systematic review aims to build on these reviews by assessing a broad range of hip pathologies, and employing a more selective approach to the inclusion of studies in order to accurately gauge diagnostic performance for the purposes of making recommendations for clinical practice and future research. It specifically identifies tests which demonstrate strong and moderate diagnostic performance. A systematic search of Medline, Embase, Embase Classic and CINAHL was conducted to identify studies of hip tests. Our selection criteria included an analysis of internal and external validity. We reported diagnostic performance in terms of sensitivity, specificity, predictive values and likelihood ratios. Likelihood ratios were used to identify tests with strong and moderate diagnostic utility. Only a small proportion of tests reported in the literature have been assessed in methodologically valid primary studies. 16 studies were included in our review, producing 56 independent test-pathology combinations. Two tests demonstrated strong clinical utility, the patellar-pubic percussion test for excluding radiologically occult hip fractures (negative LR 0.05, 95% Confidence Interval [CI] 0.03-0.08) and the hip abduction sign for diagnosing sarcoglycanopathies in patients with known muscular dystrophies (positive LR 34.29, 95% CI 10.97-122.30). Fifteen tests demonstrated moderate diagnostic utility for diagnosing and/or excluding hip fractures, symptomatic osteoarthritis and loosening of components post-total hip arthroplasty. We have identified a number of tests demonstrating strong and moderate diagnostic performance. These findings must be viewed with caution as there are concerns over the methodological quality of the primary studies from which we have extracted our
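
    The likelihood ratios reported above translate into post-test probabilities via Bayes' rule on the odds scale. A short Python sketch follows; the LRs are those quoted in the abstract, while the pre-test probabilities are invented for illustration.

        # Post-test probability from a likelihood ratio: odds are multiplied by the
        # LR, then converted back to a probability. Pre-test values are invented.
        def post_test_probability(pre_test_p: float, lr: float) -> float:
            odds = pre_test_p / (1.0 - pre_test_p)
            post_odds = odds * lr
            return post_odds / (1.0 + post_odds)

        # Negative patellar-pubic percussion test (LR- = 0.05, from the review) in a
        # patient with an assumed 30% pre-test probability of an occult hip fracture:
        print(round(post_test_probability(0.30, 0.05), 3))   # ~0.021 -> fracture unlikely
        # Positive hip abduction sign (LR+ = 34.29) with an assumed 10% pre-test probability:
        print(round(post_test_probability(0.10, 34.29), 3))  # ~0.792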

  19. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    and test models systematically, efficiently and reliably. In this way, development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem-specific models is presented. ... The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene

  20. TESTING CAPM MODEL ON THE EMERGING MARKETS OF THE CENTRAL AND SOUTHEASTERN EUROPE

    Directory of Open Access Journals (Sweden)

    Josipa Džaja

    2013-02-01

    Full Text Available The paper examines if the Capital Asset Pricing Model (CAPM) is adequate for capital asset valuation on the Central and South-East European emerging securities markets using monthly stock returns for nine countries for the period of January 2006 to December 2010. Precisely, it is tested whether beta, as the systematic risk measure, is valid on the observed markets by analysing whether high expected returns are associated with high levels of risk, i.e. beta. Also, the efficiency of market indices of the observed countries is examined.
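
    The quantities being tested are the CAPM beta and the security market line E[Ri] = Rf + βi(E[Rm] − Rf). The Python sketch below estimates beta from simulated monthly returns; all numbers are illustrative, not data from the study.

        # CAPM sketch: beta = cov(Ri, Rm) / var(Rm), then the security market line.
        # Return series and rates below are simulated/invented for illustration.
        import numpy as np

        rng = np.random.default_rng(42)
        rm = rng.normal(0.01, 0.05, 60)              # market excess returns (60 months)
        ri = 1.2 * rm + rng.normal(0, 0.02, 60)      # a stock with true beta ~ 1.2

        beta = np.cov(ri, rm)[0, 1] / np.var(rm, ddof=1)
        rf, mrp = 0.003, 0.006                       # invented risk-free rate / premium
        expected_return = rf + beta * mrp            # security market line
        print(f"beta={beta:.2f}, CAPM expected monthly return={expected_return:.4f}")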

  1. A framework for testing and comparing binaural models.

    Science.gov (United States)

    Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M

    2018-03-01

    Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
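
    The three-component interface described above can be sketched as follows in Python; the class and method names are illustrative, not the framework's actual API.

        # Sketch of the interface idea: the experiment software drives a pathway
        # model plus an "artificial observer" exactly as it would a human subject.
        from abc import ABC, abstractmethod

        class AuditoryPathwayModel(ABC):
            @abstractmethod
            def process(self, stimulus):          # stimulus -> internal representation
                ...

        class ArtificialObserver(ABC):
            @abstractmethod
            def decide(self, representation):     # representation -> subject-style response
                ...

        def run_trial(stimulus, model: AuditoryPathwayModel,
                      observer: ArtificialObserver):
            """Same call sequence the experiment software uses for a human subject."""
            return observer.decide(model.process(stimulus))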

  2. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus

    1984-01-01

    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  3. Raising the standards of the calf-raise test: a systematic review.

    Science.gov (United States)

    Hébert-Losier, Kim; Newsham-West, Richard J; Schneiders, Anthony G; Sullivan, S John

    2009-11-01

    The calf-raise test is used by clinicians and researchers in sports medicine to assess properties of the calf muscle-tendon unit. The test generally involves repetitive concentric-eccentric muscle action of the plantar-flexors in unipedal stance and is quantified by the number of raises performed. Although the calf-raise test appears to have acceptable reliability and face validity, and is commonly used for medical assessment and rehabilitation of injuries, no universally acceptable test parameters have been published to date. A systematic review of the existing literature was conducted to investigate the consistency as well as universal acceptance of the evaluation purposes, test parameters, outcome measurements and psychometric properties of the calf-raise test. Nine electronic databases were searched during the period May 30th to September 21st 2008. Forty-nine articles met the inclusion criteria and were quality assessed. Information on study characteristics and calf-raise test parameters, as well as quantitative data, were extracted; tabulated; and statistically analysed. The average quality score of the reviewed articles was 70.4 ± 12.2% (range 44-90%). Articles provided various test parameters; however, a consensus was not ascertained. Key testing parameters varied, were often unstated, and few studies reported reliability or validity values, including sensitivity and specificity. No definitive normative values could be established and the utility of the test in subjects with pathologies remained unclear. Although adapted for use in several disciplines and traditionally recommended for clinical assessment, there is no uniform description of the calf-raise test in the literature. Further investigation is recommended to ensure consistent use and interpretation of the test by researchers and clinicians.

  4. Real-time screening tests for functional alignment of the trunk and lower extremities in adolescent – a systematic review

    DEFF Research Database (Denmark)

    Junge, Tina; Wedderkopp, N; Juul-Kristensen, B

    mechanisms resulting in ACL injuries (Hewett, 2010). Prevention may therefore depend on identifying these potential injury risk factors. Screening tools must thus include patterns of typical movements in sport and leisure-time activities, consisting of high-load and multi-directional tests, focusing ... on functional alignment. In large epidemiological studies these tests must only require a minimum of time and technical equipment. Objective: The purpose of the study was to accomplish a systematic review of screening tests for identification of adolescents at increased risk of knee injuries, focusing ... of knee alignment, there is a further need to evaluate the reliability and validity of real-time functional alignment tests before they can be used as screening tools for prevention of knee injuries among adolescents. Still, the next step in this systematic review is to evaluate the quality and feasibility

  5. Decoding β-decay systematics: A global statistical model for β- half-lives

    International Nuclear Information System (INIS)

    Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.

    2009-01-01

    Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β⁻ mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.
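
    A minimal Python sketch of such a global statistical model: a small feed-forward network mapping (Z, N) to a log half-life. scikit-learn offers neither Levenberg-Marquardt training nor Bayesian regularization, so an L2 penalty and a held-out split stand in here, and the training data are synthetic, not evaluated nuclear data.

        # Toy stand-in for a global beta-decay half-life regressor.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        Z = rng.integers(20, 80, 400)
        N = Z + rng.integers(5, 40, 400)
        log_t12 = 8.0 - 0.15 * (N - Z) + rng.normal(0, 0.5, 400)   # toy systematics

        X = np.column_stack([Z, N]).astype(float)
        X_tr, X_te, y_tr, y_te = train_test_split(X, log_t12, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(16, 16), alpha=1e-3,
                           max_iter=5000, random_state=0).fit(X_tr, y_tr)
        print(f"held-out R^2 = {net.score(X_te, y_te):.3f}")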

  6. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    The growing complexities of software and the demand for shorter time to market are two important challenges facing today's IT industry. These challenges demand increases in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose the Relation Definition Markup Language (RDML) for defining the relationships between models.

  7. HIV Testing among Men Who Have Sex with Men (MSM): Systematic Review of Qualitative Evidence

    Science.gov (United States)

    Lorenc, Theo; Marrero-Guillamon, Isaac; Llewellyn, Alexis; Aggleton, Peter; Cooper, Chris; Lehmann, Angela; Lindsay, Catriona

    2011-01-01

    We conducted a systematic review of qualitative evidence relating to the views and attitudes of men who have sex with men (MSM) concerning testing for HIV. Studies conducted in high-income countries (Organisation for Economic Co-operation and Development members) since 1996 were included. Seventeen studies were identified, most of gay or bisexual…

  8. FROM ATOMISTIC TO SYSTEMATIC COARSE-GRAINED MODELS FOR MOLECULAR SYSTEMS

    KAUST Repository

    Harmandaris, Vagelis

    2017-10-03

    The development of systematic (rigorous) coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of methods for obtaining optimal parametrized coarse-grained models, starting from detailed atomistic representation for high dimensional molecular systems. Different methods are described based on (a) structural properties (inverse Boltzmann approaches), (b) forces (force matching), and (c) path-space information (relative entropy). Next, we present a detailed investigation concerning the application of these methods in systems under equilibrium and non-equilibrium conditions. Finally, we present results from the application of these methods to model molecular systems.
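
    For reference, the textbook objective functions behind the three families of methods named above (standard forms for orientation, not equations taken from this particular work):

        % Iterative Boltzmann inversion: refine the CG pair potential until the
        % CG radial distribution function matches the atomistic target.
        \[
          U^{(n+1)}(r) \;=\; U^{(n)}(r) \;+\; k_B T \,\ln\frac{g^{(n)}(r)}{g_{\mathrm{target}}(r)}
        \]
        % Force matching: fit CG force-field parameters \theta to the mapped
        % atomistic forces in a least-squares sense.
        \[
          \chi^2(\theta) \;=\; \Big\langle \sum_{i} \big\| \mathbf{F}_i^{\mathrm{CG}}(\theta) - \mathbf{F}_i^{\mathrm{AA}} \big\|^2 \Big\rangle
        \]
        % Relative entropy: minimize the Kullback-Leibler divergence between the
        % atomistic and CG configurational distributions (up to a constant
        % mapping-entropy term).
        \[
          S_{\mathrm{rel}}(\theta) \;=\; \sum_{x} p_{\mathrm{AA}}(x)\,\ln\frac{p_{\mathrm{AA}}(x)}{p_{\mathrm{CG}}(x;\theta)}
        \]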

  9. T-UPPAAL: Online Model-based Testing of Real-Time Systems

    DEFF Research Database (Denmark)

    Mikucionis, Marius; Larsen, Kim Guldstrand; Nielsen, Brian

    2004-01-01

    The goal of testing is to gain confidence in a physical computer-based system by means of executing it. More than one third of typical project resources is spent on testing embedded and real-time systems, but still it remains ad hoc, based on heuristics, and error-prone. Therefore systematic...

  10. e-Government Maturity Model Based on Systematic Review and Meta-Ethnography Approach

    Directory of Open Access Journals (Sweden)

    Darmawan Napitupulu

    2016-11-01

    Maturity models for e-Government portals have been developed by a number of researchers, both individually and institutionally, but they remain scattered across various journal and conference articles and differ from each other in both stages and features. The aim of this research is to integrate a number of existing maturity models in order to build a generic maturity model for e-Government portals. The method used in this study is a Systematic Review with a meta-ethnography qualitative approach. Meta-ethnography, which is part of the Systematic Review method, is a technique for performing data integration to obtain theories and concepts with a new, deeper and more thorough level of understanding. The result is a maturity model for e-Government portals that consists of 7 (seven) stages: web presence, interaction, transaction, vertical integration, horizontal integration, full integration, and open participation. These seven stages are synthesized from 111 key concepts related to 25 studies of e-Government portal maturity models. The resulting maturity model is more comprehensive and generic because it is an integration of the models (best practices) that exist today.

  11. Experimental model for architectural systematization and its basic thermal performance. Part 1. Research on architectural systematization of energy conversion devices; Kenchiku system ka model no gaiyo to kihon seino ni tsuite. 1. Energy henkan no kenchiku system ka ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Sunaga, N; Ito, N; Kimura, G; Fukao, S; Shimizu, T; Tsunoda, M; Muro, K [Tokyo Metropolitan University, Tokyo (Japan)

    1996-10-27

    The outline of a model for architectural systematization of natural energy conversion and experimental results on its basic thermal performance in winter are described. The model has a floor area of about 20 m². Foam polystyrene 100 mm and 200 mm thick was used as heat insulation for the outer walls. The model has a solar battery and an air conditioner, and uses red brick as a heat reservoir. An experiment was made on seven modes obtained by combining three elements (heating, heat storage, and a night insulation door). The experiment showed that the model has high heat insulation and air-tightness and can be used as an energy element or an evaluation model for architectural systematization. In this model, the power consumption of the air conditioner in winter can be fully covered by the power generated by the solar battery alone. As an architectural element, the combination of a heat reservoir and a night insulation door can remarkably reduce heating energy consumption and greatly improve the indoor thermal environment. 1 ref., 6 figs., 3 tabs.

  12. A systematic study of multiple minerals precipitation modelling in wastewater treatment.

    Science.gov (United States)

    Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J

    2015-11-15

    Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed for integration into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h⁻¹ with relative confidence intervals of 10-80%. Confidence regions for the kinetic rate coefficients were often asymmetric, with model-data residuals increasing more gradually at larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data are lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant). Implementation was therefore relatively flexible, and would be readily expandable to include other
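
    A generic semi-mechanistic rate law of the kind used for the kinetic part of such models (a standard form given for orientation; the exponent n, the seed term X, and the exact driving-force expression vary between implementations):

        \[
          r_{\mathrm{prec}} \;=\; k\,X\left(\Omega^{1/\nu} - 1\right)^{n},
          \qquad
          \Omega \;=\; \frac{\prod_i a_i^{\nu_i}}{K_{sp}}
        \]

    Here Ω is the saturation ratio computed from the ion activities a_i supplied by the equilibrium (aqueous-phase) part of the model, K_sp is the solubility product, ν is the number of ions in the mineral formula, and k is the fitted kinetic rate coefficient (the 0.35-11.6 h⁻¹ values reported above).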

  13. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2014-01-01

    This work presents a numerical well testing interpretation model and analysis techniques to evaluate formation by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effects, and skin factors. Type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from the low permeability layer to the high permeability layer), and (V) systematic radial flow section. Polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the analysis of the numerical solution based on flow mechanisms with observed polymer flooding field test data highlights the potential of this interpretation method for formation evaluation and enhanced oil recovery (EOR).

  14. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P ≈ 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
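
    The log-likelihood ratio statistic referred to above has the standard form, comparing observed site-pattern counts O_i with the counts E_i expected under the fitted model:

        \[
          G \;=\; 2 \sum_{i} O_i \,\ln\frac{O_i}{E_i}
        \]

    Under the null hypothesis that the model fits, G is asymptotically chi-square distributed, with degrees of freedom given by the number of free pattern frequencies minus the number of fitted model parameters.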

  15. Systematic study of polycrystalline flow during tension test of sheet 304 austenitic stainless steel at room temperature

    International Nuclear Information System (INIS)

    Muñoz-Andrade, Juan D.

    2013-01-01

    By systematic study, the mapping of polycrystalline flow of sheet 304 austenitic stainless steel (ASS) during a tension test at constant crosshead velocity at room temperature was obtained. The main results establish that the trajectory of crystals in the polycrystalline spatially extended system (PCSES) during the irreversible deformation process obeys a hyperbolic motion: the ratio between the expansion velocity of the field and the velocity of the field source is not constant, and the field lines of the crystal trajectories become curved; this accelerated motion is called hyperbolic motion. Such behavior is assisted by dislocation dynamics and self-accommodation processes between crystals in the PCSES. Furthermore, by applying the quantum mechanics and relativistic model proposed by Muñoz-Andrade, the activation energy for polycrystalline flow during the tension test of 304 ASS was calculated for each instant in a global form. In conclusion, it was established that the mapping of polycrystalline flow is fundamental to describing in an integral way the phenomenology and mechanics of irreversible deformation processes.

  16. Systematic study of polycrystalline flow during tension test of sheet 304 austenitic stainless steel at room temperature

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Andrade, Juan D., E-mail: jdma@correo.azc.uam.mx [Departamento de Materiales, División de Ciencias Básicas e Ingeniería, Universidad Autónoma Metropolitana Unidad Azcapotzalco, Av. San Pablo No. 180, Colonia Reynosa Tamaulipas, C.P. 02200, México Distrito Federal (Mexico)

    2013-12-16

    By systematic study, the mapping of polycrystalline flow of sheet 304 austenitic stainless steel (ASS) during a tension test at constant crosshead velocity at room temperature was obtained. The main results establish that the trajectory of crystals in the polycrystalline spatially extended system (PCSES) during the irreversible deformation process obeys a hyperbolic motion: the ratio between the expansion velocity of the field and the velocity of the field source is not constant, and the field lines of the crystal trajectories become curved; this accelerated motion is called hyperbolic motion. Such behavior is assisted by dislocation dynamics and self-accommodation processes between crystals in the PCSES. Furthermore, by applying the quantum mechanics and relativistic model proposed by Muñoz-Andrade, the activation energy for polycrystalline flow during the tension test of 304 ASS was calculated for each instant in a global form. In conclusion, it was established that the mapping of polycrystalline flow is fundamental to describing in an integral way the phenomenology and mechanics of irreversible deformation processes.

  17. Correcting systematic inflation in genetic association tests that consider interaction effects: application to a genome-wide association study of posttraumatic stress disorder.

    Science.gov (United States)

    Almli, Lynn M; Duncan, Richard; Feng, Hao; Ghosh, Debashis; Binder, Elisabeth B; Bradley, Bekh; Ressler, Kerry J; Conneely, Karen N; Epstein, Michael P

    2014-12-01

    Genetic association studies of psychiatric outcomes often consider interactions with environmental exposures and, in particular, apply tests that jointly consider gene and gene-environment interaction effects for analysis. Using a genome-wide association study (GWAS) of posttraumatic stress disorder (PTSD), we report that heteroscedasticity (defined as variability in outcome that differs by the value of the environmental exposure) can invalidate traditional joint tests of gene and gene-environment interaction. Our objective was to identify the cause of bias in traditional joint tests of gene and gene-environment interaction in a PTSD GWAS and to determine whether proposed robust joint tests are insensitive to this problem. The PTSD GWAS data set consisted of 3359 individuals (978 men and 2381 women) from the Grady Trauma Project (GTP), a cohort study from Atlanta, Georgia. The GTP performed genome-wide genotyping of participants and collected environmental exposures using the Childhood Trauma Questionnaire and Trauma Experiences Inventory. We performed joint interaction testing of the Beck Depression Inventory and modified PTSD Symptom Scale in the GTP GWAS. We assessed systematic bias in our interaction analyses using quantile-quantile plots and genome-wide inflation factors. Application of the traditional joint interaction test to the GTP GWAS yielded systematic inflation across different outcomes and environmental exposures (inflation-factor estimates ranging from 1.07 to 1.21), whereas application of the robust joint test to the same data set yielded no such inflation (inflation-factor estimates ranging from 1.01 to 1.02). Simulated data further revealed that the robust joint test is valid in different heteroscedasticity models, whereas the traditional joint test is invalid. The robust joint test also has power similar to the traditional joint test when heteroscedasticity is not an issue. We believe the robust joint test should be used in candidate-gene studies and GWASs of
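
    A minimal sketch of the genomic inflation factor used above to quantify systematic inflation (simulated 1-df test statistics, not the GTP data; λ near 1.0 indicates a well-calibrated test):

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(0)
        # Hypothetical chi-square(1) statistics from a joint test at many SNPs;
        # the 1.15 scale mimics heteroscedasticity-driven inflation.
        stats = 1.15 * rng.chisquare(df=1, size=100_000)

        # lambda = median observed statistic / median of the null chi2(1) (~0.455)
        lam = np.median(stats) / chi2.ppf(0.5, df=1)
        print(f"genomic inflation factor: {lam:.2f}")  # ~1.15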

  18. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  19. Economic Evaluations of Pharmacogenetic and Pharmacogenomic Screening Tests: A Systematic Review. Second Update of the Literature.

    Directory of Open Access Journals (Sweden)

    Elizabeth J J Berm

    Due to the extended application of pharmacogenetic and pharmacogenomic (PGx) screening tests, it is important to assess whether they provide good value for money. This review provides an update of the literature. A literature search was performed in PubMed, and papers published between August 2010 and September 2014 investigating the cost-effectiveness of PGx screening tests were included. Papers from 2000 until July 2010 were included via two previous systematic reviews. The studies' overall quality was assessed with the Quality of Health Economic Studies (QHES) instrument. We found 38 studies, which combined with the previous 42 studies resulted in a total of 80 included studies. An average QHES score of 76 was found. Since 2010, more studies were funded by pharmaceutical companies. Most recent studies performed cost-utility analysis, univariate and probabilistic sensitivity analyses, and discussed limitations of their economic evaluations. Most studies indicated favorable cost-effectiveness. The majority of evaluations did not provide information regarding the intrinsic value of the PGx test. There were considerable differences in the costs of PGx testing. Reporting of the direction and magnitude of bias on the cost-effectiveness estimates, as well as motivation for the chosen economic model and perspective, was frequently missing. Application of PGx tests was mostly found to be a cost-effective or cost-saving strategy. We found that only a minority of recent pharmacoeconomic evaluations assessed the intrinsic value of the PGx tests. There was an increase in the number of studies and in the reporting of quality-associated characteristics. To improve future evaluations, scenario analyses including a broad range of PGx test costs, and equal costs of comparator drugs to assess the intrinsic value of the PGx tests, are recommended. In addition, robust clinical evidence regarding the efficacy of PGx tests remains of utmost importance.

  20. The reliability of physical examination tests for the clinical assessment of scapular dyskinesis in subjects with shoulder complaints: A systematic review.

    Science.gov (United States)

    Lange, Toni; Struyf, Filip; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian

    2017-07-01

    Systematic review. The aim of this systematic review was to summarize and evaluate intra- and interrater reliability research on physical examination tests used for the assessment of scapular dyskinesis. Scapular dyskinesis, defined as an alteration of normal scapular kinematics, is described as a non-specific response to different shoulder pathologies. A systematic literature search was conducted in MEDLINE, EMBASE, AMED and PEDro until March 20th, 2015. Methodological quality was assessed with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. The search strategy revealed 3259 articles, of which 15 met the inclusion criteria. These studies evaluated the reliability of 41 tests and test variations used for the assessment of scapular dyskinesis. This review identified a lack of high-quality studies evaluating intra- as well as interrater reliability of tests used for the assessment of scapular dyskinesis. In addition, reliability measures differed between the included studies, hindering proper cross-study comparisons. The effect of manual correction of the scapula on shoulder symptoms was evaluated in only one study, which is striking, since symptom alteration tests are used in routine care to guide further treatment. Thus, there is a strong need for further research in this area. Diagnosis, level 3a.

  1. [The effectiveness of continuing care models in patients with chronic diseases: a systematic review].

    Science.gov (United States)

    Chen, Hsiao-Mei; Han, Tung-Chen; Chen, Ching-Min

    2014-04-01

    Population aging has caused significant rises in the prevalence of chronic diseases and the utilization of healthcare services in Taiwan. The current healthcare delivery system is fragmented. Integrating medical services may increase the quality of healthcare, enhance patient and family satisfaction with healthcare services, and better contain healthcare costs. This article introduces two continuing care models: discharge planning and case management. Further, the effectiveness and essential components of these two models are analyzed using a systematic review method. Articles included in this systematic review were all original articles on discharge-planning or case-management interventions published between February 1999 and March 2013 in any of 6 electronic databases (Medline, PubMed, CINAHL Plus with Full Text, ProQuest, Cochrane Library, and the CEPS and Center for Chinese Studies electronic databases). Of the 70 articles retrieved, only 7 were randomized controlled trial studies. Three types of continuity-of-care models were identified: discharge planning, case management, and a hybrid of these two. All three models used logical and systematic processes to conduct assessment, planning, implementation, coordination, follow-up, and evaluation activities. Both the discharge planning model and the case management model were positively associated with improved self-care knowledge, reduced length of stay, decreased medical costs, and better quality of life. This study cross-referenced all reviewed articles in terms of target clients, content, intervention schedules, measurements, and outcome indicators. Study results may be referenced in future implementations of continuity-care models and may provide a reference for future research.

  2. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    Energy Technology Data Exchange (ETDEWEB)

    Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com [Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden); Liu, Jing, E-mail: jing.liu@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden)

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
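
    A minimal sketch of the kind of correction the study argues for: normalizing a measured (water-saturated) gas volume to dry gas at standard conditions via the ideal gas law, so that results from different ambient temperatures and pressures become comparable. The function name and values are illustrative, not taken from the paper.

        import math

        def normalize_gas_volume(v_ml: float, t_c: float, p_pa: float) -> float:
            """Convert wet gas volume at ambient T, p to dry volume at 0 degC, 1 atm."""
            T_STD, P_STD = 273.15, 101_325.0
            # Saturation water-vapour pressure (Pa), Arden Buck equation; the
            # headspace gas is assumed saturated with water vapour.
            p_h2o = 611.21 * math.exp((18.678 - t_c / 234.5) * t_c / (257.14 + t_c))
            return v_ml * (p_pa - p_h2o) / P_STD * T_STD / (t_c + 273.15)

        # Example: 250 mL read at 25 degC and 95 kPa (a high-altitude laboratory)
        print(f"{normalize_gas_volume(250.0, 25.0, 95_000.0):.1f} mL at STP")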

  3. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.

  4. Developing a quality by design approach to model tablet dissolution testing: an industrial case study.

    Science.gov (United States)

    Yekpe, Ketsia; Abatzoglou, Nicolas; Bataille, Bernard; Gosselin, Ryan; Sharkawi, Tahmer; Simard, Jean-Sébastien; Cournoyer, Antoine

    2017-11-02

    This study applied the concept of Quality by Design (QbD) to tablet dissolution. Its goal was to propose a quality control strategy to model dissolution testing of solid oral dose products according to International Conference on Harmonization guidelines. The methodology involved the following three steps: (1) a risk analysis to identify the material- and process-related parameters impacting the critical quality attributes of dissolution testing, (2) an experimental design to evaluate the influence of design factors (attributes and parameters selected by risk analysis) on dissolution testing, and (3) an investigation of the relationship between design factors and dissolution profiles. Results show that (a) in the case studied, the two parameters impacting dissolution kinetics are active pharmaceutical ingredient particle size distributions and tablet hardness and (b) these two parameters could be monitored with PAT tools to predict dissolution profiles. Moreover, based on the results obtained, modeling dissolution is possible. The practicality and effectiveness of the QbD approach were demonstrated through this industrial case study. Implementing such an approach systematically in industrial pharmaceutical production would reduce the need for tablet dissolution testing.

  5. Agent-based modeling of noncommunicable diseases: a systematic review.

    Science.gov (United States)

    Nianogo, Roch A; Arah, Onyebuchi A

    2015-03-01

    We reviewed the use of agent-based modeling (ABM), a systems science method, in understanding noncommunicable diseases (NCDs) and their public health risk factors. We systematically reviewed studies in PubMed, ScienceDirect, and Web of Science published from January 2003 to July 2014. We retrieved 22 relevant articles; each had an observational or interventional design. Physical activity and diet were the most-studied outcomes. Often, single agent types were modeled, and the environment was usually irrelevant to the studied outcome. Predictive validation and sensitivity analyses were most used to validate models. Although increasingly used to study NCDs, ABM remains underutilized and, where used, is suboptimally reported in public health studies. Its use in studying NCDs will benefit from clarified best practices and improved rigor to establish its usefulness and facilitate replication, interpretation, and application.

  6. Deterministic Modeling of the High Temperature Test Reactor

    International Nuclear Information System (INIS)

    Ortensi, J.; Cogliati, J.J.; Pope, M.A.; Ferrer, R.M.; Ougouag, A.M.

    2010-01-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% in the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control

  7. A Systematic Review of Evidence for the Clubhouse Model of Psychosocial Rehabilitation

    OpenAIRE

    McKay, Colleen; Nugent, Katie L.; Johnsen, Matthew; Eaton, William W.; Lidz, Charles W.

    2016-01-01

    The Clubhouse Model has been in existence for over sixty-five years; however, a review that synthesizes the literature on the model is needed. The current study makes use of the existing research to conduct a systematic review of articles providing a comprehensive understanding of what is known about the Clubhouse Model, to identify the best evidence available, as well as areas that would benefit from further study. Findings are summarized and evidence is classified by outcome domains. Fifty-...

  8. TESTING GARCH-X TYPE MODELS

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2017-01-01

    We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit ... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and, as demonstrated, the structure of the inverse information implies that the proposed test neither depends on whether ... the nuisance parameters lie on the boundary of the parameter space, nor on lack of identification. Our general results on GARCH-X type models are applied to Gaussian based GARCH-X models, GARCH-X models with Student's t-distributed innovations as well as the integer-valued GARCH-X (PAR-X) models.
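
    For orientation, one standard parametrization of the GARCH-X conditional variance (the reduction to a pure GARCH model tested above corresponds to π = 0, which places π on the boundary of the parameter space):

        \[
          \sigma_t^2 \;=\; \omega \;+\; \alpha\,\varepsilon_{t-1}^2 \;+\; \beta\,\sigma_{t-1}^2 \;+\; \pi\, x_{t-1}^2
        \]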

  9. A systematic review and meta-analysis of tests to predict wound healing in diabetic foot.

    Science.gov (United States)

    Wang, Zhen; Hasan, Rim; Firwana, Belal; Elraiyah, Tarig; Tsapas, Apostolos; Prokop, Larry; Mills, Joseph L; Murad, Mohammad Hassan

    2016-02-01

    This systematic review summarized the evidence on noninvasive screening tests for the prediction of wound healing and the risk of amputation in diabetic foot ulcers. We searched MEDLINE In-Process & Other Non-Indexed Citations, MEDLINE, Embase, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, and Scopus from database inception to October 2011. We pooled sensitivity, specificity, and diagnostic odds ratio (DOR) and compared test performance. Thirty-seven studies met the inclusion criteria. Eight tests were used to predict wound healing in this setting, including ankle-brachial index (ABI), ankle peak systolic velocity, transcutaneous oxygen measurement (TcPo2), toe-brachial index, toe systolic blood pressure, microvascular oxygen saturation, skin perfusion pressure, and hyperspectral imaging. For the TcPo2 test, the pooled DOR was 15.81 (95% confidence interval [CI], 3.36-74.45) for wound healing and 4.14 (95% CI, 2.98-5.76) for the risk of amputation. ABI was also predictive of the risk of amputation, but to a lesser degree (DOR, 2.89; 95% CI, 1.65-5.05), and not of wound healing (DOR, 1.02; 95% CI, 0.40-2.64). It was not feasible to perform meta-analysis comparing the remaining tests. The overall quality of evidence was limited by the risk of bias and imprecision (wide CIs due to small sample sizes). Several tests may predict wound healing in the setting of diabetic foot ulcer; however, most of the available evidence evaluates only TcPo2 and ABI. The overall quality of the evidence is low, and further research is needed to provide higher quality comparative effectiveness evidence.
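
    The diagnostic odds ratio pooled above, in terms of the standard 2×2 counts and, equivalently, the likelihood ratios:

        \[
          \mathrm{DOR} \;=\; \frac{TP \times TN}{FP \times FN}
          \;=\; \frac{\mathrm{LR}^{+}}{\mathrm{LR}^{-}}
          \;=\; \frac{\text{sensitivity}/(1-\text{specificity})}{(1-\text{sensitivity})/\text{specificity}}
        \]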

  10. Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves

    Science.gov (United States)

    Wakeford, H. R.; Sing, D. K.; Evans, T.; Deming, D.; Mandell, A.

    2016-03-01

    Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 μm probe primarily the H2O absorption band at 1.4 μm, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on Gibson to analyze exoplanet transit light curves obtained from HST WFC3 and to better determine important transit parameters such as Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts δλ(λ) best describe the associated systematic in the spectroscopic light curves for most targets, while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
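
    A minimal sketch of the weighting scheme described above, with illustrative numbers (not WFC3 results): each systematic model's evidence is approximated from its AIC, and the final transit depth is the evidence-weighted mean over all tested models.

        import numpy as np

        # Hypothetical per-model fits: max log-likelihood, free-parameter count,
        # fitted Rp/R* and its uncertainty for three systematic models.
        logL  = np.array([1050.2, 1048.9, 1051.0])
        k     = np.array([5, 7, 6])
        depth = np.array([0.1210, 0.1205, 0.1208])
        sigma = np.array([0.0004, 0.0005, 0.0004])

        aic = 2 * k - 2 * logL
        w = np.exp(-0.5 * (aic - aic.min()))   # evidence approximated as exp(-AIC/2)
        w /= w.sum()                           # normalized model weights

        d_marg = np.sum(w * depth)
        # Weighted parameter variance plus between-model scatter
        var_marg = np.sum(w * sigma**2) + np.sum(w * (depth - d_marg) ** 2)
        print(d_marg, np.sqrt(var_marg))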

  11. Reliability of specific physical examination tests for the diagnosis of shoulder pathologies: a systematic review and meta-analysis.

    Science.gov (United States)

    Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian

    2017-03-01

    Shoulder pain in the general population is common, and to identify the aetiology of shoulder pain, history, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate the intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, Allied and Complementary Medicine Database (AMED) and the Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used in the specific physical examination for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between the included studies, hindering proper cross-study comparisons. PROSPERO CRD42014009018.

  12. Diagnostic validity of physical examination tests for common knee disorders: An overview of systematic reviews and meta-analysis.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Roy, Jean-Sébastien; Desmeules, François

    2017-01-01

    More evidence on the diagnostic validity of physical examination tests for knee disorders is needed to reduce reliance on frequently used and costly imaging tests. We conducted a systematic review of systematic reviews (SR) and meta-analyses (MA) evaluating the diagnostic validity of physical examination tests for knee disorders. A structured literature search was conducted in five databases until January 2016. Methodological quality was assessed using the AMSTAR. Seventeen reviews were included, with a mean AMSTAR score of 5.5 ± 2.3. Based on six SR, only the Lachman test for ACL injuries is diagnostically valid when individually performed (positive likelihood ratio (LR+): 10.2; negative likelihood ratio (LR−): 0.2). Based on two SR, the Ottawa Knee Rule is a valid screening tool for knee fractures (LR−: 0.05). Based on one SR, the EULAR criteria had a post-test probability of 99% for the diagnosis of knee osteoarthritis. Based on two SR, a complete physical examination performed by a trained health provider was found to be diagnostically valid for ACL, PCL and meniscal injuries, as well as for cartilage lesions. When individually performed, common physical tests are rarely able to rule in or rule out a specific knee disorder, except the Lachman test for ACL injuries. There is low-quality evidence concerning the validity of combining history elements and physical tests.

  13. Systematic model calculations of the hyperfine structure in light and heavy ions

    CERN Document Server

    Tomaselli, M; Nörtershäuser, W; Ewald, G; Sánchez, R; Fritzsche, S; Karshenboim, S G

    2003-01-01

    Systematic model calculations are performed for the magnetization distributions and the hyperfine structure (HFS) of light and heavy ions with masses close to A ≈ 6, 208, and 235 to test the interplay of nuclear and atomic structure. A high-precision measurement of lithium isotope shifts (IS) for a suitable transition, combined with an accurate theoretical evaluation of the mass-shift contribution in the respective transition, can be used to determine the root-mean-square (rms) nuclear-charge radius of Li isotopes, particularly of the halo nucleus ¹¹Li. An experiment of this type is currently underway at GSI in Darmstadt and at ISOLDE at CERN. However, the field-shift contributions between the different isotopes can be evaluated using the results obtained for the charge radii, thus casting, with knowledge of the ratio of the HFS constants to the magnetic moments, new light on the IS theory. For heavy charged ions the calculated n-body magnetization distributions reproduce the HFS of hydrogen-like ions well if QED...

  14. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN

    2015-12-01

    In this study, systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) are analyzed in order to determine their uncertainties. Experiments conducted for the solution of engineering problems, together with the associated measurements and calculations, inevitably include uncertainty, and to judge the reliability of the obtained values the existing uncertainties should be expressed as quantities; results from a measurement system whose uncertainty is unknown carry no universal value. Resistance, moreover, is one of the most important parameters to be considered in ship design: ship resistance cannot be determined precisely and reliably during the design phase because of the uncertainty sources involved in its determination, and this may make it harder to meet the required specifications in later design steps. The uncertainty arising from resistance tests has been estimated and compared for a displacement-type ship and for high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 procedures for uncertainty analysis. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.
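
    Both ITTC procedures rest on the standard (GUM-style) propagation of uncertainty: for a result r = f(x_1, ..., x_n) derived from measured quantities x_i with standard uncertainties u(x_i),

        \[
          u_c(r) \;=\; \sqrt{\sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i)}
        \]

    with the expanded uncertainty U = k u_c (coverage factor k = 2 for roughly 95% confidence) reported alongside the measured resistance.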

  15. A systematic review of the diagnostic accuracy of provocative tests of the neck for diagnosing cervical radiculopathy.

    NARCIS (Netherlands)

    Rubinstein, S.M.; Pool, J.J.; van Tulder, M.W.; Riphagen, II; de Vet, H.C.W.

    2007-01-01

    Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A

  16. Tolerable systematic errors in Really Large Hadron Collider dipoles

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, F.

    1996-01-01

    Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider
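
    For reference, the accelerator-physics convention in which such systematic harmonics are defined (b_1 is the main dipole field, b_3 the sextupole, and so on; the harmonics are commonly quoted in "units" of 10⁻⁴ of the main field, and dipole symmetry allows only the odd normal harmonics as systematics):

        \[
          B_y + i B_x \;=\; B_0 \sum_{n=1}^{\infty} \left( b_n + i\,a_n \right)
          \left( \frac{x + i y}{r_{\mathrm{ref}}} \right)^{n-1}
        \]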

  17. A systematic review of tests for lymph node status in primary endometrial cancer.

    Science.gov (United States)

    Selman, Tara J; Mann, Christopher H; Zamora, Javier; Khan, Khalid S

    2008-05-05

    The lymph node status of a patient is a key determinant in the staging, prognosis and adjuvant treatment of endometrial cancer. Despite this, the potential additional morbidity associated with lymphadenectomy makes its role controversial. This study systematically reviews the accuracy literature on sentinel node biopsy, ultrasound scanning, magnetic resonance imaging (MRI) and computed tomography (CT) for determining lymph node status in endometrial cancer. Relevant articles were identified from MEDLINE (1966-2006), EMBASE (1980-2006), MEDION, the Cochrane library, hand searching of reference lists from primary articles and reviews, conference abstracts and contact with experts in the field. The review included 18 relevant primary studies (693 women). Data were extracted for study characteristics and quality. Bivariate random-effect model meta-analysis was used to estimate the diagnostic accuracy of the various index tests. MRI (pooled positive LR 26.7, 95% CI 10.6-67.6, and negative LR 0.29, 95% CI 0.17-0.49) and successful sentinel node biopsy (pooled positive LR 18.9, 95% CI 6.7-53.2, and negative LR 0.22, 95% CI 0.1-0.48) were the most accurate tests. CT was not as accurate a test (pooled positive LR 3.8, 95% CI 2.0-7.3, and negative LR 0.62, 95% CI 0.45-0.86). Only one study reported the use of ultrasound scanning. MRI and sentinel node biopsy have shown better diagnostic accuracy in confirming lymph node status among women with primary endometrial cancer than CT scanning, although the comparisons made are indirect and hence subject to bias. MRI should be used in preference, in light of the ASTEC trial, because of its non-invasive nature.
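
    The pooled likelihood ratios above translate into post-test probabilities through the usual odds form of Bayes' theorem: with pre-test probability p,

        \[
          \mathrm{odds}_{\mathrm{post}} \;=\; \frac{p}{1-p}\times \mathrm{LR},
          \qquad
          p_{\mathrm{post}} \;=\; \frac{\mathrm{odds}_{\mathrm{post}}}{1+\mathrm{odds}_{\mathrm{post}}}
        \]

    For example, a 10% pre-test probability of nodal disease combined with MRI's pooled LR+ of 26.7 gives post-test odds of 0.111 × 26.7 ≈ 2.97, i.e. a post-test probability of about 75%.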

  18. A systematic review of tests for lymph node status in primary endometrial cancer

    Directory of Open Access Journals (Sweden)

    Zamora Javier

    2008-05-01

    Background: The lymph node status of a patient is a key determinant in the staging, prognosis and adjuvant treatment of endometrial cancer. Despite this, the potential additional morbidity associated with lymphadenectomy makes its role controversial. This study systematically reviews the accuracy literature on sentinel node biopsy, ultrasound scanning, magnetic resonance imaging (MRI) and computed tomography (CT) for determining lymph node status in endometrial cancer. Methods: Relevant articles were identified from MEDLINE (1966–2006), EMBASE (1980–2006), MEDION, the Cochrane library, hand searching of reference lists from primary articles and reviews, conference abstracts and contact with experts in the field. The review included 18 relevant primary studies (693 women). Data were extracted for study characteristics and quality. Bivariate random-effect model meta-analysis was used to estimate the diagnostic accuracy of the various index tests. Results: MRI (pooled positive LR 26.7, 95% CI 10.6–67.6, and negative LR 0.29, 95% CI 0.17–0.49) and successful sentinel node biopsy (pooled positive LR 18.9, 95% CI 6.7–53.2, and negative LR 0.22, 95% CI 0.1–0.48) were the most accurate tests. CT was not as accurate a test (pooled positive LR 3.8, 95% CI 2.0–7.3, and negative LR 0.62, 95% CI 0.45–0.86). Only one study reported the use of ultrasound scanning. Conclusion: MRI and sentinel node biopsy have shown better diagnostic accuracy in confirming lymph node status among women with primary endometrial cancer than CT scanning, although the comparisons made are indirect and hence subject to bias. MRI should be used in preference, in light of the ASTEC trial, because of its non-invasive nature.

  19. Equation-free analysis of agent-based models and systematic parameter determination

    Science.gov (United States)

    Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.

    2016-12-01

    Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic and can be applied to any 'black-box' simulator, and it determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order of magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABMs, revealing parameter dependence, bifurcation and stability analysis of these complex systems and giving a deep understanding of the dynamical behaviour of the models in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for

  20. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    Science.gov (United States)

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  1. Algorithms for testing of fractional dynamics: a practical guide to ARFIMA modelling

    International Nuclear Information System (INIS)

    Burnecki, Krzysztof; Weron, Aleksander

    2014-01-01

    In this survey paper we present a systematic methodology which demonstrates how to identify the origins of fractional dynamics. We consider three mechanisms which lead to it, namely fractional Brownian motion, fractional Lévy stable motion and an autoregressive fractionally integrated moving average (ARFIMA) process, but we concentrate on the ARFIMA modelling. The methodology is based on statistical tools for identification and validation of the fractional dynamics, in particular on an ARFIMA parameter estimator, an ergodicity test, a self-similarity index estimator based on sample p-variation and a memory parameter estimator based on sample mean-squared displacement. A complete list of algorithms needed for this is provided in appendices A–F. Finally, we illustrate the methodology on various empirical data and show that ARFIMA can be considered as a universal model for fractional dynamics. Thus, we provide a practical guide for experimentalists on how to efficiently use ARFIMA modelling for a large class of anomalous diffusion data.
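
    A minimal sketch of the fractional differencing at the heart of ARFIMA modelling, assuming nothing beyond NumPy: the operator (1 − B)^d is expanded into binomial-coefficient weights and applied as a causal filter (estimation of d, the ergodicity test and the p-variation estimator are beyond this snippet).

        import numpy as np

        def frac_diff_weights(d: float, n: int) -> np.ndarray:
            """First n coefficients of the binomial expansion of (1 - B)^d."""
            w = np.empty(n)
            w[0] = 1.0
            for k in range(1, n):
                w[k] = -w[k - 1] * (d - k + 1) / k
            return w

        def frac_diff(x: np.ndarray, d: float) -> np.ndarray:
            """Apply (1 - B)^d to a series, truncating the filter at the start."""
            w = frac_diff_weights(d, len(x))
            return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])

        # ARFIMA(0, d, 0) demo: fractionally integrate white noise (d = -0.3)
        # to get a long-memory series, then difference it back with d = 0.3.
        rng = np.random.default_rng(1)
        noise = rng.standard_normal(500)
        x = frac_diff(noise, -0.3)
        resid = frac_diff(x, 0.3)
        print(np.corrcoef(resid, noise)[0, 1])  # ~1.0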

  2. Preliminary wing model tests in the variable density wind tunnel of the National Advisory Committee for Aeronautics

    Science.gov (United States)

    Munk, Max M

    1926-01-01

    This report contains the results of a series of tests with three wing models. By changing the section of one of the models and painting the surface of another, the number of models tested was increased to five. The tests were made in order to obtain some general information on the air forces on wing sections at a high Reynolds number and in particular to make sure that the Reynolds number is really the important factor, and not other things like the roughness of the surface and the sharpness of the trailing edge. The few tests described in this report seem to indicate that the air forces at a high Reynolds number are not equivalent to respective air forces at a low Reynolds number (as in an ordinary atmospheric wind tunnel). The drag appears smaller at a high Reynolds number and the maximum lift is increased in some cases. The roughness of the surface and the sharpness of the trailing edge do not materially change the results, so that we feel confident that tests with systematic series of different wing sections will bring consistent results, important and highly useful to the designer.

  3. A systematic review investigating measurement properties of physiological tests in rugby.

    Science.gov (United States)

    Chiwaridzo, Matthew; Oorschot, Sander; Dambi, Jermaine M; Ferguson, Gillian D; Bonney, Emmanuel; Mudawarima, Tapfuma; Tadyanemhandu, Cathrine; Smits-Engelsman, Bouwien C M

    2017-01-01

    This systematic review was conducted with the first objective aimed at providing an overview of the physiological characteristics commonly evaluated in rugby and the corresponding tests used to measure each construct. Secondly, the measurement properties of all identified tests per physiological construct were evaluated, with the ultimate purpose of identifying tests with the strongest level of evidence per construct. The review was conducted in two stages. In both stages, the electronic databases EBSCOhost, Medline and Scopus were searched for full-text articles. Stage 1 included studies examining physiological characteristics in rugby. Stage 2 included studies evaluating measurement properties of all tests identified in Stage 1, either in rugby or in related sports such as Australian Rules football and soccer. Two independent reviewers screened relevant articles from titles and abstracts for both stages. Seventy studies met the inclusion criteria for Stage 1. The studies described 63 tests assessing speed (8), agility/change-of-direction speed (7), upper-body muscular endurance (8), upper-body muscular power (6), upper-body muscular strength (5), anaerobic endurance (4), maximal aerobic power (4), lower-body muscular power (3), prolonged high-intensity intermittent running ability/endurance (5), lower-body muscular strength (5), repeated high-intensity exercise performance (3), repeated-sprint ability (2), repeated-effort ability (1), maximal aerobic speed (1) and abdominal endurance (1). Stage 2 identified 20 studies describing measurement properties of 21 different tests. Only moderate evidence was found for the reliability of the 30-15 Intermittent Fitness Test. There was limited evidence for the reliability and/or validity of the 5 m, 10 m and 20 m speed tests, the 505 test, the modified 505 test, the L run test, the Sergeant Jump test and bench press repetitions-to-fatigue tests. There was no information from high-quality studies on the measurement properties of all the other tests.

  4. Systematic review and proposal of a field-based physical fitness-test battery in preschool children: the PREFIT battery.

    Science.gov (United States)

    Ortega, Francisco B; Cadenas-Sánchez, Cristina; Sánchez-Delgado, Guillermo; Mora-González, José; Martínez-Téllez, Borja; Artero, Enrique G; Castro-Piñero, Jose; Labayen, Idoia; Chillón, Palma; Löf, Marie; Ruiz, Jonatan R

    2015-04-01

    Physical fitness is a powerful health marker in childhood and adolescence, and it is reasonable to think that it might be just as important in younger children, i.e. preschoolers. At the moment, researchers, clinicians and sport practitioners do not have enough information about which fitness tests are more reliable, valid and informative from the health point of view to be implemented in preschool children. Our aim was to systematically review the studies conducted in preschool children using field-based fitness tests, and examine their (1) reliability, (2) validity, and (3) relationship with health outcomes. Our ultimate goal was to propose a field-based physical fitness-test battery to be used in preschool children. PubMed and Web of Science. Studies conducted in healthy preschool children that included field-based fitness tests. When using PubMed, we included Medical Subject Heading (MeSH) terms to enhance the power of the search. A set of fitness-related terms were combined with 'child, preschool' [MeSH]. The same strategy and terms were used for Web of Science (except for the MeSH option). Since no previous reviews with a similar aim were identified, we searched for all articles published up to 1 April 2014 (no starting date). A total of 2,109 articles were identified, of which 22 articles were finally selected for this review. Most studies focused on reliability of the fitness tests (n = 21, 96%), while very few focused on validity (0 criterion-related validity and 4 (18%) convergent validity) or relationship with health outcomes (0 longitudinal and 1 (5%) cross-sectional study). Motor fitness, particularly balance, was the most studied fitness component, while cardiorespiratory fitness was the least studied. After analyzing the information retrieved in the current systematic review about fitness testing in preschool children, we propose the PREFIT battery, field-based FITness testing in PREschool children. The PREFIT battery is composed of the following

  5. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Seyyede Zohreh Ziatabar Ahmadi

    2015-12-01

    Objective: Theory of mind (ToM), or mindreading, is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. Method: We searched the MEDLINE (PubMed interface), Web of Science, ScienceDirect, PsycINFO, and evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. We also manually studied the reference lists of all finally searched articles and carried out a search of their references. Inclusion criteria were as follows: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were as follows: studies that only used ToM tests and single tasks (false-belief tasks) for ToM assessment and/or had no description of the structure, validity or reliability of their tests. The methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). Result: In the primary search, we found 1237 articles across all databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Conclusion: There were few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, timing and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that researchers and clinicians select ToM tests according to their psychometric

  6. Reduction in uptake of PSA tests following decision aids: systematic review of current aids and their evaluations.

    NARCIS (Netherlands)

    Evans, R.; Edwards, A.; Brett, J.; Bradburn, M.; Watson, E.; Austoker, J.; Elwyn, G.

    2005-01-01

    A man's decision to have a prostate-specific antigen (PSA) test should be an informed one. We undertook a systematic review to identify and appraise PSA decision aids and evaluations. We searched 15 electronic databases and hand-searched key journals. We also contacted key authors and organisations.

  7. Testing homogeneity in Weibull-regression models.

    Science.gov (United States)

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

    In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.
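
    The "conventional regression model without the random effect" that the score statistic builds on is a censoring-aware Weibull regression. A minimal maximum-likelihood sketch of that ingredient follows (illustrative only; the score test itself is not reproduced, and the variable names are ours).

      import numpy as np
      from scipy.optimize import minimize

      def weibull_reg_loglik(params, t, delta, X):
          # Weibull with covariate-dependent scale lam_i = exp(X_i @ beta) and
          # common shape k; delta_i = 1 for observed events, 0 for censored times.
          beta, log_k = params[:-1], params[-1]
          k = np.exp(log_k)
          log_lam = X @ beta
          z = (t / np.exp(log_lam)) ** k
          return np.sum(delta * (np.log(k) + (k - 1) * np.log(t) - k * log_lam) - z)

      def fit_weibull_regression(t, delta, X):
          p0 = np.zeros(X.shape[1] + 1)   # [beta..., log k]
          res = minimize(lambda p: -weibull_reg_loglik(p, t, delta, X), p0,
                         method="Nelder-Mead")
          return res.x

      # Toy data: intercept-only model with roughly 20% random censoring flags.
      rng = np.random.default_rng(1)
      t = 2.0 * rng.weibull(1.5, size=200)
      delta = (rng.random(200) > 0.2).astype(float)
      X = np.ones((200, 1))
      print(fit_weibull_regression(t, delta, X))   # approx. [log 2.0, log 1.5]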

  8. Incorporation of systematic uncertainties in statistical decision rules

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1994-02-01

    The influence of systematic uncertainties on statistical hypothesis testing is an underexposed subject. Systematic uncertainties cannot be incorporated in hypothesis tests, but they deteriorate the performance of these tests. Improper treatment of systematic uncertainties in verification applications in safeguards leads to a false assessment of the strength of the safeguards measure, and thus undermines the safeguards system. The effects of systematic uncertainties on decision errors in hypothesis testing are analyzed quantitatively for an example from safeguards practice (LEU-HEU verification of UF6 enrichment in centrifuge enrichment plants). It is found that the only proper way to tackle systematic uncertainties is reduction to sufficiently low levels; criteria for these are proposed. Although the conclusions were obtained from the study of a single practical application, it is believed that they hold generally: for all sources of systematic uncertainties, all statistical decision rules, and all applications. (orig./HP)

  9. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  10. Extensive and systematic rewiring of histone post-translational modifications in cancer model systems.

    Science.gov (United States)

    Noberini, Roberta; Osti, Daniela; Miccolo, Claudia; Richichi, Cristina; Lupia, Michela; Corleone, Giacomo; Hong, Sung-Pil; Colombo, Piergiuseppe; Pollo, Bianca; Fornasari, Lorenzo; Pruneri, Giancarlo; Magnani, Luca; Cavallaro, Ugo; Chiocca, Susanna; Minucci, Saverio; Pelicci, Giuliana; Bonaldi, Tiziana

    2018-05-04

    Histone post-translational modifications (PTMs) generate a complex combinatorial code that regulates gene expression and nuclear functions, and whose deregulation has been documented in different types of cancers. Therefore, the availability of relevant culture models that can be manipulated and that retain the epigenetic features of the tissue of origin is absolutely crucial for studying the epigenetic mechanisms underlying cancer and testing epigenetic drugs. In this study, we took advantage of quantitative mass spectrometry to comprehensively profile histone PTMs in patient tumor tissues, primary cultures and cell lines from three representative tumor models, breast cancer, glioblastoma and ovarian cancer, revealing an extensive and systematic rewiring of histone marks in cell culture conditions, which includes a decrease of H3K27me2/me3, H3K79me1/me2 and H3K9ac/K14ac, and an increase of H3K36me1/me2. While some changes occur in short-term primary cultures, most of them are instead time-dependent and appear only in long-term cultures. Remarkably, such changes mostly revert in cell line- and primary cell-derived in vivo xenograft models. Taken together, these results support the use of xenografts as the most representative models of in vivo epigenetic processes, suggesting caution when using cultured cells, in particular cell lines and long-term primary cultures, for epigenetic investigations.

  11. Business model framework applications in health care: A systematic review.

    Science.gov (United States)

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-11-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  12. Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items

    Science.gov (United States)

    Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André

    2016-01-01

    Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…

  13. Prediction of pre-eclampsia: a protocol for systematic reviews of test accuracy

    Directory of Open Access Journals (Sweden)

    Khan Khalid S

    2006-10-01

    Background: Pre-eclampsia, a syndrome of hypertension and proteinuria, is a major cause of maternal and perinatal morbidity and mortality. Accurate prediction of pre-eclampsia is important, since high-risk women could benefit from intensive monitoring and preventive treatment. However, decision making is currently hampered by the lack of precise and up-to-date comprehensive evidence summaries on estimates of the risk of developing pre-eclampsia. Methods/Design: A series of systematic reviews and meta-analyses will be undertaken to determine, among women in early pregnancy, the accuracy of various tests (history, examinations and investigations) for predicting pre-eclampsia. We will search Medline, Embase, the Cochrane Library, MEDION, citation lists of review articles and eligible primary articles, and will contact experts in the field. Reviewers working independently will select studies, extract data, and assess study validity according to established criteria. Language restrictions will not be applied. Bivariate meta-analysis of sensitivity and specificity will be considered for tests whose studies allow generation of 2 × 2 tables. Discussion: The results of the test accuracy reviews will be integrated with results of effectiveness reviews of preventive interventions to assess the impact of test-intervention combinations for prevention of pre-eclampsia.
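
    The 2 × 2 tables mentioned feed a bivariate model through each study's logit sensitivity and specificity together with their within-study variances. A sketch of that data-preparation step follows; the continuity correction and the study counts are illustrative assumptions, not data from the protocol.

      import numpy as np

      def logit_accuracy(tp, fn, fp, tn, cc=0.5):
          # Continuity correction cc avoids infinite logits for zero cells.
          tp, fn, fp, tn = (np.asarray(v, float) + cc for v in (tp, fn, fp, tn))
          sens, spec = tp / (tp + fn), tn / (tn + fp)
          logit = lambda p: np.log(p / (1 - p))
          var_sens = 1 / tp + 1 / fn    # delta-method variance of logit(sens)
          var_spec = 1 / tn + 1 / fp    # delta-method variance of logit(spec)
          return logit(sens), var_sens, logit(spec), var_spec

      # Three hypothetical primary studies, cells ordered (TP, FN, FP, TN):
      tp, fn, fp, tn = [40, 12, 55], [10, 8, 15], [5, 3, 9], [100, 60, 140]
      print(logit_accuracy(tp, fn, fp, tn))

    These per-study pairs are then modelled jointly (a bivariate normal on the logits) so that the correlation between sensitivity and specificity across studies is respected.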

  14. The diagnostic accuracy of serological tests for Lyme borreliosis in Europe: a systematic review and meta-analysis.

    Science.gov (United States)

    Leeflang, M M G; Ang, C W; Berkhout, J; Bijlmer, H A; Van Bortel, W; Brandenburg, A H; Van Burgel, N D; Van Dam, A P; Dessau, R B; Fingerle, V; Hovius, J W R; Jaulhac, B; Meijer, B; Van Pelt, W; Schellekens, J F P; Spijker, R; Stelma, F F; Stanek, G; Verduyn-Lunel, F; Zeller, H; Sprong, H

    2016-03-25

    Interpretation of serological assays in Lyme borreliosis requires an understanding of the clinical indications and the limitations of the currently available tests. We therefore systematically reviewed the accuracy of serological tests for the diagnosis of Lyme borreliosis in Europe. We searched EMBASE and MEDLINE and contacted experts. Studies evaluating the diagnostic accuracy of serological assays for Lyme borreliosis in Europe were eligible. Study selection and data extraction were done by two authors independently. We assessed study quality using the QUADAS-2 checklist. We used a hierarchical summary ROC meta-regression method for the meta-analyses. Potential sources of heterogeneity were test type, commercial or in-house status, Ig type, antigen type and study quality. These were added as covariates to the model to assess their effect on test accuracy. Seventy-eight studies evaluating an enzyme-linked immunosorbent assay (ELISA) or an immunoblot assay against a reference standard of clinical criteria were included. None of the studies had a low risk of bias for all QUADAS-2 domains. Sensitivity was highly heterogeneous, with summary estimates: erythema migrans 50% (95% CI 40% to 61%); neuroborreliosis 77% (95% CI 67% to 85%); acrodermatitis chronica atrophicans 97% (95% CI 94% to 99%); unspecified Lyme borreliosis 73% (95% CI 53% to 87%). Specificity was around 95% in studies with healthy controls, but around 80% in cross-sectional studies. Two-tiered algorithms or antibody indices did not outperform single-test approaches. The observed heterogeneity and risk of bias complicate the extrapolation of our results to clinical practice. The usefulness of the serological tests for Lyme disease depends on the pre-test probability and subsequent predictive values in the setting where the tests are being used. Future diagnostic accuracy studies should be prospectively planned cross-sectional studies, done in settings where the test will be used in practice.

  15. Systematic comparison of model polymer nanocomposite mechanics.

    Science.gov (United States)

    Xiao, Senbo; Peter, Christine; Kremer, Kurt

    2016-09-13

    Polymer nanocomposites render a range of outstanding materials, from natural products such as silk, sea shells and bones to synthesized nanoclay- or carbon-nanotube-reinforced polymer systems. In contrast to the fast-expanding interest in this type of material, the fundamental mechanisms of their mixing, phase behavior and reinforcement, especially for the higher nanoparticle content relevant to bio-inorganic composites, are still not fully understood. Although polymer nanocomposites exhibit diverse morphologies, qualitatively their mechanical properties are believed to be governed by a few parameters, namely their internal polymer network topology, nanoparticle volume fraction, particle surface properties and so on. Relating material mechanics to such elementary parameters is the purpose of this work. By taking a coarse-grained molecular modeling approach, we study a range of different polymer nanocomposites. We vary polymer-nanoparticle connectivity, surface geometry and volume fraction to systematically study rheological/mechanical properties. Our models cover different materials and reproduce key characteristics of real nanocomposites, such as phase separation and mechanical reinforcement. The results shed light on establishing elementary structure-property-function relationships for polymer nanocomposites.

  16. POC CD4 Testing Improves Linkage to HIV Care and Timeliness of ART Initiation in a Public Health Approach: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Lara Vojnov

    CD4 cell count is an important test in HIV programs for baseline risk assessment, monitoring of ART where viral load is not available, and, in many settings, antiretroviral therapy (ART) initiation decisions. However, access to CD4 testing is limited, in part due to the centralized conventional laboratory network. Point-of-care (POC) CD4 testing has the potential to address some of the challenges of centralized CD4 testing and delays in delivery of timely testing and ART initiation. We conducted a systematic review and meta-analysis to identify the extent to which POC improves linkages to HIV care and timeliness of ART initiation. We searched two databases and four conference sites between January 2005 and April 2015 for studies reporting test turnaround times, proportion of results returned, and retention associated with the use of point-of-care CD4. Random effects models were used to estimate pooled risk ratios, pooled proportions, and 95% confidence intervals. We identified 30 eligible studies, most of which were completed in Africa. Test turnaround times were reduced with the use of POC CD4. The time from HIV diagnosis to CD4 test was reduced from 10.5 days with conventional laboratory-based testing to 0.1 days with POC CD4 testing. Retention along several steps of the treatment initiation cascade was significantly higher with POC CD4 testing, notably from HIV testing to CD4 testing, receipt of results, and pre-CD4 test retention (all p<0.001). Furthermore, retention between CD4 testing and ART initiation increased with POC CD4 testing compared to conventional laboratory-based testing (p = 0.01). We also carried out a non-systematic review of the literature, observing that POC CD4 increased the projected life expectancy, was cost-effective, and was acceptable. POC CD4 technologies reduce the time and increase patient retention along the testing and treatment cascade compared to conventional laboratory-based testing. POC CD4 is, therefore, a useful tool
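
    A standard way to obtain pooled risk ratios with 95% confidence intervals from such data is DerSimonian-Laird random-effects pooling on the log scale. The following self-contained sketch shows the mechanics; the study numbers are made up, and this is not necessarily the exact model used in the review.

      import numpy as np

      def pooled_risk_ratio(e_t, n_t, e_c, n_c):
          # Per-study log risk ratios and their approximate variances.
          log_rr = np.log((e_t / n_t) / (e_c / n_c))
          var = 1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c
          w = 1 / var                                   # fixed-effect weights
          mu_fe = np.sum(w * log_rr) / np.sum(w)
          q = np.sum(w * (log_rr - mu_fe) ** 2)         # Cochran's Q
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)  # between-study variance
          w_re = 1 / (var + tau2)                       # random-effects weights
          mu = np.sum(w_re * log_rr) / np.sum(w_re)
          se = np.sqrt(1 / np.sum(w_re))
          return np.exp(mu), np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)

      # Hypothetical studies: retained/total with POC CD4 vs conventional testing.
      rr, lo, hi = pooled_risk_ratio(np.array([80., 45., 60.]), np.array([100., 60., 90.]),
                                     np.array([50., 30., 40.]), np.array([100., 60., 90.]))
      print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")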

  17. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security...

  18. Measuring and modelling the effects of systematic non-adherence to mass drug administration

    Directory of Open Access Journals (Sweden)

    Louise Dyson

    2017-03-01

    It is well understood that the success or failure of a mass drug administration campaign critically depends on the level of coverage achieved. To that end, coverage levels are often closely scrutinised during campaigns, and the response to underperforming campaigns is to attempt to improve coverage. Modelling work has indicated, however, that the quality of the coverage achieved may also have a significant impact on the outcome. If the coverage achieved is likely to miss similar people every round, this can have a seriously detrimental effect on the campaign outcome. We begin by reviewing the current modelling descriptions of this effect and introduce a new modelling framework that can be used to simulate a given level of systematic non-adherence. We formalise the likelihood that people may miss several rounds of treatment using the correlation in the attendance of different rounds. Using two very simplified models of infection, for helminths and non-helminths respectively, we demonstrate that the modelling description used and the correlation included between treatment rounds can have a profound effect on the time to elimination of disease in a population. It is therefore clear that more detailed coverage data are required to accurately predict the time to disease elimination. We review published coverage data in which individuals are asked how many previous rounds they have attended, and show how this information may be used to assess the level of systematic non-adherence. We note that while the coverages in the data found range from 40.5% to 95.5%, the correlations found still lie in a fairly narrow range (between 0.2806 and 0.5351). This indicates that the level of systematic non-adherence may be similar even in data from different years, countries, diseases and administered drugs.
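
    The paper's central quantity, a correlation between attendance at different treatment rounds, can be made concrete with a latent-variable (Gaussian copula) simulation: each person carries a persistent propensity to attend, and the share of variance it explains sets the between-round correlation. This is an illustrative construction, not the authors' framework.

      import numpy as np
      from scipy.stats import norm

      def simulate_attendance(n_people, n_rounds, coverage, rho, seed=0):
          # Latent normal = person effect (variance rho) + round noise (1 - rho);
          # thresholding at the coverage quantile keeps the marginal coverage exact.
          rng = np.random.default_rng(seed)
          person = rng.normal(0.0, np.sqrt(rho), (n_people, 1))
          noise = rng.normal(0.0, np.sqrt(1.0 - rho), (n_people, n_rounds))
          return (person + noise) < norm.ppf(coverage)

      att = simulate_attendance(100_000, 5, coverage=0.7, rho=0.4)
      print(att.mean())                               # ~0.70 per-round coverage
      # Note: the correlation of the binary indicators sits somewhat below the
      # latent rho = 0.4, as for any thresholded Gaussian model.
      print(np.corrcoef(att[:, 0], att[:, 1])[0, 1])

    Setting rho = 0 recovers fully random (non-systematic) coverage, while rho approaching 1 makes exactly the same people miss every round, the worst case for elimination.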

  19. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    NARCIS (Netherlands)

    Ratter, Julia; Radlinger, Lorenz; Lucas, Cees

    2014-01-01

    Are submaximal and maximal exercise tests reliable, valid and acceptable in people with chronic pain, fibromyalgia and fatigue disorders? Systematic review of studies of the psychometric properties of exercise tests. People older than 18 years with chronic pain, fibromyalgia and chronic fatigue

  20. Systematically too low values of the cranking model collective inertia parameters

    International Nuclear Information System (INIS)

    Dudek, I.; Dudek, W.; Lukasiak-Ruchowska, E.; Skalski, I.

    1980-01-01

    Deformed Nilsson and Woods-Saxon potentials were employed for generating single-particle states used henceforth for calculating the inertia tensor (cranking model and monopole pairing) and the collective energy surfaces (Strutinsky method). The deformation was parametrized in terms of quadrupole and hexadecapole degrees of freedom. The classical energy expression obtained from the inertia tensor and energy surfaces was quantized, and the resulting stationary Schroedinger equation was solved using an approximate method. The energies of the second I^π = 0+ collective levels (0+2) were calculated for the rare earth and actinide nuclei and the results compared with the experimental data. The vibrational level energies agree with the experimental ones much better for spherical nuclei, for both single-particle potentials; the calculated energies for deformed nuclei overestimate the experimental values by roughly a factor of two. It is argued that coupling of the axially symmetric quadrupole degrees of freedom to non-axial and hexadecapole ones does not affect the conclusions about systematically too low mass-parameter values. An alternative explanation of the systematic deviations of the 0+2 level energies could be a systematically too high stiffness of the energy surfaces obtained with the Strutinsky method. (orig.)
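
    For reference, the quantity at issue is the cranking-model mass (inertia) parameter, which with monopole pairing is typically evaluated from the standard quasiparticle expression (our rendering of the textbook formula, not an equation quoted from this paper):

      B_{qq'} = 2\hbar^2 \sum_{\mu\nu}
                \frac{\langle\mu|\partial\hat{H}/\partial q|\nu\rangle
                      \langle\nu|\partial\hat{H}/\partial q'|\mu\rangle}
                     {(E_\mu + E_\nu)^3}
                \,(u_\mu v_\nu + u_\nu v_\mu)^2,

    where E_mu are quasiparticle energies, u and v are BCS occupation amplitudes, and q, q' run over the quadrupole and hexadecapole deformations. Since a vibrational energy scales roughly as sqrt(C/B) for stiffness C and inertia B, systematically too-low B values (or too-high stiffness from the Strutinsky surfaces) push the calculated 0+2 energies up, consistent with the factor-of-two overestimate reported above.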

  1. Clinical information modeling processes for semantic interoperability of electronic health records: systematic review and inductive analysis.

    Science.gov (United States)

    Moreno-Conde, Alberto; Moner, David; Cruz, Wellington Dimas da; Santos, Marcelo R; Maldonado, José Alberto; Robles, Montserrat; Kalra, Dipak

    2015-07-01

    This systematic review aims to identify and compare the existing processes and methodologies that have been published in the literature for defining clinical information models (CIMs) that support the semantic interoperability of electronic health record (EHR) systems. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) systematic review methodology, the authors reviewed published papers between 2000 and 2013 that covered the semantic interoperability of EHRs, found by searching the PubMed, IEEE Xplore, and ScienceDirect databases. Additionally, after selection of a final group of articles, an inductive content analysis was done to summarize the steps and methodologies followed in order to build the CIMs described in those articles. Three hundred and seventy-eight articles were screened and thirty-six were selected for full review. The articles selected for full review were analyzed to extract relevant information for the analysis and characterized according to the steps the authors had followed for clinical information modeling. Most of the reviewed papers lack a detailed description of the modeling methodologies used to create CIMs. A representative example is the lack of description related to the definition of terminology bindings and the publication of the generated models. However, this systematic review confirms that most clinical information modeling activities follow very similar steps for the definition of CIMs. Having a robust and shared methodology could improve their correctness, reliability, and quality. Independently of implementation technologies and standards, it is possible to find common patterns in methods for developing CIMs, suggesting the viability of defining a unified good-practice methodology to be used by any clinical information modeler.

  2. Testing the standard model of particle physics using lattice QCD

    International Nuclear Information System (INIS)

    Water, Ruth S van de

    2007-01-01

    Recent advances in both computers and algorithms now allow realistic calculations of Quantum Chromodynamics (QCD) interactions using the numerical technique of lattice QCD. The methods used in so-called '2+1 flavor' lattice calculations have been verified both by post-dictions of quantities that were already experimentally well-known and by predictions that occurred before the relevant experimental determinations were sufficiently precise. This suggests that the sources of systematic error in lattice calculations are under control, and that lattice QCD can now be reliably used to calculate those weak matrix elements that cannot be measured experimentally but are necessary to interpret the results of many high-energy physics experiments. These same calculations also allow stringent tests of the Standard Model of particle physics, and may therefore lead to the discovery of new physics in the future

  3. Improving the Diagnosis of Legionella Pneumonia within a Healthcare System through a Systematic Consultation and Testing Program.

    Science.gov (United States)

    Decker, Brooke K; Harris, Patricia L; Muder, Robert R; Hong, Jae H; Singh, Nina; Sonel, Ali F; Clancy, Cornelius J

    2016-08-01

    Legionella testing is not recommended for all patients with pneumonia, but rather for particular patient subgroups. As a result, the overall incidence of Legionella pneumonia may be underestimated. To determine the incidence of Legionella pneumonia in a veteran population in an endemic area after introduction of a systematic infectious diseases consultation and testing program. In response to a 2011-2012 outbreak, the VA Pittsburgh Healthcare System mandated infectious diseases consultations and testing for Legionella by urine antigen and sputum culture in all patients with pneumonia. Between January 2013 and December 2015, 1,579 cases of pneumonia were identified. The incidence of pneumonia was 788/100,000 veterans per year, including 352/100,000 veterans per year and 436/100,000 veterans per year with community-associated pneumonia (CAP) and health care-associated pneumonia, respectively. Ninety-eight percent of patients with suspected pneumonia were tested for Legionella by at least one method. Legionella accounted for 1% of pneumonia cases (n = 16), including 1.7% (12/706) and 0.6% (4/873) of CAP and health care-associated pneumonia, respectively. The yearly incidences of Legionella pneumonia and Legionella CAP were 7.99 and 5.99/100,000 veterans, respectively. The sensitivities of urine antigen and sputum culture were 81% and 60%, respectively; the specificity of urine antigen was >99.97%. Urine antigen testing and Legionella cultures increased by 65% and 330%, respectively, after introduction of our program. Systematic testing of veterans in an endemic area revealed a higher incidence of Legionella pneumonia and CAP than previously reported. Widespread urine antigen testing was not limited by false positivity.

  4. Economic Evaluations of Multicomponent Disease Management Programs with Markov Models: A Systematic Review.

    Science.gov (United States)

    Kirsch, Florian

    2016-12-01

    Disease management programs (DMPs) for chronic diseases are being increasingly implemented worldwide. The objectives were to present a systematic overview of the economic effects of DMPs with Markov models, to assess the quality of the models, to examine the method by which the DMP intervention is incorporated into the model, and to consider the differences in the structure and data used in the models. A literature search was conducted; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement was followed to ensure systematic selection of the articles. Study characteristics, e.g. results, the intensity of the DMP and usual care, model design, time horizon, discount rates, utility measures, and cost-of-illness, were extracted from the reviewed studies. Model quality was assessed by two researchers with two different appraisals: one proposed by Philips et al. (Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71) and the other proposed by Caro et al. (Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014;17:174-82). A total of 16 studies (9 on chronic heart disease, 2 on asthma, and 5 on diabetes) met the inclusion criteria. Five studies reported cost savings and 11 studies reported additional costs. In the quality assessment, the overall score of the models ranged from 39% to 65% on the first appraisal and from 34% to 52% on the second. Eleven models integrated effectiveness derived from a clinical trial or a meta-analysis of complete DMPs and only five models combined intervention effects from different sources into a DMP. The main limitations of the models are poor reporting practice and the variation in the selection of input parameters. Eleven of the 14 studies reported cost-effectiveness results of less than $30,000 per quality-adjusted life-year and
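
    The Markov models being reviewed share simple cohort mechanics: a state-occupancy vector is propagated through a transition matrix once per cycle while discounted costs and QALYs accumulate. A minimal three-state sketch follows; all transition probabilities, costs, and utilities are hypothetical placeholders.

      import numpy as np

      # States: 0 = well, 1 = sick, 2 = dead; yearly cycles, rows sum to 1.
      P = np.array([[0.85, 0.10, 0.05],
                    [0.10, 0.75, 0.15],
                    [0.00, 0.00, 1.00]])
      cost = np.array([500.0, 4000.0, 0.0])     # cost per state-year
      utility = np.array([0.95, 0.60, 0.0])     # QALY weight per state-year

      def run_markov(P, cost, utility, horizon=30, discount=0.03):
          state = np.array([1.0, 0.0, 0.0])     # whole cohort starts well
          total_cost = total_qaly = 0.0
          for t in range(horizon):
              d = 1.0 / (1.0 + discount) ** t
              total_cost += d * state @ cost
              total_qaly += d * state @ utility
              state = state @ P                 # advance the cohort one cycle
          return total_cost, total_qaly

      c0, q0 = run_markov(P, cost, utility)
      print(f"discounted cost: {c0:.0f}, discounted QALYs: {q0:.2f}")

    A DMP intervention enters such a model by modifying transition probabilities, costs, or utilities; the incremental cost-effectiveness ratio is then the cost difference between the two runs divided by the QALY difference.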

  5. A new fit-for-purpose model testing framework: Decision Crash Tests

    Science.gov (United States)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing, the Klemeš Crash Tests (KCTs), i.e., the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) renamed as KCTs, has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building

  6. 46 CFR 154.449 - Model test.

    Science.gov (United States)

    2010-10-01

    46 CFR § 154.449 (2010-10-01): Shipping; Coast Guard, Department of Homeland Security (continued); Certain Bulk Dangerous Cargoes; Safety Standards for Self… § 154.449 Model test. The following analyzed data of a model test of structural elements for independent…

  7. Diagnostic performance of urine dipstick testing in children with suspected UTI: a systematic review of relationship with age and comparison with microscopy.

    Science.gov (United States)

    Mori, R; Yonemoto, N; Fitzgerald, A; Tullus, K; Verrier-Jones, K; Lakhanpaul, M

    2010-04-01

    Prompt diagnosis of urinary tract infection (UTI) in children is needed to initiate treatment but is difficult to establish without urine testing, and reliance on culture leads to delay. Urine dipsticks are often used as an alternative to microscopy, although the diagnostic performance of dipsticks at different ages has not been established systematically. Studies comparing urine dipstick testing in infants versus older children and urine dipstick versus microscopy were systematically searched and reviewed. Meta-analysis of available studies was conducted. Six studies addressed these questions. The results of the meta-analysis showed that the performance of urine dipstick testing was significantly lower in younger children than in older children (p < 0.05), suggesting that dipstick testing is more dependable for diagnosing UTI in children over 2 years than in younger children.

  8. Tree-Based Global Model Tests for Polytomous Rasch Models

    Science.gov (United States)

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  9. Health-care providers' experiences with opt-out HIV testing: a systematic review.

    Science.gov (United States)

    Leidel, Stacy; Wilson, Sally; McConigley, Ruth; Boldy, Duncan; Girdler, Sonya

    2015-01-01

    HIV is now a manageable chronic disease with a good prognosis, but early detection and referral for treatment are vital. In opt-out HIV testing, patients are informed that they will be tested unless they decline. This qualitative systematic review explored the experiences, attitudes, barriers, and facilitators of opt-out HIV testing from a health-care provider (HCP) perspective. Four articles were included in the synthesis and reported on findings from approximately 70 participants, representing diverse geographical regions and a range of human development status and HIV prevalence. Two synthesized findings emerged: HCP attitudes and systems. The first synthesized finding encompassed HCP decision-making attitudes about who and when to test for HIV. It also included the assumptions the HCPs made about patient consequences. The second synthesized finding related to systems. System-related barriers to opt-out HIV testing included lack of time, resources, and adequate training. System-related facilitators included integration into standard practice, support of the medical setting, and electronic reminders. A common attitude among HCPs was the outdated notion that HIV is a terrible disease that equates to certain death. Some HCPs stated that offering the HIV test implied that the patient had engaged in immoral behaviour, which could lead to stigma or disengagement with health services. This paternalism diminished patient autonomy, because patients who were excluded from opt-out HIV testing could have benefited from it. One study highlighted the positive aspects of opt-out HIV testing, in which participants underscored the professional satisfaction that arose from making an HIV diagnosis, particularly when marginalized patients could be connected to treatment and social services. Recommendations for opt-out HIV testing should be disseminated to HCPs in a broad range of settings. Implementation of system-related factors such as electronic reminders and care coordination

  10. 46 CFR 154.431 - Model test.

    Science.gov (United States)

    2010-10-01

    46 CFR § 154.431 (2010-10-01): Shipping; Coast Guard… § 154.431 Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints… (c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank…

  11. Should trained lay providers perform HIV testing? A systematic review to inform World Health Organization guidelines.

    Science.gov (United States)

    Kennedy, C E; Yeh, P T; Johnson, C; Baggaley, R

    2017-12-01

    New strategies for HIV testing services (HTS) are needed to achieve UN 90-90-90 targets, including diagnosis of 90% of people living with HIV. Task-sharing HTS to trained lay providers may alleviate health worker shortages and better reach target groups. We conducted a systematic review of studies evaluating HTS by lay providers using rapid diagnostic tests (RDTs). Peer-reviewed articles were included if they compared HTS using RDTs performed by trained lay providers to HTS by health professionals, or to no intervention. We also reviewed data on end-users' values and preferences around lay providers performing HTS. Searching was conducted through 10 online databases, reviewing reference lists, and contacting experts. Screening and data abstraction were conducted in duplicate using systematic methods. Of 6113 unique citations identified, 5 studies were included in the effectiveness review and 6 in the values and preferences review. One US-based randomized trial found patients' uptake of HTS doubled with lay providers (57% vs. 27%, percent difference: 30, 95% confidence interval: 27-32, p < 0.001). Studies from Cambodia, Malawi, and South Africa comparing testing quality between lay providers and laboratory staff found little discordance and high sensitivity and specificity (≥98%). Values and preferences studies generally found support for lay providers conducting HTS, particularly in non-hypothetical scenarios. Based on evidence supporting using trained lay providers, a WHO expert panel recommended lay providers be allowed to conduct HTS using HIV RDTs. Uptake of this recommendation could expand HIV testing to more people globally.

  12. A test of systematic coarse-graining of molecular dynamics simulations: Thermodynamic properties

    Science.gov (United States)

    Fu, Chia-Chun; Kulkarni, Pandurang M.; Scott Shell, M.; Gary Leal, L.

    2012-10-01

    Coarse-graining (CG) techniques have recently attracted great interest for providing descriptions at a mesoscopic level of resolution that preserve fluid thermodynamic and transport behaviors with a reduced number of degrees of freedom and hence less computational effort. One fundamental question arises: how well and to what extent can a "bottom-up" developed mesoscale model recover the physical properties of a molecular scale system? To answer this question, we explore systematically the properties of a CG model that is developed to represent an intermediate mesoscale model between the atomistic and continuum scales. This CG model aims to reduce the computational cost relative to a full atomistic simulation, and we assess to what extent it is possible to preserve both the thermodynamic and transport properties of an underlying reference all-atom Lennard-Jones (LJ) system. In this paper, only the thermodynamic properties are considered in detail. The transport properties will be examined in subsequent work. To coarse-grain, we first use the iterative Boltzmann inversion (IBI) to determine a CG potential for a (1-ϕ)N mesoscale particle system, where ϕ is the degree of coarse-graining, so as to reproduce the radial distribution function (RDF) of an N atomic particle system. Even though the uniqueness theorem guarantees a one to one relationship between the RDF and an effective pairwise potential, we find that RDFs are insensitive to the long-range part of the IBI-determined potentials, which provides some significant flexibility in further matching other properties. We then propose a reformulation of IBI as a robust minimization procedure that enables simultaneous matching of the RDF and the fluid pressure. We find that this new method mainly changes the attractive tail region of the CG potentials, and it improves the isothermal compressibility relative to pure IBI. We also find that there are optimal interaction cutoff lengths for the CG system, as a function of
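
    The IBI scheme that this work builds on updates a tabulated pair potential with the logarithmic mismatch between the current and target RDFs, V_{i+1}(r) = V_i(r) + k_B T ln[g_i(r)/g_target(r)]. A schematic loop is shown below; run_cg_simulation stands in for whatever CG molecular dynamics or Monte Carlo engine returns the RDF for a given potential.

      import numpy as np

      def ibi_update(V, g_current, g_target, kT=1.0, alpha=0.2, eps=1e-12):
          # One iterative Boltzmann inversion step on a tabulated potential V(r);
          # alpha damps the update for stability, eps guards against log(0).
          return V + alpha * kT * np.log((g_current + eps) / (g_target + eps))

      def ibi(g_target, run_cg_simulation, r, kT=1.0, n_iter=50, tol=1e-3):
          V = -kT * np.log(g_target + 1e-12)   # direct Boltzmann inversion as guess
          for _ in range(n_iter):
              g = run_cg_simulation(V, r)      # black-box CG engine -> current RDF
              if np.max(np.abs(g - g_target)) < tol:
                  break
              V = ibi_update(V, g, g_target, kT)
          return V

    The pressure matching described in the abstract can be bolted onto such a loop, for example through the common linear "ramp" correction to the potential tail, or, as the authors propose, by recasting the whole procedure as a joint minimization over the RDF and the pressure.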

  13. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times, using random selections of image systems with and without spectroscopic redshifts, and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image-plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  14. The Reporting Quality of Systematic Reviews and Meta-Analyses in Industrial and Organizational Psychology: A Systematic Review.

    Science.gov (United States)

    Schalken, Naomi; Rietbergen, Charlotte

    2017-01-01

    Objective: The goal of this systematic review was to examine the reporting quality of the method section of quantitative systematic reviews and meta-analyses from 2009 to 2016 in the field of industrial and organizational psychology with the help of the Meta-Analysis Reporting Standards (MARS), and to update previous research such as the studies of Aytug et al. (2012) and Dieckmann et al. (2009). Methods: A systematic search for quantitative systematic reviews and meta-analyses was conducted in the top 10 journals in the field of industrial and organizational psychology between January 2009 and April 2016. Data were extracted on study characteristics and items of the method section of MARS. A cross-classified multilevel model was analyzed to test whether publication year and journal impact factor (JIF) were associated with the reporting quality scores of articles. Results: Compliance with MARS in the method section was generally inadequate in the random sample of 120 articles. Variation existed in the reporting of items. There were no significant effects of publication year or JIF on the reporting quality scores of articles. Conclusions: The reporting quality in the method section of systematic reviews and meta-analyses was still insufficient; we therefore recommend that researchers improve the reporting in their articles by using reporting standards like MARS.

  15. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
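
    The flavor of modeling test statistics directly can be conveyed with a two-component normal mixture fitted by EM, where posterior null probabilities are thresholded to control a Bayesian FDR. This is a deliberately simplified illustration, not the authors' exact model.

      import numpy as np
      from scipy.stats import norm

      def fit_mixture(z, n_iter=200):
          # EM for z ~ pi0 * N(0, 1) + (1 - pi0) * N(mu, sigma^2).
          pi0, mu, sigma = 0.9, 2.0, 1.0
          for _ in range(n_iter):
              f0 = pi0 * norm.pdf(z)                      # null component
              f1 = (1 - pi0) * norm.pdf(z, mu, sigma)     # alternative component
              p1 = f1 / (f0 + f1)                         # E-step: P(alt | z)
              pi0 = 1.0 - p1.mean()                       # M-step updates
              mu = np.sum(p1 * z) / p1.sum()
              sigma = np.sqrt(np.sum(p1 * (z - mu) ** 2) / p1.sum())
          return pi0, mu, sigma, 1.0 - p1                 # last: P(null | z)

      def bayesian_fdr_reject(p_null, level=0.10):
          # Reject the statistics with the smallest posterior null probabilities
          # while their running mean (the Bayesian FDR of the set) stays below level.
          order = np.argsort(p_null)
          running = np.cumsum(p_null[order]) / np.arange(1, len(p_null) + 1)
          k = int(np.sum(running <= level))
          reject = np.zeros(len(p_null), dtype=bool)
          reject[order[:k]] = True
          return reject

      rng = np.random.default_rng(2)
      z = np.concatenate([rng.normal(size=900), rng.normal(3.0, 1.0, size=100)])
      pi0, mu, sigma, p_null = fit_mixture(z)
      print(round(pi0, 2), round(mu, 2), bayesian_fdr_reject(p_null).sum())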

  16. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  17. A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population

    Directory of Open Access Journals (Sweden)

    Sujitkumar S. Hiwale

    2017-12-01

    Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors on the Indian population, with a general tendency to overestimate fetal weight in the LBW category and to underestimate it in the HBW category. We also observed that these models have a limited ability to predict babies at risk of either low or high birth weight. It is recommended that clinicians consider all these factors while interpreting the estimated weight given by the existing models.

  18. Intervention Strategies Based on Information-Motivation-Behavioral Skills Model for Health Behavior Change: A Systematic Review

    OpenAIRE

    Chang, Sun Ju; Choi, Suyoung; Kim, Se-An; Song, Misoon

    2014-01-01

    Purpose: This study systematically reviewed research on behavioral interventions based on the information-motivation-behavioral skills (IMB) model to investigate specific intervention strategies that focus on information, motivation, and behavioral skills and to evaluate their effectiveness for people with chronic diseases. Methods: A systematic review was conducted in accordance with the guidelines of both the National Evidence-based Healthcare Collaborating Agency and Im and Chang. A lit...

  19. Integration and consistency testing of groundwater flow models with hydro-geochemistry in site investigations in Finland

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Loefman, J.; Korkealaakso, J.; Koskinen, L.; Ruotsalainen, P.; Hautojaervi, A.; Aeikaes, T.

    1999-01-01

    In the assessment of the suitability and safety of a geological repository for radioactive waste, an understanding of the fluid flow at a site is essential. In order to build confidence in the assessment of the hydrogeological performance of a site under various conditions, the integration of hydrological and hydrogeochemical methods and studies provides the primary means for investigating the evolution that has taken place in the past, and for predicting future conditions at the potential disposal site. A systematic geochemical sampling campaign was started at the beginning of the 1990s in the Finnish site investigation programme. This enabled the integration and evaluation of site-scale hydrogeochemical and groundwater flow models to begin. Hydrogeochemical information has been used to screen relevant external processes and variables for the definition of the initial and boundary conditions in hydrological simulations. The results obtained from interpreting and modelling hydrogeochemical evolution have been employed in testing the hydrogeochemical consistency of conceptual flow models. Integration and testing of flow models with hydrogeochemical information are considered to significantly improve the hydrogeological understanding of a site and to increase confidence in conceptual hydrogeological models. (author)

  20. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  1. Antenatal HIV Testing in Sub-Saharan Africa During the Implementation of the Millennium Development Goals: A Systematic Review Using the PEN-3 Cultural Model.

    Science.gov (United States)

    Blackstone, Sarah R; Nwaozuru, Ucheoma; Iwelunmor, Juliet

    2018-01-01

    This study systematically explored the barriers and facilitators to routine antenatal HIV testing from the perspective of pregnant women in sub-Saharan Africa during the implementation period of the Millennium Development Goals. Articles published between 2000 and 2015 were selected after reviewing the title, abstract, and references. Twenty-seven studies published in 11 African countries were eligible for the current study and reviewed. The most common barriers identified include communication with male partners, patient convenience and accessibility, health system and health-care provider issues, fear of disclosure, HIV-related stigma, the burden of other responsibilities at home, and the perception of antenatal care as a "woman's job." Routine testing among pregnant women is crucial for the eradication of infant and child HIV infections. Further understanding the interplay of social and cultural factors, particularly the role of women in intimate relationships and the influence of men on antenatal care seeking behaviors, is necessary to continue the work of the Millennium Development Goals.

  2. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  3. Tests for the Assessment of Sport-Specific Performance in Olympic Combat Sports: A Systematic Review With Practical Recommendations.

    Science.gov (United States)

    Chaabene, Helmi; Negra, Yassine; Bouguezzi, Raja; Capranica, Laura; Franchini, Emerson; Prieske, Olaf; Hbacha, Hamdi; Granacher, Urs

    2018-01-01

    The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessment in Olympic combat sports like amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included that reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) used small sample sizes, and the reliability reported for sport-specific tests varied widely (intraclass correlation coefficient [ICC] = 0.43-1.00). Content validity was addressed in all included studies, criterion validity (only the concurrent aspect of it) in approximately half of the studies, with correlation coefficients ranging from r = -0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport-specific test. In 28% of the included studies, insufficient information or a complete lack of information was provided on the respective field of test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.
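
    The ICC values quoted typically come from a one-way random-effects ANOVA on a subjects-by-trials score matrix. A minimal single-measure ICC(1,1) computation is sketched below with hypothetical test-retest data.

      import numpy as np

      def icc_1_1(scores):
          # One-way random-effects, single-measure ICC(1,1);
          # scores has shape (n subjects, k trials).
          n, k = scores.shape
          grand = scores.mean()
          subj_means = scores.mean(axis=1)
          msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)              # between subjects
          msw = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))  # within subjects
          return (msb - msw) / (msb + (k - 1) * msw)

      # Hypothetical data: 5 athletes, 2 trials of a sport-specific sprint (s).
      scores = np.array([[4.5, 4.6], [4.8, 4.7], [5.1, 5.2], [4.4, 4.4], [4.9, 5.0]])
      print(round(icc_1_1(scores), 2))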

  4. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron (LEP) collider near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  5. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE), trustworthy high-quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have...... a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub disciplines such as modeling of requirements, use cases...... suggest that our method provides a sound foundation for rapid development of high quality system models.

  6. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part I. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    In the field of diagnostic test accuracy (DTA), the use of systematic review and meta-analyses is steadily increasing. By means of objective evaluation of all available primary studies, these two processes generate an evidence-based systematic summary regarding a specific research topic. The methodology for systematic review and meta-analysis in DTA studies differs from that in therapeutic/interventional studies, and its content is still evolving. Here we review the overall process from a practical standpoint, which may serve as a reference for those who implement these methods.

  7. Clinical outcomes following inpatient penicillin allergy testing: A systematic review and meta-analysis.

    Science.gov (United States)

    Sacco, K A; Bates, A; Brigham, T J; Imam, J S; Burton, M C

    2017-09-01

    A documented penicillin allergy is associated with increased morbidity, including length of hospital stay and an increased incidence of resistant infections attributed to the use of broader-spectrum antibiotics. The aim of the systematic review was to identify whether inpatient penicillin allergy testing affected clinical outcomes during hospitalization. We performed an electronic search of Ovid MEDLINE/PubMed, Embase, Web of Science, Scopus, and the Cochrane Library over the past 20 years. Inpatients having a documented penicillin allergy that underwent penicillin allergy testing were included. Twenty-four studies met eligibility criteria. Study sample size was between 24 and 252 patients in exclusively inpatient cohorts. Penicillin skin testing (PST) with or without oral amoxicillin challenge was the main intervention described (18 studies). The population-weighted mean for a negative PST was 95.1% [CI 93.8-96.1]. Inpatient penicillin allergy testing led to a change in antibiotic selection that was greater in the intensive care unit (77.97% [CI 72.0-83.1] vs 54.73% [CI 51.2-58.2]). Penicillin allergy testing was also associated with decreased healthcare cost in four studies. Inpatient penicillin allergy testing is safe and effective in ruling out penicillin allergy. The rate of negative tests is comparable to outpatient and perioperative data. Patients with a documented penicillin allergy who require penicillin should be tested during hospitalization given its benefit for individual patient outcomes and antibiotic stewardship.

  8. Computer-aided modeling framework – a generic modeling template for catalytic membrane fixed bed reactors

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2013-01-01

    and users to generate and test models systematically, efficiently and reliably. In this way, development of products and processes can be faster, cheaper and very efficient. In this contribution, as part of the framework a generic modeling template for the systematic derivation of problem specific catalytic...... membrane fixed bed models is developed. The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene....

  9. Systematic problems with using dark matter simulations to model stellar halos

    International Nuclear Information System (INIS)

    Bailin, Jeremy; Bell, Eric F.; Valluri, Monica; Stinson, Greg S.; Debattista, Victor P.; Couchman, H. M. P.; Wadsley, James

    2014-01-01

    The limits of available computing power have forced models for the structure of stellar halos to adopt one or both of the following simplifying assumptions: (1) stellar mass can be 'painted' onto dark matter (DM) particles in progenitor satellites; (2) pure DM simulations that do not form a luminous galaxy can be used. We estimate the magnitude of the systematic errors introduced by these assumptions using a controlled set of stellar halo models where we independently vary whether we look at star particles or painted DM particles, and whether we use a simulation in which a baryonic disk galaxy forms or a matching pure DM simulation that does not form a baryonic disk. We find that the 'painting' simplification reduces the halo concentration and internal structure, predominantly because painted DM particles have different kinematics from star particles even when both are buried deep in the potential well of the satellite. The simplification of using pure DM simulations reduces the concentration further, but increases the internal structure, and results in a more prolate stellar halo. These differences can be a factor of 1.5-7 in concentration (as measured by the half-mass radius) and 2-7 in internal density structure. Given this level of systematic uncertainty, one should be wary of overinterpreting differences between observations and the current generation of stellar halo models based on DM-only simulations when such differences are less than an order of magnitude.
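
    The concentration comparisons above hinge on the half-mass radius. As a point of reference, here is a minimal sketch (assuming NumPy; the particle data and function name are invented for illustration) of how such a radius is computed from star-particle positions and masses:

    ```python
    import numpy as np

    def half_mass_radius(pos, mass):
        """Radius enclosing half of the total mass, a simple concentration
        measure. pos: (N, 3) positions relative to the halo centre;
        mass: (N,) particle masses."""
        r = np.linalg.norm(pos, axis=1)
        order = np.argsort(r)
        cumulative = np.cumsum(mass[order])
        return r[order][np.searchsorted(cumulative, 0.5 * cumulative[-1])]

    # Toy usage with an invented, centrally concentrated particle cloud.
    rng = np.random.default_rng(0)
    pos = rng.standard_normal((1000, 3)) * rng.random((1000, 1))
    mass = np.ones(1000)
    print(half_mass_radius(pos, mass))
    ```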

  10. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    2001-01-01

    Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels, and a gas analysing system. The ITER model pump was ordered from the company L'Air Liquide in the form of a NET contract. (author)

  11. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    1999-01-01

    Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels, and a gas analysing system. The ITER model pump was ordered from the company L'Air Liquide in the form of a NET contract. (author)

  12. A systematic review and qualitative analysis to inform the development of a new emergency department-based geriatric case management model.

    Science.gov (United States)

    Sinha, Samir K; Bessman, Edward S; Flomenbaum, Neal; Leff, Bruce

    2011-06-01

    We inform the future development of a new geriatric emergency management practice model. We perform a systematic review of the existing evidence for emergency department (ED)-based case management models designed to improve the health, social, and health service utilization outcomes for noninstitutionalized older patients within the context of an index ED visit. This was a systematic review of English-language articles indexed in MEDLINE and CINAHL (1966 to 2010), describing ED-based case management models for older adults. Bibliographies of the retrieved articles were reviewed to identify additional references. A systematic qualitative case study analytic approach was used to identify the core operational components and outcome measures of the described clinical interventions. The authors of the included studies were also invited to verify our interpretations of their work. The determined patterns of component adherence were then used to postulate the relative importance and effect of the presence or absence of a particular component in influencing the overall effectiveness of their respective interventions. Eighteen of 352 studies (reported in 20 articles) met study criteria. Qualitative analyses identified 28 outcome measures and 8 distinct model characteristic components that included having an evidence-based practice model, nursing clinical involvement or leadership, high-risk screening processes, focused geriatric assessments, the initiation of care and disposition planning in the ED, interprofessional and capacity-building work practices, post-ED discharge follow-up with patients, and evaluation and monitoring processes. Of the 15 positive study results, 6 had all 8 characteristic components and 9 were found to be lacking at least 1 component. Two studies with positive results lacked 2 characteristic components and none lacked more than 2 components. Of the 3 studies with negative results demonstrating no positive effects based on any outcome tested, one …

  13. External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2014-01-01

    Background. Evidence rankings do not consider equally internal validity (IV), external validity (EV), and model validity (MV) for clinical studies, including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias-scoring from inception to Jan 2013. Tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool to incorporate for the evaluation of quality of EV/MV research that is more sensitive to CAM/IHC research. Conclusion. Improved reporting on EV can help produce and provide information that will guide policy makers, public health researchers, and other scientists in their selection, development, and improvement of research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about “real-world” consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on equal footing, will better guide clinical decision making.

  14. Loss reduction in axial-flow compressors through low-speed model testing

    Science.gov (United States)

    Wisler, D. C.

    1984-01-01

    A systematic procedure for reducing losses in axial-flow compressors is presented. In this procedure, a large, low-speed, aerodynamic model of a high-speed core compressor is designed and fabricated based on aerodynamic similarity principles. This model is then tested at low speed, where high-loss regions associated with three-dimensional endwall boundary layers, flow separation, leakage, and secondary flows can be located, detailed measurements made, and loss mechanisms determined with much greater accuracy and much lower cost and risk than is possible in small, high-speed compressors. Design modifications are made by using custom-tailored airfoils and vector diagrams, airfoil endbends, and modified wall geometries in the high-loss regions. The design improvements resulting in reduced loss or increased stall margin are then scaled to high speed. This paper describes the procedure and presents experimental results to show that in some cases endwall loss has been reduced by as much as 10 percent, flow separation has been reduced or eliminated, and stall margin has been substantially improved by using these techniques.
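
    The low-speed model rests on standard aerodynamic similarity: matching the nondimensional flow coefficient (phi = Vx/U) and stage loading coefficient (psi = dh0/U^2) preserves the velocity triangles of the high-speed compressor. The sketch below illustrates only this textbook scaling step, not the paper's specific procedure; the function name and values are invented:

    ```python
    def similar_low_speed_point(phi, psi, u_model):
        """Given the high-speed stage's flow coefficient phi = Vx/U and
        loading coefficient psi = dh0/U**2, return the axial velocity [m/s]
        and stage work [J/kg] the low-speed model must run at so that the
        velocity triangles (and hence the endwall flow physics) match."""
        vx_model = phi * u_model          # matched flow coefficient
        dh0_model = psi * u_model ** 2    # matched loading coefficient
        return vx_model, dh0_model

    # e.g. phi = 0.5, psi = 0.35 at a model blade speed of 60 m/s (invented)
    print(similar_low_speed_point(0.5, 0.35, 60.0))
    ```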

  15. SEMI-EMPIRICAL WHITE DWARF INITIAL-FINAL MASS RELATIONSHIPS: A THOROUGH ANALYSIS OF SYSTEMATIC UNCERTAINTIES DUE TO STELLAR EVOLUTION MODELS

    International Nuclear Information System (INIS)

    Salaris, Maurizio; Serenelli, Aldo; Weiss, Achim; Miller Bertolami, Marcelo

    2009-01-01

    Using the most recent results about white dwarfs (WDs) in ten open clusters, we revisit semiempirical estimates of the initial-final mass relation (IFMR) in star clusters, with emphasis on the use of stellar evolution models. We discuss the influence of these models on each step of the derivation. One intention of our work is to use consistent sets of calculations both for the isochrones and the WD cooling tracks. The second one is to derive the range of systematic errors arising from stellar evolution theory. This is achieved by using different sources for the stellar models and by varying physical assumptions and input data. We find that systematic errors, including the determination of the cluster age, are dominating the initial mass values, while observational uncertainties influence the final mass primarily. After having determined the systematic errors, the initial-final mass relation allows us finally to draw conclusions about the physics of the stellar models, in particular about convective overshooting.

  16. Scaling analysis in modeling transport and reaction processes a systematic approach to model building and the art of approximation

    CERN Document Server

    Krantz, William B

    2007-01-01

    This book is unique as the first effort to expound on the subject of systematic scaling analysis. Not written for a specific discipline, the book targets any reader interested in transport phenomena and reaction processes. The book is logically divided into chapters on the use of systematic scaling analysis in fluid dynamics, heat transfer, mass transfer, and reaction processes. An integrating chapter is included that considers more complex problems involving combined transport phenomena. Each chapter includes several problems that are explained in considerable detail. These are followed by several worked examples for which the general outline for the scaling is given. Each chapter also includes many practice problems. This book is based on recognizing the value of systematic scaling analysis as a pedagogical method for teaching transport and reaction processes and as a research tool for developing and solving models and in designing experiments. Thus, the book can serve as both a textbook and a reference book.

  17. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    OpenAIRE

    Julia Ratter; Lorenz Radlinger; Cees Lucas

    2014-01-01

    Question: Are submaximal and maximal exercise tests reliable, valid and acceptable in people with chronic pain, fibromyalgia and fatigue disorders? Design: Systematic review of studies of the psychometric properties of exercise tests. Participants: People older than 18 years with chronic pain, fibromyalgia and chronic fatigue disorders. Intervention: Studies of the measurement properties of tests of physical capacity in people with chronic pain, fibromyalgia or chronic fatigue disorders were ...

  18. Are chiropractic tests for the lumbo-pelvic spine reliable and valid? A systematic critical literature review

    DEFF Research Database (Denmark)

    Hestbaek, L; Leboeuf-Yde, C

    2000-01-01

    OBJECTIVE: To systematically review the peer-reviewed literature about the reliability and validity of chiropractic tests used to determine the need for spinal manipulative therapy of the lumbo-pelvic spine, taking into account the quality of the studies. DATA SOURCES: The CHIROLARS database......-pelvic spine were included. DATA EXTRACTION: Data quality was assessed independently by the two reviewers, with a quality score based on predefined methodologic criteria. Results of the studies were then evaluated in relation to quality. DATA SYNTHESIS: None of the tests studied had been sufficiently...... evaluated in relation to reliability and validity. Only tests for palpation for pain had consistently acceptable results. Motion palpation of the lumbar spine might be valid but showed poor reliability, whereas motion palpation of the sacroiliac joints seemed to be slightly reliable but was not shown......

  19. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression. This co......-batch bioreactor, where it is illustrated how an incorrectly modelled biomass growth rate can be pinpointed and an estimate provided of the functional relation needed to properly describe it....

  20. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  1. A radioimmunoassay method for the rapid detection of Candida antibodies in experimental systemic candidiasis

    International Nuclear Information System (INIS)

    Huang, Y.; Berry, W.; Cooper, H.; Zachariah, Y.; Newman, T.

    1979-01-01

    Rabbits were employed as experimental models to evaluate a solid-phase radioimmunoassay (RIA) method for the diagnosis of systemic candidiasis. Ten rabbits were inoculated subcutaneously to mimic superficial candidiasis and were found to produce no antibodies to Candida as determined by both immunodiffusion and RIA procedures. However, 94 per cent of 18 rabbits systemically infected by intravenous injection of Candida cells were observed to produce antibody as assessed by the RIA technique. These data encourage further tests with human sera and the continued development of this RIA procedure as a useful tool in the early serodiagnosis of systemic candidiasis. (Auth.)

  2. The Psychology Department Model Advisement Procedure: A Comprehensive, Systematic Approach to Career Development Advisement

    Science.gov (United States)

    Howell-Carter, Marya; Nieman-Gonder, Jennifer; Pellegrino, Jennifer; Catapano, Brittani; Hutzel, Kimberly

    2016-01-01

    The MAP (Model Advisement Procedure) is a comprehensive, systematic approach to developmental student advisement. The MAP was implemented to improve advisement consistency, improve student preparation for internships/senior projects, increase career exploration, reduce career uncertainty, and, ultimately, improve student satisfaction with the…

  3. A Systematic Review of the Anxiolytic-Like Effects of Essential Oils in Animal Models

    Directory of Open Access Journals (Sweden)

    Damião Pergentino de Sousa

    2015-10-01

    The clinical efficacy of standardized essential oils (such as Lavandula officinalis) in treating anxiety disorders strongly suggests that these natural products are an important candidate source for new anxiolytic drugs. A systematic review of essential oils, their bioactive constituents, and anxiolytic-like activity is conducted. The essential oil with the best profile is Lavandula angustifolia, which has already been tested in controlled clinical trials with positive results. Citrus aurantium using different routes of administration also showed significant effects in several animal models, and was corroborated by different research groups. Other promising essential oils are Citrus sinensis and bergamot oil, which showed certain clinical anxiolytic actions; along with Achillea wilhemsii, Alpinia zerumbet, Citrus aurantium, and Spiranthera odoratissima, which, like Lavandula angustifolia, appear to exert anxiolytic-like effects without GABA/benzodiazepine activity, thus differing in their mechanisms of action from the benzodiazepines. The anxiolytic activity of 25 compounds commonly found in essential oils is also discussed.

  4. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  5. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  6. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multiple systematic errors, taking eccentricity, probe offset, the radius of the probe tip head, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and of the component radius on the roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle. The effectiveness of the proposed method is verified by experiment with a standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be improved by about 2.2 μm using the proposed roundness measurement model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
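
    For context, the traditional limacon evaluation that the proposed model is compared against reduces to a linear least-squares fit that removes eccentricity from the measured radial profile. A minimal sketch, assuming NumPy and synthetic profile data:

    ```python
    import numpy as np

    def limacon_roundness(theta, r):
        """Least-squares limacon fit r(theta) ~ R + a*cos(theta) + b*sin(theta).
        (a, b) captures the eccentricity of the part relative to the spindle
        axis; the residuals approximate the roundness profile."""
        A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
        coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
        residuals = r - A @ coeffs
        return coeffs, residuals.max() - residuals.min()  # peak-to-valley error

    # Toy 37 mm profile with eccentricity plus a five-lobed form error [mm].
    theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
    r = 37.0 + 0.005 * np.cos(theta) + 0.002 * np.sin(theta) \
        + 0.001 * np.cos(5 * theta)
    print(limacon_roundness(theta, r))
    ```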

  7. Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.

    Directory of Open Access Journals (Sweden)

    Tanja Bekhuis

    Full Text Available Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features on performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screeing burden, as well as offer quality assurance.We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric(+, indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests.All tests of summary performance (mean F3 surpassed the corresponding baseline, P<0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric(+ features were associated with best performance; mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall.A computer-assisted, decision support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming

  8. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver

    2007-01-01

    The first step of handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far the possibility and tradition for systematic registration of clinical HP activities in the medical records...... and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists...... of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after treatment (8 codes). The objective was to evaluate in an international study the usefulness, applicability and sufficiency of a simple model for the systematic …

  9. Reactor noise diagnostics based on multivariate autoregressive modeling: Application to LOFT [Loss-of-Fluid-Test] reactor process noise

    International Nuclear Information System (INIS)

    Gloeckler, O.; Upadhyaya, B.R.

    1987-01-01

    Multivariate noise analysis of power reactor operating signals is useful for plant diagnostics, for isolating process and sensor anomalies, and for automated plant monitoring. In order to develop a reliable procedure, the previously established techniques for empirical modeling of fluctuation signals in power reactors have been improved. Application of the complete algorithm to operational data from the Loss-of-Fluid-Test (LOFT) Reactor showed that earlier conjectures (based on physical modeling) regarding the perturbation sources in a Pressurized Water Reactor (PWR) affecting coolant temperature and neutron power fluctuations can be systematically explained. This advanced methodology has important implications for plant diagnostics and for system or sensor anomaly isolation. 6 refs., 24 figs
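
    A minimal sketch of the core multivariate autoregressive (MAR/VAR) modeling step, assuming statsmodels and synthetic two-channel fluctuation data; the coupling structure is invented purely to illustrate how the fitted coefficient matrices expose directed relationships between signals:

    ```python
    import numpy as np
    from statsmodels.tsa.api import VAR

    # Synthetic two-channel "fluctuation signals" with a one-step-lag coupling
    # from channel 0 to channel 1, a crude stand-in for, e.g., a coolant
    # temperature perturbation driving neutron power noise.
    rng = np.random.default_rng(0)
    n = 2000
    e = rng.standard_normal((n, 2))
    data = np.zeros((n, 2))
    for t in range(1, n):
        data[t, 0] = 0.8 * data[t - 1, 0] + e[t, 0]
        data[t, 1] = 0.5 * data[t - 1, 1] + 0.3 * data[t - 1, 0] + e[t, 1]

    results = VAR(data).fit(maxlags=10, ic="aic")  # model order chosen by AIC
    print(results.k_ar)      # selected autoregressive order
    print(results.coefs[0])  # lag-1 coefficient matrix; significant
                             # off-diagonal terms indicate directed coupling
    ```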

  10. Measurement properties of maximal cardiopulmonary exercise tests protocols in persons after stroke: A systematic review.

    Science.gov (United States)

    Wittink, Harriet; Verschuren, Olaf; Terwee, Caroline; de Groot, Janke; Kwakkel, Gert; van de Port, Ingrid

    2017-11-21

    To systematically review and critically appraise the literature on measurement properties of cardiopulmonary exercise test protocols for measuring aerobic capacity, VO2max, in persons after stroke. PubMed, Embase and Cinahl were searched from inception up to 15 June 2016. A total of 9 studies were identified reporting on 9 different cardiopulmonary exercise test protocols. VO2max measured with cardiopulmonary exercise test and open spirometry was the construct of interest. The target population was adult persons after stroke. We included all studies that evaluated reliability, measurement error, criterion validity, content validity, hypothesis testing and/or responsiveness of cardiopulmonary exercise test protocols. Two researchers independently screened the literature, assessed methodological quality using the COnsensus-based Standards for the selection of health Measurement INstruments checklist and extracted data on measurement properties of cardiopulmonary exercise test protocols. Most studies reported on only one measurement property. Best-evidence synthesis was derived taking into account the methodological quality of the studies, the results and the consistency of the results. No judgement could be made on which protocol is "best" for measuring VO2max in persons after stroke due to lack of high-quality studies on the measurement properties of the cardiopulmonary exercise test.

  11. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    Directory of Open Access Journals (Sweden)

    Julia Ratter

    2014-09-01

    [Ratter J, Radlinger L, Lucas C (2014) Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review. Journal of Physiotherapy 60: 144–150]

  12. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    Science.gov (United States)

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes.
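
    A toy illustration of the identifiability question addressed above (not the paper's differential-algebra machinery): for a simple one-compartment model, only a combination of the two rate parameters appears in the observable solution, so only that combination can be estimated. Assuming SymPy:

    ```python
    import sympy as sp

    t, mu, nu = sp.symbols("t mu nu", positive=True)
    x = sp.Function("x")

    # One-compartment model with two competing exit rates mu and nu;
    # the observed output is x(t) itself.
    sol = sp.dsolve(sp.Eq(x(t).diff(t), -(mu + nu) * x(t)), x(t), ics={x(0): 1})
    print(sol)  # x(t) = exp(-t*(mu + nu)): only the sum mu + nu appears, so
                # mu and nu are not separately identifiable from this output
    ```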

  13. A Systematic Literature Review of Agile Maturity Model Research

    Directory of Open Access Journals (Sweden)

    Vaughan Henriques

    2017-02-01

    Background/Aim/Purpose: A commonly implemented software process improvement framework is the capability maturity model integrated (CMMI). Existing literature indicates higher levels of CMMI maturity could result in a loss of agility due to its organizational focus. To maintain agility, research has focussed attention on agile maturity models. The objective of this paper is to find the common research themes and conclusions in agile maturity model research. Methodology: This research adopts a systematic approach to agile maturity model research, using Google Scholar, Science Direct, and IEEE Xplore as sources. In total 531 articles were initially found matching the search criteria, which were filtered to 39 articles by applying specific exclusion criteria. Contribution: The article highlights the trends in agile maturity model research, specifically bringing to light the lack of research providing validation of such models. Findings: Two major themes emerge, being the coexistence of agile and CMMI and the development of agile principle-based maturity models. The research trend indicates an increase in agile maturity model articles, particularly in the latter half of the last decade, with concentrations of research coinciding with version updates of CMMI. While there is general consensus around higher CMMI maturity levels being incompatible with true agility, there is evidence of the two coexisting when agile is introduced into already highly matured environments. Future Research: Future research directions for this topic include how to attain higher levels of CMMI maturity using only agile methods, how governance is addressed in agile environments, and whether existing agile maturity models relate to improved project success.

  14. 3S - Systematic, systemic, and systems biology and toxicology.

    Science.gov (United States)

    Smirnova, Lena; Kleinstreuer, Nicole; Corvi, Raffaella; Levchenko, Andre; Fitzpatrick, Suzanne C; Hartung, Thomas

    2018-01-01

    A biological system is more than the sum of its parts - it accomplishes many functions via synergy. Deconstructing the system down to the molecular mechanism level necessitates the complement of reconstructing functions on all levels, i.e., in our conceptualization of biology and its perturbations, our experimental models and computer modelling. Toxicology contains the somewhat arbitrary subclass "systemic toxicities"; however, there is no relevant toxic insult or general disease that is not systemic. At least inflammation and repair are involved that require coordinated signaling mechanisms across the organism. However, the more body components involved, the greater the challenge to recapitulate such toxicities using non-animal models. Here, the shortcomings of current systemic testing and the development of alternative approaches are summarized. We argue that we need a systematic approach to integrating existing knowledge as exemplified by systematic reviews and other evidence-based approaches. Such knowledge can guide us in modelling these systems using bioengineering and virtual computer models, i.e., via systems biology or systems toxicology approaches. Experimental multi-organ-on-chip and microphysiological systems (MPS) provide a more physiological view of the organism, facilitating more comprehensive coverage of systemic toxicities, i.e., the perturbation on organism level, without using substitute organisms (animals). The next challenge is to establish disease models, i.e., micropathophysiological systems (MPPS), to expand their utility to encompass biomedicine. Combining computational and experimental systems approaches and the challenges of validating them are discussed. The suggested 3S approach promises to leverage 21st century technology and systematic thinking to achieve a paradigm change in studying systemic effects.

  15. Systematic overview finds variation in approaches to investigating and reporting on sources of heterogeneity in systematic reviews of diagnostic studies

    NARCIS (Netherlands)

    Naaktgeboren, Christiana A.; van Enst, Wynanda A.; Ochodo, Eleanor A.; de Groot, Joris A. H.; Hooft, Lotty; Leeflang, Mariska M.; Bossuyt, Patrick M.; Moons, Karel G. M.; Reitsma, Johannes B.

    2014-01-01

    To examine how authors explore and report on sources of heterogeneity in systematic reviews of diagnostic accuracy studies. A cohort of systematic reviews of diagnostic tests was systematically identified. Data were extracted on whether an exploration of the sources of heterogeneity was undertaken …

  16. Promoting the uptake of HIV testing among men who have sex with men: systematic review of effectiveness and cost-effectiveness.

    Science.gov (United States)

    Lorenc, Theo; Marrero-Guillamón, Isaac; Aggleton, Peter; Cooper, Chris; Llewellyn, Alexis; Lehmann, Angela; Lindsay, Catriona

    2011-06-01

    What interventions are effective and cost-effective in increasing the uptake of HIV testing among men who have sex with men (MSM)? A systematic review was conducted of the following databases: AEGIS, ASSIA, BL Direct, BNI, Centre for Reviews and Dissemination, Cochrane Database of Systematic Reviews, CINAHL, Current Contents Connect, EconLit, EMBASE, ERIC, HMIC, Medline, Medline In-Process, NRR, PsychINFO, Scopus, SIGLE, Social Policy and Practice, Web of Science, websites, journal hand-searching, citation chasing and expert recommendations. Prospective studies of the effectiveness or cost-effectiveness of interventions (randomised controlled trial (RCT), controlled trial, one-group or any economic analysis) were included if the intervention aimed to increase the uptake of HIV testing among MSM in a high-income (Organization for Economic Co-operation and Development) country. Quality was assessed and data were extracted using standardised tools. Results were synthesised narratively. Twelve effectiveness studies and one cost-effectiveness study were located, covering a range of intervention types. There is evidence that rapid testing and counselling in community settings (one RCT), and intensive peer counselling (one RCT), can increase the uptake of HIV testing among MSM. There are promising results regarding the introduction of opt-out testing in sexually transmitted infection clinics (two one-group studies). Findings regarding other interventions, including bundling HIV tests with other tests, peer outreach in community settings, and media campaigns, are inconclusive. Findings indicate several promising approaches to increasing HIV testing among MSM. However, there is limited evidence overall, and evidence for the effectiveness of key intervention types (particularly peer outreach and media campaigns) remains lacking.

  17. NET model coil test possibilities

    International Nuclear Information System (INIS)

    Erb, J.; Gruenhagen, A.; Herz, W.; Jentzsch, K.; Komarek, P.; Lotz, E.; Malang, S.; Maurer, W.; Noether, G.; Ulbricht, A.; Vogt, A.; Zahn, G.; Horvath, I.; Kwasnitza, K.; Marinucci, C.; Pasztor, G.; Sborchia, C.; Weymuth, P.; Peters, A.; Roeterdink, A.

    1987-11-01

    A single full-size coil for NET/INTOR represents an investment of the order of 40 MUC (Million Unit Costs). Before such an amount of money, or even more for the 16 TF coils, is invested, as many risks as possible must be eliminated by a comprehensive development programme. In the course of such a programme, a coil technology verification test should finally prove the feasibility of NET/INTOR TF coils. This study report deals almost exclusively with such a verification test by model coil testing. These coils will be built from two Nb3Sn conductors based on two concepts already under development and investigation. Two possible coil arrangements are discussed: a cluster facility, where two model coils made from the two Nb3Sn TF conductors are used, with the already tested LCT coils producing a background field; and a solenoid arrangement, where, in addition to the two TF model coils, another model coil made from a PF conductor for the central PF coils of NET/INTOR is used instead of the LCT background coils. Technical advantages and disadvantages are worked out in order to compare and judge both facilities. Cost estimates and time schedules broaden the base for a decision about the realisation of such a facility. (orig.)

  18. GENERATING TEST CASES FOR PLATFORM INDEPENDENT MODEL BY USING USE CASE MODEL

    OpenAIRE

    Hesham A. Hassan,; Zahraa. E. Yousif

    2010-01-01

    Model-based testing refers to testing and test case generation based on a model that describes the behavior of the system. Extensive use of models throughout all the phases of software development starting from the requirement engineering phase has led to increased importance of Model Based Testing. The OMG initiative MDA has revolutionized the way models would be used for software development. Ensuring that all user requirements are addressed in system design and the design is getting suffic...

  19. Acute Myocardial Infarction Readmission Risk Prediction Models: A Systematic Review of Model Performance.

    Science.gov (United States)

    Smith, Lauren N; Makam, Anil N; Darden, Douglas; Mayo, Helen; Das, Sandeep R; Halm, Ethan A; Nguyen, Oanh Kieu

    2018-01-01

    Hospitals are subject to federal financial penalties for excessive 30-day hospital readmissions for acute myocardial infarction (AMI). Prospectively identifying patients hospitalized with AMI at high risk for readmission could help prevent 30-day readmissions by enabling targeted interventions. However, the performance of AMI-specific readmission risk prediction models is unknown. We systematically searched the published literature through March 2017 for studies of risk prediction models for 30-day hospital readmission among adults with AMI. We identified 11 studies of 18 unique risk prediction models across diverse settings primarily in the United States, of which 16 models were specific to AMI. The median overall observed all-cause 30-day readmission rate across studies was 16.3% (range, 10.6%-21.0%). Six models were based on administrative data; 4 on electronic health record data; 3 on clinical hospital data; and 5 on cardiac registry data. Models included 7 to 37 predictors, of which demographics, comorbidities, and utilization metrics were the most frequently included domains. Most models, including the Centers for Medicare and Medicaid Services AMI administrative model, had modest discrimination (median C statistic, 0.65; range, 0.53-0.79). Of the 16 reported AMI-specific models, only 8 models were assessed in a validation cohort, limiting generalizability. Observed risk-stratified readmission rates ranged from 3.0% among the lowest-risk individuals to 43.0% among the highest-risk individuals, suggesting good risk stratification across all models. Current AMI-specific readmission risk prediction models have modest predictive ability and uncertain generalizability given methodological limitations. No existing models provide actionable information in real time to enable early identification and risk-stratification of patients with AMI before hospital discharge, a functionality needed to optimize the potential effectiveness of readmission reduction interventions.
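
    For readers unfamiliar with the C statistic used above to summarize discrimination, the sketch below computes it for a toy logistic regression on synthetic data (assuming scikit-learn and NumPy; predictors and effect sizes are invented):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 3))       # toy predictors (e.g. age, comorbidity score)
    logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]   # invented true signal
    y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)  # readmitted?

    model = LogisticRegression().fit(X, y)
    c_stat = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"C statistic = {c_stat:.2f}")  # 0.5 is chance; ~0.65 counts as modest
    ```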

  20. The Validity and Responsiveness of Isometric Lower Body Multi-Joint Tests of Muscular Strength: a Systematic Review.

    Science.gov (United States)

    Drake, David; Kennedy, Rodney; Wallace, Eric

    2017-12-01

    Researchers and practitioners working in sports medicine and science require valid tests to determine the effectiveness of interventions and enhance understanding of mechanisms underpinning adaptation. Such decision making is influenced by the supportive evidence describing the validity of tests within current research. The objective of this study is to review the validity of lower body isometric multi-joint tests' ability to assess muscular strength and determine the current level of supporting evidence. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed in a systematic fashion to search, assess and synthesize existing literature on this topic. Electronic databases such as Web of Science, CINAHL and PubMed were searched up to 18 March 2015. Potential inclusions were screened against eligibility criteria relating to types of test, measurement instrument, properties of validity assessed and population group, and were required to be published in English. The Consensus-based Standards for the Selection of health Measurement INstruments (COSMIN) checklist was used to assess methodological quality and measurement property rating of included studies. Studies rated as fair or better in methodological quality were included in the best evidence synthesis. Fifty-nine studies met the eligibility criteria for quality appraisal. The ten studies that rated fair or better in methodological quality were included in the best evidence synthesis. The most frequently investigated lower body isometric multi-joint tests for validity were the isometric mid-thigh pull and the isometric squat. The validity of each of these tests was strong in terms of reliability and construct validity. The evidence for responsiveness of tests was found to be moderate for the isometric squat test and unknown for the isometric mid-thigh pull. No tests using the isometric leg press met the criteria for inclusion in the best evidence synthesis. Researchers and …

  1. Controls on stream network branching angles, tested using landscape evolution models

    Science.gov (United States)

    Theodoratos, Nikolaos; Seybold, Hansjörg; Kirchner, James W.

    2016-04-01

    Stream networks are striking landscape features. The topology of stream networks has been extensively studied, but their geometry has received limited attention. Analyses of nearly 1 million stream junctions across the contiguous United States [1] have revealed that stream branching angles vary systematically with climate and topographic gradients at continental scale. Stream networks in areas with wet climates and gentle slopes tend to have wider branching angles than in areas with dry climates or steep slopes, but the mechanistic linkages underlying these empirical correlations remain unclear. Under different climatic and topographic conditions different runoff generation mechanisms and, consequently, transport processes are dominant. Models [2] and experiments [3] have shown that the relative strength of channel incision versus diffusive hillslope transport controls the spacing between valleys, an important geometric property of stream networks. We used landscape evolution models (LEMs) to test whether similar factors control network branching angles as well. We simulated stream networks using a wide range of hillslope diffusion and channel incision parameters. The resulting branching angles vary systematically with the parameters, but by much less than the regional variability in real-world stream networks. Our results suggest that the competition between hillslope and channeling processes influences branching angles, but that other mechanisms may also be needed to account for the variability in branching angles observed in the field. References: [1] H. Seybold, D. H. Rothman, and J. W. Kirchner, 2015, Climate's watermark in the geometry of river networks, Submitted manuscript. [2] J. T. Perron, W. E. Dietrich, and J. W. Kirchner, 2008, Controls on the spacing of first-order valleys, Journal of Geophysical Research, 113, F04016. [3] K. E. Sweeney, J. J. Roering, and C. Ellis, 2015, Experimental evidence for hillslope control of landscape scale, Science, 349
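
    As a concrete illustration of the quantity under study, a branching angle can be measured as the planform angle between the two tributary directions at a junction. A minimal sketch, assuming NumPy; the coordinates and function name are illustrative:

    ```python
    import numpy as np

    def branching_angle(junction, up1, up2):
        """Planform angle (degrees) between two tributaries meeting at a
        junction. Each argument is an (x, y) coordinate; up1/up2 are points
        a short distance upstream along each tributary."""
        v1 = np.asarray(up1, float) - np.asarray(junction, float)
        v2 = np.asarray(up2, float) - np.asarray(junction, float)
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    print(branching_angle((0, 0), (1, 1), (-1, 1)))  # 90.0 for perpendicular tributaries
    ```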

  2. KIDMED TEST; PREVALENCE OF LOW ADHERENCE TO THE MEDITERRANEAN DIET IN CHILDREN AND YOUNG; A SYSTEMATIC REVIEW.

    Science.gov (United States)

    García Cabrera, S; Herrera Fernández, N; Rodríguez Hernández, C; Nissensohn, M; Román-Viñas, B; Serra-Majem, L

    2015-12-01

    During the last decades, a rapid and important modification of dietary habits has been observed in the Mediterranean countries, especially among young people. Several authors have evaluated the pattern of adherence to the Mediterranean Diet in this population group by using the KIDMED test. The purpose of this study was to evaluate the adherence to the Mediterranean Diet among children and adolescents by using the KIDMED test, through a systematic review and meta-analysis. The PubMed database was accessed until January 2014. Only cross-sectional studies evaluating children and young people were included. A random effects model was considered. Eighteen cross-sectional studies were included. The population age ranged from 2 to 25 years. The total sample included 24 067 people. The overall percentage of high adherence to the Mediterranean Diet was 10% (95% CI 0.07-0.13), while that of low adherence was 21% (95% CI 0.14 to 0.27). In the low adherence group, further analyses were performed by defined subgroups, finding differences by population age and geographical area. The results obtained showed important differences between high and low adherence to the Mediterranean Diet levels, although successive subgroup analyses were performed. There is a clear trend towards the abandonment of the Mediterranean lifestyle.
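
    A minimal sketch of the random-effects pooling step referred to above, using the standard DerSimonian-Laird estimator on invented study proportions (the review's exact method is not specified beyond "a random effects model"):

    ```python
    import numpy as np

    events = np.array([30, 55, 12, 80])   # children with low adherence (toy)
    n = np.array([150, 300, 90, 400])     # study sample sizes (toy)

    p = events / n
    var = p * (1 - p) / n                 # within-study variance of a proportion
    w = 1 / var                           # fixed-effect weights

    # DerSimonian-Laird between-study variance tau^2
    p_fixed = np.sum(w * p) / np.sum(w)
    Q = np.sum(w * (p - p_fixed) ** 2)
    df = len(p) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (var + tau2)               # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    print(f"pooled prevalence {p_re:.3f} "
          f"(95% CI {p_re - 1.96*se:.3f} to {p_re + 1.96*se:.3f})")
    ```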

  3. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...

  4. A Systematic Review of Point of Care Testing for Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis

    Directory of Open Access Journals (Sweden)

    Sasha Herbst de Cortina

    2016-01-01

    Objectives. Systematic review of point of care (POC) diagnostic tests for sexually transmitted infections: Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Trichomonas vaginalis (TV). Methods. Literature search on PubMed for articles from January 2010 to August 2015, including original research in English on POC diagnostics for sexually transmitted CT, NG, and/or TV. Results. We identified 33 publications with original research on POC diagnostics for CT, NG, and/or TV. Thirteen articles evaluated test performance, yielding at least one test for each infection with sensitivity and specificity ≥90%. Each infection also had currently available tests with sensitivities <60%. Three articles analyzed cost effectiveness, and five publications discussed acceptability and feasibility. POC testing was acceptable to both providers and patients and was also demonstrated to be cost effective. Fourteen proof of concept articles introduced new tests. Conclusions. Highly sensitive and specific POC tests are available for CT, NG, and TV, but improvement is possible. Future research should focus on acceptability, feasibility, and cost of POC testing. While pregnant women specifically have not been studied, the results available in nonpregnant populations are encouraging for the ability to test and treat women in antenatal care to prevent adverse pregnancy and neonatal outcomes.

  5. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing the resistance of the material to non-ductile fracture. The testing included base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  6. A numerical test method of California bearing ratio on graded crushed rocks using particle flow modeling

    Directory of Open Access Journals (Sweden)

    Yingjun Jiang

    2015-04-01

    Full Text Available In order to better understand the mechanical properties of graded crushed rocks (GCRs and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR test on GCRs. The effects of different testing conditions and micro-mechanical parameters used in the model on the CBR numerical results have been systematically studied. The reliability of the numerical technique is verified. The numerical results suggest that the influences of the loading rate and Poisson's ratio on the CBR numerical test results are not significant. As such, a loading rate of 1.0–3.0 mm/min, a piston diameter of 5 cm, a specimen height of 15 cm and a specimen diameter of 15 cm are adopted for the CBR numerical test. The numerical results reveal that the CBR values increase with the friction coefficient at the contact and shear modulus of the rocks, while the influence of Poisson's ratio on the CBR values is insignificant. The close agreement between the CBR numerical results and experimental results suggests that the numerical simulation of the CBR values is promising to help assess the mechanical properties of GCRs and to optimize the grading design. Besides, the numerical study can provide useful insights on the mesoscopic mechanism.
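
    As background on the quantity being simulated (standard CBR practice, not taken from the paper; the reference pressures below are the usual standard values): the CBR is the piston pressure at a standard penetration depth divided by the pressure a standard crushed-rock material exhibits at the same penetration, expressed as a percentage.

    ```python
    # CBR from a load-penetration curve: interpolate piston pressure at the
    # standard penetration depths and divide by the reference pressures.
    import numpy as np

    penetration_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
    pressure_mpa   = np.array([0.8, 1.9, 3.1, 4.2, 5.2, 6.0, 7.4, 8.5])  # invented

    REF = {2.5: 6.9, 5.0: 10.3}   # standard reference pressures (MPa)

    cbr = max(np.interp(d, penetration_mm, pressure_mpa) / p * 100.0
              for d, p in REF.items())
    print(f"CBR = {cbr:.1f}%")    # by convention the larger of the two ratios governs
    ```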

  7. Systematic Multi‐Scale Model Development Strategy for the Fragrance Spraying Process and Transport

    DEFF Research Database (Denmark)

    Heitzig, M.; Rong, Y.; Gregson, C.

    2012-01-01

    The fast and efficient development and application of reliable models with appropriate degree of detail to predict the behavior of fragrance aerosols are challenging problems of high interest to the related industries. A generic modeling template for the systematic derivation of specific fragrance......‐aided modeling framework, which is structured based on workflows for different general modeling tasks. The benefits of the fragrance spraying template are highlighted by a case study related to the derivation of a fragrance aerosol model that is able to reflect measured dynamic droplet size distribution profiles...... aerosol models is proposed. The main benefits of the fragrance spraying template are the speed‐up of the model development/derivation process, the increase in model quality, and the provision of structured domain knowledge where needed. The fragrance spraying template is integrated in a generic computer...

  8. Geochemical Testing And Model Development - Residual Tank Waste Test Plan

    International Nuclear Information System (INIS)

    Cantrell, K.J.; Connelly, M.P.

    2010-01-01

    This Test Plan describes the testing and chemical analyses for release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in long-term performance assessment and conceptual model development.

  9. Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Ferrer, R.M.; Cogliati, J.J.; Bess, J.D.; Ougouag, A.M.

    2010-01-01

    The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine the INL's current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 fuel column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC) and Discrete Ordinates (Sn) methods. A fine group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with Monte Carlo (MC) methods, but a consistent bias of 2-3% with respect to the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral

  10. Business model stress testing : A practical approach to test the robustness of a business model

    NARCIS (Netherlands)

    Haaker, T.I.; Bouwman, W.A.G.A.; Janssen, W; de Reuver, G.A.

    Business models and business model innovation are increasingly gaining attention in practice as well as in academic literature. However, the robustness of business models (BM) is seldom tested vis-à-vis the fast and unpredictable changes in digital technologies, regulation and markets. The

  11. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review

    DEFF Research Database (Denmark)

    Frederiksen, Maria Eiholm; Lynge, E; Rebolj, M

    2012-01-01

    Please cite this paper as: Frederiksen M, Lynge E, Rebolj M. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review. BJOG 2011; DOI: 10.1111/j.1471-0528.2011.03130.x. Background If human papillomavirus (HPV) testing will replace...... cytology in primary cervical screening, the frequency of low-grade abnormal screening tests will double. Several available alternatives for the follow-up of low-grade abnormal screening tests have similar outcomes. In this situation, women's preferences have been proposed as a guide for management....... Selection criteria Studies asking women to state a preference between active follow-up and observation for the management of low-grade abnormalities on screening cytology or HPV tests. Data collection and analysis Information on study design, participants and outcomes was retrieved using a prespecified form...

  12. Eating disorders among fashion models: a systematic review of the literature.

    Science.gov (United States)

    Zancu, Simona Alexandra; Enea, Violeta

    2017-09-01

    In the light of recent concerns regarding eating disorders among fashion models and professional regulation of the fashion model occupation, an examination of the scientific evidence on this issue is necessary. The article reviews findings on the prevalence of eating disorders and body image concerns among professional fashion models. A systematic literature search was conducted using the ProQUEST, EBSCO, PsycINFO, SCOPUS, and Gale Cengage electronic databases. The search yielded a very low number of studies on fashion models and eating disorders published between 1980 and 2015, with seven articles included in this review. Overall, the results of these studies do not indicate a higher prevalence of eating disorders among fashion models compared to non-models. Fashion models have a positive body image and generally do not report more dysfunctional eating behaviors than controls. However, fashion models are on average slightly underweight, with significantly lower BMI than controls, give higher importance to appearance and a thin body shape, and thus have a higher prevalence of partial-syndrome eating disorders than controls. Despite public concerns, research on eating disorders among professional fashion models is extremely scarce and the results cannot be generalized to all models. The existing research fails to clarify the matter of eating disorders among fashion models and, given the small number of studies, further research is needed.

  13. Effect of a high-fat diet and alcohol on cutaneous repair: A systematic review of murine experimental models.

    Directory of Open Access Journals (Sweden)

    Daiane Figueiredo Rosa

    Full Text Available Chronic alcohol intake associated with an inappropriate diet can cause lesions in multiple organs and tissues and complicate the tissue repair process. In a systematic review, we analyzed the relevance of alcohol and high-fat consumption to cutaneous wound repair, compared the main methodologies used, and identified the most important parameters tested. Preclinical investigations with murine models were assessed to analyze whether the current evidence supports clinical trials. The studies were selected from the MEDLINE/PubMed and Scopus databases. All 15 identified articles had their data extracted. Reporting bias was investigated according to the ARRIVE (Animal Research: Reporting of In Vivo Experiments) strategy. In general, animals offered a high-fat diet and alcohol showed decreased cutaneous wound closure, delayed skin contraction, chronic inflammation and incomplete re-epithelialization. In further studies, a standardized experimental design is needed to establish comparable study groups and advance the overall knowledge base, facilitating translation of data from animal models to human clinical conditions.

  14. A Systematic Review of Behavioral Interventions to Reduce Condomless Sex and Increase HIV Testing for Latino MSM.

    Science.gov (United States)

    Pérez, Ashley; Santamaria, E Karina; Operario, Don

    2017-12-15

    Latino men who have sex with men (MSM) in the United States are disproportionately affected by HIV, and there have been calls to improve availability of culturally sensitive HIV prevention programs for this population. This article provides a systematic review of intervention programs to reduce condomless sex and/or increase HIV testing among Latino MSM. We searched four electronic databases using a systematic review protocol, screened 1777 unique records, and identified ten interventions analyzing data from 2871 Latino MSM. Four studies reported reductions in condomless anal intercourse, and one reported reductions in number of sexual partners. All studies incorporated surface structure cultural features such as bilingual study recruitment, but the incorporation of deep structure cultural features, such as machismo and sexual silence, was lacking. There is a need for rigorously designed interventions that incorporate deep structure cultural features in order to reduce HIV among Latino MSM.

  15. MORPHOLOGY OF GALAXY CLUSTERS: A COSMOLOGICAL MODEL-INDEPENDENT TEST OF THE COSMIC DISTANCE-DUALITY RELATION

    International Nuclear Information System (INIS)

    Meng Xiaolei; Zhang Tongjie; Zhan Hu; Wang Xin

    2012-01-01

    Aiming at comparing different morphological models of galaxy clusters, we use two new methods to make a cosmological model-independent test of the distance-duality (DD) relation. The luminosity distances come from the Union2 compilation of Supernovae Type Ia. The angular diameter distances are given by two cluster models (De Filippis et al. and Bonamente et al.). The advantage of our methods is that they can reduce statistical errors. Concerning the morphological hypotheses for cluster models, it is mainly focused on the comparison between the elliptical β-model and spherical β-model. The spherical β-model is divided into two groups in terms of different reduction methods of angular diameter distances, i.e., the conservative spherical β-model and corrected spherical β-model. Our results show that the DD relation is consistent with the elliptical β-model at 1σ confidence level (CL) for both methods, whereas for almost all spherical β-model parameterizations, the DD relation can only be accommodated at 3σ CL, particularly for the conservative spherical β-model. In order to minimize systematic uncertainties, we also apply the test to the overlap sample, i.e., the same set of clusters modeled by both De Filippis et al. and Bonamente et al. It is found that the DD relation is compatible with the elliptically modeled overlap sample at 1σ CL; however, for most of the parameterizations the DD relation cannot be accommodated even at 3σ CL for any of the two spherical β-models. Therefore, it is reasonable that the marked triaxial ellipsoidal model is a better geometrical hypothesis describing the structure of the galaxy cluster compared with the spherical β-model if the DD relation is valid in cosmological observations.
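
    For orientation, the cosmic distance-duality relation referred to above is the standard reciprocity result (not specific to this paper) linking the luminosity distance and the angular diameter distance; the test checks whether the consistency parameter $\eta$ is unity:

    $$ d_L(z) = (1+z)^2\, d_A(z), \qquad \eta(z) \equiv \frac{d_L(z)}{(1+z)^2\, d_A(z)} = 1. $$

    A statistically significant deviation of $\eta$ from 1 would point either to systematics in the cluster modeling (for example, the assumed β-model geometry) or to non-standard physics.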

  16. Impact of Enterovirus Testing on Resource Use in Febrile Young Infants: A Systematic Review.

    Science.gov (United States)

    Wallace, Sowdhamini S; Lopez, Michelle A; Caviness, A Chantal

    2017-02-01

    Enterovirus infection commonly causes fever in infants aged 0 to 90 days and, without testing, is difficult to differentiate from serious bacterial infection. To determine the cost savings of routine enterovirus testing and identify subgroups of infants with greater potential impact from testing among infants 0 to 90 days old with fever. Studies were identified systematically from published and unpublished literature by using Embase, Medline, the Cochrane database, and conference proceedings. Inclusion criteria were original studies, in any language, of enterovirus infection including the outcomes of interest in infants aged 0 to 90 days. Standardized instruments were used to appraise each study. The evidence quality was evaluated using Grading of Recommendations Assessment, Development, and Evaluation criteria. Two investigators independently searched the literature, screened and critically appraised the studies, extracted the data, and applied the Grading of Recommendations Assessment, Development, and Evaluation criteria. Of the 257 unique studies identified and screened, 32 were completely reviewed and 8 were included. Routine enterovirus testing was associated with reduced hospital length of stay and cost savings during peak enterovirus season. Cerebrospinal fluid pleocytosis was a poor predictor of enterovirus meningitis. The studies were all observational and the evidence was of low quality. Enterovirus polymerase chain reaction testing, independent of cerebrospinal fluid pleocytosis, can reduce length of stay and achieve cost savings, especially during times of high enterovirus prevalence. Additional study is needed to identify subgroups that may achieve greater cost savings from testing to additionally enhance the efficiency of testing. Copyright © 2017 by the American Academy of Pediatrics.

  17. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but unpredictable, aerodynamically caused model behavior can sometimes cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
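
    The closing idea, two detectors on different timescales, can be illustrated with a toy digital analogue. The circuit described in the record is analog; the class, thresholds and window length below are invented for illustration only:

    ```python
    # Toy two-timescale overload detector: trip instantly on a hard limit,
    # and trip on a moving average to catch sustained (non-decaying) loads.
    from collections import deque

    class OverloadDetector:
        def __init__(self, hard_limit: float, avg_limit: float, window: int):
            self.hard_limit = hard_limit   # fast path: instantaneous trip
            self.avg_limit = avg_limit     # slow path: sustained-load trip
            self.buf = deque(maxlen=window)

        def sample(self, load: float) -> str:
            self.buf.append(abs(load))
            if abs(load) >= self.hard_limit:
                return "TRIP: instantaneous overload"
            if len(self.buf) == self.buf.maxlen and \
               sum(self.buf) / len(self.buf) >= self.avg_limit:
                return "TRIP: sustained overload (possible aeroelastic instability)"
            return "OK"

    det = OverloadDetector(hard_limit=1000.0, avg_limit=600.0, window=50)
    ```

    A decaying startup transient exceeds neither limit for long, so the test continues; a sustained oscillatory load eventually trips the slower, averaged path.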

  18. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models : Stop Developing, Start Evaluating

    NARCIS (Netherlands)

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-01-01

    Background: Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). Objectives: To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Methods: Four

  19. Primary care models for treating opioid use disorders: What actually works? A systematic review.

    Directory of Open Access Journals (Sweden)

    Pooja Lagisetty

    Full Text Available Primary care-based models for Medication-Assisted Treatment (MAT) have been shown to reduce mortality for Opioid Use Disorder (OUD) and have equivalent efficacy to MAT in specialty substance treatment facilities. The objective of this study is to systematically analyze current evidence-based, primary care OUD MAT interventions and identify program structures and processes associated with improved patient outcomes in order to guide future policy and implementation in primary care settings. PubMed, EMBASE, CINAHL, and PsychInfo. We included randomized controlled or quasi experimental trials and observational studies evaluating OUD treatment in primary care settings treating adult patient populations and assessed structural domains using an established systems engineering framework. We included 35 interventions (10 RCTs and 25 quasi-experimental interventions) that all tested MAT, buprenorphine or methadone, in primary care settings across 8 countries. Most included interventions used joint multi-disciplinary (specialty addiction services combined with primary care) and coordinated care by physician and non-physician provider delivery models to provide MAT. Despite large variability in reported patient outcomes, processes, and tasks/tools used, similar key design factors arose among successful programs, including integrated clinical teams with support staff who were often advanced practice clinicians (nurses and pharmacists) as clinical care managers, incorporating patient "agreements," and using home inductions to make treatment more convenient for patients and providers. The findings suggest that multidisciplinary and coordinated care delivery models are an effective strategy to implement OUD treatment and increase MAT access in primary care, but research directly comparing specific structures and processes of care models is still needed.

  20. Effectiveness and cost-effectiveness of serum B-type natriuretic peptide testing and monitoring in patients with heart failure in primary and secondary care: an evidence synthesis, cohort study and cost-effectiveness model.

    Science.gov (United States)

    Pufulete, Maria; Maishman, Rachel; Dabner, Lucy; Mohiuddin, Syed; Hollingworth, William; Rogers, Chris A; Higgins, Julian; Dayer, Mark; Macleod, John; Purdy, Sarah; McDonagh, Theresa; Nightingale, Angus; Williams, Rachael; Reeves, Barnaby C

    2017-08-01

    Heart failure (HF) affects around 500,000 people in the UK. HF medications are frequently underprescribed and B-type natriuretic peptide (BNP)-guided therapy may help to optimise treatment. To evaluate the clinical effectiveness and cost-effectiveness of BNP-guided therapy compared with symptom-guided therapy in HF patients. Systematic review, cohort study and cost-effectiveness model. A literature review and usual care in the NHS. (a) HF patients in randomised controlled trials (RCTs) of BNP-guided therapy; and (b) patients having usual care for HF in the NHS. Systematic review: BNP-guided therapy or symptom-guided therapy in primary or secondary care. Cohort study: BNP monitored (≥ 6 months' follow-up, three or more BNP tests and two or more tests per year), BNP tested (≥ 1 test but not BNP monitored) or never tested. Cost-effectiveness model: BNP-guided therapy in specialist clinics. Mortality, hospital admission (all cause and HF related) and adverse events; and quality-adjusted life-years (QALYs) for the cost-effectiveness model. Systematic review: Individual participant or aggregate data from eligible RCTs. Cohort study: The Clinical Practice Research Datalink, Hospital Episode Statistics and National Heart Failure Audit (NHFA). A systematic literature search (five databases, trial registries, grey literature and reference lists of publications) for published and unpublished RCTs. Five RCTs contributed individual participant data (IPD) and eight RCTs contributed aggregate data (1536 participants were randomised to BNP-guided therapy and 1538 participants were randomised to symptom-guided therapy). For all-cause mortality, the hazard ratio (HR) for BNP-guided therapy was 0.87 [95% confidence interval (CI) 0.73 to 1.04].

  1. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-II analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This presentation will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.

  2. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration; Calafiura, Paolo; Delsart, Pierre-Antoine; Elsing, Markus; Koeneke, Karsten; Krasznahorkay, Attila; Krumnack, Nils; Lancon, Eric; Lavrijsen, Wim; Laycock, Paul; Lei, Xiaowen; Strandberg, Sara Kristina; Verkerke, Wouter; Vivarelli, Iacopo; Woudstra, Martin

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.
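
    The two records above describe the same ATLAS development (a presentation and the corresponding paper). To make the dual-use pattern concrete, here is a schematic sketch, written in Python for uniformity with the other examples in this collection; ATLAS's actual implementation is C++ against the Athena and ROOT frameworks, and every name below is invented for illustration:

    ```python
    # Schematic dual-use tool pattern: a tool depends only on an abstract
    # interface (including a systematics hook), so the same tool can be
    # driven by either of two analysis frameworks.
    from abc import ABC, abstractmethod

    class ISystematicsAware(ABC):
        @abstractmethod
        def apply_systematic_variation(self, name: str) -> bool:
            """Switch to a named systematic variation; return False if unknown."""

    class JetCalibrationTool(ISystematicsAware):
        """Toy tool: scales jet pT according to the active variation."""
        _VARIATIONS = {"NOMINAL": 1.00, "JES_UP": 1.02, "JES_DOWN": 0.98}

        def __init__(self) -> None:
            self._scale = 1.0

        def apply_systematic_variation(self, name: str) -> bool:
            if name not in self._VARIATIONS:
                return False
            self._scale = self._VARIATIONS[name]
            return True

        def calibrate(self, jet_pt: float) -> float:
            return jet_pt * self._scale

    # Either an Athena-like or a ROOT-like event loop can drive the tool.
    tool = JetCalibrationTool()
    for variation in ("NOMINAL", "JES_UP", "JES_DOWN"):
        assert tool.apply_systematic_variation(variation)
        print(variation, tool.calibrate(100.0))
    ```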

  3. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  4. The Alcock Paczyński test with Baryon Acoustic Oscillations: systematic effects for future surveys

    Energy Technology Data Exchange (ETDEWEB)

    Lepori, Francesca; Viel, Matteo; Baccigalupi, Carlo [SISSA—International School for Advanced Studies, Via Bonomea 265, 34136 Trieste (Italy); Dio, Enea Di [INAF—Osservatorio Astronomico di Trieste, Via G.B. Tiepolo 11, I-34143 Trieste (Italy); Durrer, Ruth, E-mail: flepori@sissa.it, E-mail: enea.didio@oats.inaf.it, E-mail: viel@oats.inaf.it, E-mail: carlo.baccigalupi@sissa.it, E-mail: Ruth.Durrer@unige.ch [Université de Genève, Département de Physique Théorique and CAP, 24 quai Ernest-Ansermet, CH-1211 Genève 4 (Switzerland)

    2017-02-01

    We investigate the Alcock Paczyński (AP) test applied to the Baryon Acoustic Oscillation (BAO) feature in the galaxy correlation function. By using a general formalism that includes relativistic effects, we quantify the importance of the linear redshift space distortions and gravitational lensing corrections to the galaxy number density fluctuation. We show that redshift space distortions significantly affect the shape of the correlation function, both in radial and transverse directions, causing different values of galaxy bias to induce offsets of up to 1% in the AP test. On the other hand, we find that the lensing correction around the BAO scale modifies the amplitude but not the shape of the correlation function and therefore does not introduce any systematic effect. Furthermore, we investigate in detail how the AP test is sensitive to redshift binning: a window function in the transverse direction suppresses correlations and shifts the peak position toward smaller angular scales. We determine the correction that should be applied in order to account for this effect when performing the test with data from three future planned galaxy redshift surveys: Euclid, the Dark Energy Spectroscopic Instrument (DESI) and the Square Kilometer Array (SKA).
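
    For orientation (standard cosmology, not specific to this paper): the AP test exploits the fact that a feature of known comoving size, such as the BAO scale $r_s$, must subtend consistent radial and transverse extents, so the observables combine as

    $$ \Delta z = \frac{H(z)\, r_s}{c}, \qquad \Delta\theta = \frac{r_s}{(1+z)\, d_A(z)}, \qquad F_{\mathrm{AP}}(z) \equiv \frac{\Delta z}{\Delta\theta} = \frac{(1+z)\, d_A(z)\, H(z)}{c}, $$

    and any anisotropy between the measured radial and transverse BAO scales constrains $F_{\mathrm{AP}}$ independently of a fiducial cosmological model.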

  5. Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological review of health technology assessments

    Directory of Open Access Journals (Sweden)

    Bethany Shinkins

    2017-04-01

    Full Text Available Background Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: (1) what evidence aside from test accuracy was searched for and synthesised, (2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model, (3) how/whether threshold effects were explored, (4) how the potential dependency between multiple tests in a pathway was accounted for, and (5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care, but the majority found limited evidence on test accuracy in primary care settings.

  6. Systematic integration of experimental data and models in systems biology.

    Science.gov (United States)

    Li, Peter; Dada, Joseph O; Jameson, Daniel; Spasic, Irena; Swainston, Neil; Carroll, Kathleen; Dunn, Warwick; Khan, Farid; Malys, Naglis; Messiha, Hanan L; Simeonidis, Evangelos; Weichart, Dieter; Winder, Catherine; Wishart, Jill; Broomhead, David S; Goble, Carole A; Gaskell, Simon J; Kell, Douglas B; Westerhoff, Hans V; Mendes, Pedro; Paton, Norman W

    2010-11-29

    The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components, their biochemical reactions, and the corresponding parameters. Automating the assembly and use of systems biology models depends upon data integration processes involving the interoperation of data and analytical resources. Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.
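
    As a small illustration of the kind of automated model assembly described above (a minimal sketch assuming the python-libsbml package; the actual pipeline uses Taverna workflows, SABIO-RK and COPASIWS, none of which appear here), an SBML model can be built programmatically:

    ```python
    # Build a tiny SBML (Level 3 Version 1) model in code with libsbml.
    import libsbml

    doc = libsbml.SBMLDocument(3, 1)
    model = doc.createModel()
    model.setId("glycolysis_fragment")

    comp = model.createCompartment()
    comp.setId("cytosol")
    comp.setSize(1.0)
    comp.setConstant(True)

    for sid, conc in [("glucose", 5.0), ("g6p", 0.0)]:
        s = model.createSpecies()
        s.setId(sid)
        s.setCompartment("cytosol")
        s.setInitialConcentration(conc)   # parameterisation step, in miniature
        s.setHasOnlySubstanceUnits(False)
        s.setBoundaryCondition(False)
        s.setConstant(False)

    print(libsbml.writeSBMLToString(doc))  # serialized SBML, ready for a simulator
    ```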

  7. Design of Test Tracks for Odometry Calibration of Wheeled Mobile Robots

    Directory of Open Access Journals (Sweden)

    Changbae Jung

    2011-09-01

    Full Text Available Pose estimation for mobile robots depends basically on accurate odometry information. Odometry from the wheel's encoder is widely used for simple and inexpensive implementation. As the travel distance increases, odometry suffers from kinematic modeling errors regarding the wheels. Therefore, in order to improve the odometry accuracy, it is necessary that systematic errors be calibrated. The UMBmark test is a practical and useful scheme for calibrating the systematic errors of two-wheeled mobile robots. However, the square path track size used in the test has not been validated. A consideration of the calibration equations, experimental conditions, and modeling errors is essential to improve the calibration accuracy. In this paper, we analyze the effect on calibration performance of the approximation errors of calibration equations and nonsystematic errors under experimental conditions. Then, we propose a test track size for improving the accuracy of odometry calibration. From simulation and experimental results, we show that the proposed test track size significantly improves the calibration accuracy of odometry under a normal range of kinematic modeling errors for robots.
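
    For context on the calibration procedure the abstract builds on (a sketch from memory of Borenstein and Feng's UMBmark derivation, so treat the exact formulas as an assumption rather than a restatement of this paper): the robot drives an L x L square path clockwise and counter-clockwise several times, the centers of gravity of the return-position errors are computed, and two error angles are extracted that separate wheelbase error from unequal wheel diameters.

    ```python
    # Hedged UMBmark-style sketch: estimate the two systematic-error angles
    # from return-position errors of CW and CCW square-path runs.
    import numpy as np

    def cg(errors):
        """Center of gravity (mean) of (x, y) return-position errors."""
        return np.mean(np.asarray(errors), axis=0)

    # Invented final-position errors (m) from five runs in each direction.
    cw  = [(0.04, 0.09), (0.05, 0.08), (0.03, 0.10), (0.05, 0.09), (0.04, 0.11)]
    ccw = [(-0.05, 0.08), (-0.04, 0.09), (-0.06, 0.07), (-0.05, 0.10), (-0.04, 0.08)]

    L = 4.0                       # side length of the square test track (m)
    xcg_cw = cg(cw)[0]
    xcg_ccw = cg(ccw)[0]

    # alpha: error from wheelbase uncertainty; beta: unequal wheel diameters
    # (forms as recalled from the UMBmark paper -- verify before relying on them).
    alpha = np.degrees((xcg_cw + xcg_ccw) / (-4.0 * L))
    beta  = np.degrees((xcg_cw - xcg_ccw) / (-4.0 * L))
    print(f"alpha = {alpha:.3f} deg, beta = {beta:.3f} deg")
    ```

    The paper's contribution, per the abstract, is choosing the track size L so that these estimates are least contaminated by approximation and nonsystematic errors.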

  8. A comprehensive model for executing knowledge management audits in organizations: a systematic review.

    Science.gov (United States)

    Shahmoradi, Leila; Ahmadi, Maryam; Sadoughi, Farahnaz; Piri, Zakieh; Gohari, Mahmood Reza

    2015-01-01

    A knowledge management audit (KMA) is the first phase in knowledge management implementation. Incomplete or incomprehensive execution of the KMA has caused many knowledge management programs to fail. A study was undertaken to investigate how KMAs are performed systematically in organizations and present a comprehensive model for performing KMAs based on a systematic review. Studies were identified by searching electronic databases such as Emerald, LISA, and the Cochrane library and e-journals such as the Oxford Journal and hand searching of printed journals, theses, and books in the Tehran University of Medical Sciences digital library. The sources used in this study consisted of studies available through the digital library of the Tehran University of Medical Sciences that were published between 2000 and 2013, including both Persian- and English-language sources, as well as articles explaining the steps involved in performing a KMA. A comprehensive model for KMAs is presented in this study. To successfully execute a KMA, it is necessary to perform the appropriate preliminary activities in relation to the knowledge management infrastructure, determine the knowledge management situation, and analyze and use the available data on this situation.

  9. Maintaining Sexual Desire in Long-Term Relationships: A Systematic Review and Conceptual Model.

    Science.gov (United States)

    Mark, Kristen P; Lasslo, Julie A

    The most universally experienced sexual response is sexual desire. Though research on this topic has increased in recent years, low and high desire are still problematized in clinical settings and the broader culture. However, despite knowledge that sexual desire ebbs and flows both within and between individuals, and that problems with sexual desire are strongly linked to problems with relationships, there is a critical gap in understanding the factors that contribute to maintaining sexual desire in the context of relationships. This article offers a systematic review of the literature to provide researchers, educators, clinicians, and the broader public with an overview and a conceptual model of nonclinical sexual desire in long-term relationships. First, we systematically identified peer-reviewed, English-language articles that focused on the maintenance of sexual desire in the context of nonclinical romantic relationships. Second, we reviewed a total of 64 articles that met inclusion criteria and synthesized them into factors using a socioecological framework categorized as individual, interpersonal, and societal in nature. These findings are used to build a conceptual model of maintaining sexual desire in long-term relationships. Finally, we discuss the limitations of the existing research and suggest clear directions for future research.

  10. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews.

    Science.gov (United States)

    Kneale, Dylan; Thomas, James; Harris, Katherine

    2015-01-01

    Logic models are becoming an increasingly common feature of systematic reviews, as is the use of programme theory more generally in systematic reviewing. Logic models offer a framework to help reviewers to 'think' conceptually at various points during the review, and can be a useful tool in defining study inclusion and exclusion criteria, guiding the search strategy, identifying relevant outcomes, identifying mediating and moderating factors, and communicating review findings. In this paper we critique the use of logic models in systematic reviews and protocols drawn from two databases representing reviews of health interventions and international development interventions. Programme theory featured in only a minority of the reviews and protocols included. Despite drawing from different disciplinary traditions, reviews and protocols from both sources shared several limitations in their use of logic models and theories of change, which were almost always used solely to depict pictorially the way in which the intervention was expected to work. Logic models and theories of change were consequently rarely used to communicate the findings of the review. Logic models have the potential to be an integral aid throughout the systematic reviewing process. The absence of good practice around their use and development may be one reason for the apparent limited utility of logic models in many existing systematic reviews. These concerns are addressed in the second half of this paper, where we offer a set of principles for the use of logic models and an example of how we constructed a logic model for a review of school-based asthma interventions.

  11. Conceptualising paediatric health disparities: a metanarrative systematic review and unified conceptual framework.

    Science.gov (United States)

    Ridgeway, Jennifer L; Wang, Zhen; Finney Rutten, Lila J; van Ryn, Michelle; Griffin, Joan M; Murad, M Hassan; Asiedu, Gladys B; Egginton, Jason S; Beebe, Timothy J

    2017-08-04

    There exists a paucity of work in the development and testing of theoretical models specific to childhood health disparities even though they have been linked to the prevalence of adult health disparities including high rates of chronic disease. We conducted a systematic review and thematic analysis of existing models of health disparities specific to children to inform development of a unified conceptual framework. We systematically reviewed articles reporting theoretical or explanatory models of disparities on a range of outcomes related to child health. We searched Ovid Medline In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid Embase, Ovid Cochrane Central Register of Controlled Trials, Ovid Cochrane Database of Systematic Reviews, and Scopus (database inception to 9 July 2015). A metanarrative approach guided the analysis process. A total of 48 studies presenting 48 models were included. This systematic review found multiple models but no consensus on one approach. However, we did discover a fair amount of overlap, such that the 48 models reviewed converged into the unified conceptual framework. The majority of models included factors in three domains: individual characteristics and behaviours (88%), healthcare providers and systems (63%), and environment/community (56%). Only 38% of models included factors in the health and public policies domain. A disease-agnostic unified conceptual framework may inform integration of existing knowledge of child health disparities and guide future research. This multilevel framework can focus attention among clinical, basic and social science research on the relationships between policy, social factors, health systems and the physical environment that impact children's health outcomes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  13. Theoretical Models, Assessment Frameworks and Test Construction.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  14. Simulation modeling for stratified breast cancer screening - a systematic review of cost and quality of life assumptions.

    Science.gov (United States)

    Arnold, Matthias

    2017-12-02

    The economic evaluation of stratified breast cancer screening is gaining momentum, but produces very diverse results. Systematic reviews have so far focused on modeling techniques and epidemiologic assumptions; cost and utility parameters have received only little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models were consistent in integrating all relevant phases of care, whether utility parameters were consistent and methodologically sound, and whether costs were comparable and consistent across the actual parameters used for screening, diagnostic work-up and treatment. Of the 18 studies, only three did not show signs of potential bias. This systematic review shows that a closer look at the cost and utility parameters can help to identify potential bias. Future simulation models should focus on integrating all relevant phases of care, using methodologically sound utility parameters and avoiding inconsistent cost parameters.

  15. A systematic hub loads model of a horizontal wind turbine

    International Nuclear Information System (INIS)

    Kazacoks, Romans; Jamieson, Peter

    2014-01-01

    The wind turbine industry has focused offshore on increasing the capacity of a single unit by up-scaling machines. There is, however, a lack of systematic studies on how loads vary with the properties and scaling of a wind turbine. The purpose of this paper is to study how applied blade modifications of a similar kind, affecting mass, stiffness and dimensions, influence blade root moments and lifetime damage equivalent loads (DELs) of the rotor blades, in order to produce fatigue-load trends in blade root moments based on the applied modifications. It was found that the applied blade modifications, which shift the natural frequency of the blade relative to that of the original (reference) model, produce a linear trend in lifetime DELs, since the control system was tuned for the specific frequency of the reference model. The linear trend of lifetime DELs held as long as the natural frequency of the reference model was preserved. For larger modifications of the wind turbine, the controller would need retuning
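
    As background on the metric (standard fatigue practice, not taken from the paper): a damage equivalent load collapses a rainflow-counted load spectrum into a single constant-amplitude load range that causes the same fatigue damage at a reference cycle count, under Miner's rule with a Wöhler exponent m. A minimal sketch with invented numbers:

    ```python
    # Damage equivalent load from rainflow-counted load ranges (Miner's rule):
    # DEL = (sum(n_i * L_i^m) / N_ref)^(1/m). All values below are invented.
    import numpy as np

    ranges = np.array([120.0, 80.0, 45.0, 20.0])   # load ranges L_i (kNm)
    counts = np.array([1e3, 1e4, 1e5, 1e6])        # cycle counts n_i
    m = 10.0                                        # Wöhler exponent (composites ~10-12)
    n_ref = 1e7                                     # reference cycle count

    del_value = (np.sum(counts * ranges ** m) / n_ref) ** (1.0 / m)
    print(f"lifetime DEL ~= {del_value:.1f} kNm at N_ref = {n_ref:.0e} cycles")
    ```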

  16. Yield of community-based tuberculosis targeted testing and treatment in foreign-born populations in the United States: A systematic review.

    Directory of Open Access Journals (Sweden)

    Mohsen Malekinejad

    Full Text Available To synthesize outputs and outcomes of community-based tuberculosis targeted testing and treatment (TTT) programs in foreign-born populations (FBP) in the United States (US). We systematically searched five bibliographic databases and other key resources. Two reviewers independently applied eligibility criteria to screen citations and extracted data from included studies. We excluded studies that contained 90%. We used random-effects meta-analytic models to calculate pooled proportions and 95% confidence intervals (CI) for community-based TTT cascade steps (e.g., recruited, tested and treated), and used them to create two hypothetical cascades for 100 individuals. Fifteen studies conducted in 10 US states met inclusion criteria. Studies were heterogeneous in recruitment strategies and mostly recruited participants born in Latin America. Of 100 hypothetical participants (predominantly FBP) reached by community-based TTT, 40.4 (95% CI 28.6 to 50.1) would have valid test results, 15.7 (95% CI 9.9 to 21.8) would test positive, and 3.6 (95% CI 1.4 to 6.0) would complete LTBI treatment. Likewise, of 100 hypothetical participants (majority FBP) reached, 77.9 (95% CI 54.0 to 92.1) would have valid test results, 26.5 (95% CI 18.0 to 33.5) would test positive, and 5.4 (95% CI 2.1 to 9.0) would complete LTBI treatment. Of those with valid test results, pooled proportions of LTBI test positive for predominantly FBP and majority FBP were 38.9% (95% CI 28.6 to 49.8) and 34.3% (95% CI 29.3 to 39.5), respectively. We observed high attrition throughout the care cascade in FBP participating in LTBI community-based TTT studies. Few studies included cascade steps prior to LTBI diagnosis, limiting our review findings. Moreover, Asia-born populations in the US are substantially underrepresented in the FBP community-based TTT literature.

  17. Towards universal voluntary HIV testing and counselling: a systematic review and meta-analysis of community-based approaches.

    Directory of Open Access Journals (Sweden)

    Amitabh B Suthar

    2013-08-01

    Full Text Available BACKGROUND: Effective national and global HIV responses require a significant expansion of HIV testing and counselling (HTC) to expand access to prevention and care. Facility-based HTC, while essential, is unlikely to meet national and global targets on its own. This article systematically reviews the evidence for community-based HTC. METHODS AND FINDINGS: PubMed was searched on 4 March 2013, clinical trial registries were searched on 3 September 2012, and Embase and the World Health Organization Global Index Medicus were searched on 10 April 2012 for studies including community-based HTC (i.e., HTC outside of health facilities). Randomised controlled trials and observational studies were eligible if they included a community-based testing approach and reported one or more of the following outcomes: uptake, proportion receiving their first HIV test, CD4 value at diagnosis, linkage to care, HIV positivity rate, HTC coverage, HIV incidence, or cost per person tested (outcomes are defined fully in the text). The following community-based HTC approaches were reviewed: (1) door-to-door testing (systematically offering HTC to homes in a catchment area), (2) mobile testing for the general population (offering HTC via a mobile HTC service), (3) index testing (offering HTC to household members of people with HIV and persons who may have been exposed to HIV), (4) mobile testing for men who have sex with men, (5) mobile testing for people who inject drugs, (6) mobile testing for female sex workers, (7) mobile testing for adolescents, (8) self-testing, (9) workplace HTC, (10) church-based HTC, and (11) school-based HTC. The Newcastle-Ottawa Quality Assessment Scale and the Cochrane Collaboration's "risk of bias" tool were used to assess the risk of bias in studies with a comparator arm included in pooled estimates. 117 studies, including 864,651 participants completing HTC, met the inclusion criteria. The percentage of people offered community-based HTC who accepted HTC

  18. Empirical tests of natural selection-based evolutionary accounts of ADHD: a systematic review.

    Science.gov (United States)

    Thagaard, Marthe S; Faraone, Stephen V; Sonuga-Barke, Edmund J; Østergaard, Søren D

    2016-10-01

    ADHD is a prevalent and highly heritable mental disorder associated with significant impairment, morbidity and increased rates of mortality. This combination of high prevalence and high morbidity/mortality seen in ADHD and other mental disorders presents a challenge to natural selection-based models of human evolution. Several hypotheses have been proposed in an attempt to resolve this apparent paradox. The aim of this study was to review the evidence for these hypotheses. We conducted a systematic review of the literature on empirical investigations of natural selection-based evolutionary accounts for ADHD in adherence with the PRISMA guideline. The PubMed, Embase, and PsycINFO databases were screened for relevant publications, by combining search terms covering evolution/selection with search terms covering ADHD. The search identified 790 records. Of these, 15 full-text articles were assessed for eligibility, and three were included in the review. Two of these reported on the evolution of the seven-repeat allele of the ADHD-associated dopamine receptor D4 gene, and one reported on the results of a simulation study of the effect of suggested ADHD-traits on group survival. The authors of the three studies interpreted their findings as favouring the notion that ADHD-traits may have been associated with increased fitness during human evolution. However, we argue that none of the three studies really tap into the core symptoms of ADHD, and that their conclusions therefore lack validity for the disorder. This review indicates that the natural selection-based accounts of ADHD have not been subjected to empirical test and therefore remain hypothetical.

  19. Animal models for testing anti-prion drugs.

    Science.gov (United States)

    Fernández-Borges, Natalia; Elezgarai, Saioa R; Eraña, Hasier; Castilla, Joaquín

    2013-01-01

    Prion diseases belong to a group of fatal infectious diseases with no effective therapies available. Throughout the last 35 years, fewer than 50 different drugs have been tested in different experimental animal models, without promising results. An important limitation when searching for new drugs is the availability of appropriate models of the disease. The three different possible origins of prion diseases require the existence of different animal models for testing anti-prion compounds. Wild type mice, over-expressing transgenic mice and other more sophisticated animal models have been used to evaluate a diversity of compounds, some of which had previously been tested in different in vitro experimental models. The complexity of prion diseases will require more pre-screening studies, reliable sporadic (or spontaneous) animal models and accurate chemical modifications of the selected compounds before an effective therapy against human prion diseases is available. This review is intended to present the most relevant animal models that have been used in the search for new anti-prion therapies and to describe some possible procedures for handling chemical compounds presumed to have anti-prion activity prior to testing them in animal models.

  20. A systematic approach to obtain validated Partial Least Square models for predicting lipoprotein subclasses from serum NMR spectra

    NARCIS (Netherlands)

    Mihaleva, V.V.; van Schalkwijk, D.B.; de Graaf, A.A.; van Duynhoven, J.; van Dorsten, F.A.; Vervoort, J.; Smilde, A.; Westerhuis, J.A.; Jacobs, D.M.

    2014-01-01

    A systematic approach is described for building validated PLS models that predict cholesterol and triglyceride concentrations in lipoprotein subclasses in fasting serum from a normolipidemic, healthy population. The PLS models were built on diffusion-edited 1H NMR spectra and calibrated on

  1. A systematic approach to obtain validated partial least square models for predicting lipoprotein subclasses from serum NMR spectra

    NARCIS (Netherlands)

    Mihaleva, V.V.; Schalkwijk, van D.B.; Graaf, de A.A.; Duynhoven, van J.P.M.; Dorsten, van F.A.; Vervoort, J.J.M.; Smilde, A.K.; Westerhuis, J.A.; Jacobs, D.M.

    2014-01-01

    A systematic approach is described for building validated PLS models that predict cholesterol and triglyceride concentrations in lipoprotein subclasses in fasting serum from a normolipidemic, healthy population. The PLS models were built on diffusion-edited (1)H NMR spectra and calibrated on

  2. A systematic approach to obtain validated partial least square models for predicting lipoprotein subclasses from serum nmr spectra

    NARCIS (Netherlands)

    Mihaleva, V.V.; Schalkwijk, D.B. van; Graaf, A.A. de; Duynhoven, J. van; Dorsten, F.A. van; Vervoort, J.; Smilde, A.; Westerhuis, J.A.; Jacobs, D.M.

    2014-01-01

    A systematic approach is described for building validated PLS models that predict cholesterol and triglyceride concentrations in lipoprotein subclasses in fasting serum from a normolipidemic, healthy population. The PLS models were built on diffusion-edited 1H NMR spectra and calibrated on
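
    The three records above are database variants of the same study. As a generic illustration of the modelling approach they describe (simulated stand-in data; not the authors' actual pipeline or spectra), partial least squares regression maps high-dimensional spectra onto concentrations:

    ```python
    # Minimal PLS regression sketch: predict a concentration from spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_points = 120, 500
    conc = rng.uniform(0.5, 3.0, n_samples)   # e.g., a lipoprotein subclass, mmol/L
    peak = np.exp(-0.5 * ((np.arange(n_points) - 250) / 8.0) ** 2)
    spectra = np.outer(conc, peak) + 0.05 * rng.standard_normal((n_samples, n_points))

    pls = PLSRegression(n_components=5)
    scores = cross_val_score(pls, spectra, conc, cv=5, scoring="r2")
    print(f"cross-validated R^2 = {scores.mean():.2f}")   # validation, in miniature
    ```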

  3. The effectiveness of psychoeducation and systematic desensitization to reduce test anxiety among first-year pharmacy students.

    Science.gov (United States)

    Rajiah, Kingston; Saravanan, Coumaravelou

    2014-11-15

    To analyze the effect of a psychological intervention on reducing performance anxiety and the consequences of the intervention on first-year pharmacy students. In this experimental study, 236 first-year undergraduate pharmacy students from a private university in Malaysia were approached between weeks 5 and 7 of their first semester to participate in the study. Completed responses to the Westside Test Anxiety Scale (WTAS), the Kessler Perceived Distress Scale (PDS), and the Academic Motivation Scale (AMS) were received from 225 students. Of the 225 students, 42 exhibited moderate to high test anxiety according to the WTAS (scores ranging from 30 to 39) and were randomly placed into either an experimental group (n=21) or a waiting list control group (n=21). The prevalence of test anxiety among pharmacy students in this study was lower compared to other university students in previous studies. The present study's anxiety-management intervention of psychoeducation and systematic desensitization for test anxiety reduced lack of motivation and psychological distress and improved grade point average (GPA). The psychological intervention significantly reduced scores for test anxiety, psychological distress, and lack of motivation, and helped improve students' GPA.

  4. An experimental test of the information model for negotiation of biparental care.

    Directory of Open Access Journals (Sweden)

    Jessica Meade

    Full Text Available BACKGROUND: Theoretical modelling of biparental care suggests that it can be a stable strategy if parents partially compensate for changes in behaviour by their partners. In empirical studies, however, parents occasionally match rather than compensate for the actions of their partners. The recently proposed "information model" adds to the earlier theory by factoring in information on brood value and/or need into parental decision-making. This leads to a variety of predicted parental responses following a change in partner work-rate depending on the information available to parents. METHODOLOGY/PRINCIPAL FINDINGS: We experimentally test predictions of the information model using a population of long-tailed tits. We show that parental information on brood need varies systematically through the nestling period and use this variation to predict parental responses to an experimental increase in partner work-rate via playback of extra chick begging calls. When parental information is relatively high, partial compensation is predicted, whereas when parental information is low, a matching response is predicted. CONCLUSIONS/SIGNIFICANCE: We find that although some responses are consistent with predictions, parents match a change in their partner's work-rate more often than expected and we discuss possible explanations for our findings.

  5. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It provides better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  6. Life course socio-economic position and quality of life in adulthood: a systematic review of life course models

    Science.gov (United States)

    2012-01-01

    Background A relationship between current socio-economic position and subjective quality of life has been demonstrated, using wellbeing, life and needs satisfaction approaches. Less is known regarding the influence of different life course socio-economic trajectories on later quality of life. Several conceptual models have been proposed to help explain potential life course effects on health, including accumulation, latent, pathway and social mobility models. This systematic review aimed to assess whether evidence supported an overall relationship between life course socio-economic position and quality of life during adulthood and if so, whether there was support for one or more life course models. Methods A review protocol was developed detailing explicit inclusion and exclusion criteria, search terms, data extraction items and quality appraisal procedures. Literature searches were performed in 12 electronic databases during January 2012 and the references and citations of included articles were checked for additional relevant articles. Narrative synthesis was used to analyze extracted data and studies were categorized based on the life course model analyzed. Results Twelve studies met the eligibility criteria and used data from 10 datasets and five countries. Study quality varied and heterogeneity between studies was high. Seven studies assessed social mobility models, five assessed the latent model, two assessed the pathway model and three tested the accumulation model. Evidence indicated an overall relationship, but mixed results were found for each life course model. Some evidence was found to support the latent model among women, but not men. Social mobility models were supported in some studies, but overall evidence suggested little to no effect. Few studies addressed accumulation and pathway effects and study heterogeneity limited synthesis. Conclusions To improve potential for synthesis in this area, future research should aim to increase study

  7. Tests for the Assessment of Sport-Specific Performance in Olympic Combat Sports: A Systematic Review With Practical Recommendations

    Directory of Open Access Journals (Sweden)

    Helmi Chaabene

    2018-04-01

    The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports such as amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included if they reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) had sample sizes <30 subjects. Nearly one-third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report inclusion/exclusion criteria for their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or non-existent. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43-1.00). Content validity was addressed in all included studies; criterion validity (only its concurrent aspect) was addressed in approximately half of the studies, with correlation coefficients ranging from r = -0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport

  8. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow-angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured decelerations, post-test deformation measurements, and the general structural response of the system.

  9. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of test unique to software that implements scientific models.
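
    The distinction the authors draw between ordinary unit tests and model validation tests can be made concrete with a small pytest-style sketch. The simulation stub and the physiological bounds below are purely illustrative and are not taken from OpenWorm.

    ```python
    # A model validation test checks simulation output against experimental
    # knowledge, not just code correctness. All values here are assumptions.
    def simulate_resting_potential():
        """Stand-in for a neuron simulation; returns membrane potential in mV."""
        return -68.0

    def test_resting_potential_is_physiological():
        v = simulate_resting_potential()
        # Illustrative experimental range; a real test would cite measured data.
        assert -75.0 <= v <= -60.0, f"resting potential {v} mV out of range"
    ```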

  10. [The Offer of Medical-Diagnostic Self-Tests on German Language Websites: Results of a Systematic Internet Search].

    Science.gov (United States)

    Kuecuekbalaban, P; Schmidt, S; Muehlan, H

    2018-03-01

    The aim of the current study was to provide an overview of medical-diagnostic self-tests that can be purchased without a medical prescription on German-language websites. From September 2014 to March 2015, a systematic internet search was conducted with the following search terms: self-test, self-diagnosis, home test, home diagnosis, quick test, rapid test. 513 different self-tests for the diagnostics of 52 diverse diseases or health risks were identified, including chronic diseases (e. g. diabetes, chronic disease of the kidneys, liver, and lungs), sexually transmitted diseases (e. g. HIV, chlamydia, gonorrhea), infectious diseases (e. g. tuberculosis, malaria, Helicobacter pylori), allergies (e. g. house dust, cats, histamine) and cancer, as well as tests for the detection of 12 different psychotropic substances. These were sold by 90 companies based in Germany and by additional foreign companies. The number of medical-diagnostic self-tests that can be bought without a medical prescription on the Internet has increased enormously in the last 10 years. Further studies are needed to identify the determinants of self-test use as well as the impact of their application on the experience and behavior of users. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Measurement of the fine-structure constant as a test of the Standard Model

    Science.gov (United States)

    Parker, Richard H.; Yu, Chenghui; Zhong, Weicheng; Estey, Brian; Müller, Holger

    2018-04-01

    Measurements of the fine-structure constant α require methods from across subfields and are thus powerful tests of the consistency of theory and experiment in physics. Using the recoil frequency of cesium-133 atoms in a matter-wave interferometer, we recorded the most accurate measurement of the fine-structure constant to date: α = 1/137.035999046(27) at 2.0 × 10^-10 accuracy. Using multiphoton interactions (Bragg diffraction and Bloch oscillations), we demonstrate the largest phase (12 million radians) of any Ramsey-Bordé interferometer and control systematic effects at a level of 0.12 part per billion. Comparison with Penning trap measurements of the electron gyromagnetic anomaly g_e - 2 via the Standard Model of particle physics is now limited by the uncertainty in g_e - 2; a 2.5σ tension rejects dark photons as the reason for the unexplained part of the muon's magnetic moment at a 99% confidence level. The result also has implications for dark-sector candidates and electron substructure, which may be signs of physics beyond the Standard Model that warrant further investigation.
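
    For readers who want the arithmetic, the relation used is α² = (2R∞/c)(m_Cs/m_e)(h/m_Cs), so the recoil-derived h/m_Cs fixes α. The sketch below uses rounded CODATA-style constants chosen for illustration, not the paper's exact inputs.

    ```python
    # Worked sketch: fine-structure constant from a recoil measurement.
    # alpha^2 = (2 * R_inf / c) * (m_Cs / m_e) * (h / m_Cs)
    R_inf = 10973731.568160                          # Rydberg constant, 1/m
    c = 299792458.0                                  # speed of light, m/s
    mCs_over_me = 132.905451961 / 5.48579909065e-4   # Cs and electron masses in u
    h_over_mCs = 3.0023694721e-9                     # illustrative recoil value, m^2/s

    alpha = (2 * R_inf / c * mCs_over_me * h_over_mCs) ** 0.5
    print(f"1/alpha = {1 / alpha:.6f}")              # close to 137.036
    ```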

  12. Separating response variability from structural inconsistency to test models of risky decision making

    Directory of Open Access Journals (Sweden)

    Michael H. Birnbaum

    2012-07-01

    Individual true and error theory assumes that responses by the same person to the same choice problem within a block of trials are based on the same true preferences but may show preference reversals due to random error. Between blocks, a person's true preferences may differ or stay the same. This theory is illustrated with studies testing two critical properties that distinguish models of risky decision making: (1) restricted branch independence, which is implied by original prospect theory and violated in a specific way by both cumulative prospect theory and the priority heuristic; and (2) stochastic dominance, which is implied by cumulative prospect theory. Corrected for random error, most individuals systematically violated stochastic dominance, ruling out cumulative prospect theory. Furthermore, most people violated restricted branch independence in the direction opposite to that predicted by that theory and the priority heuristic. Both violations are consistent with the transfer of attention exchange model. No one was found whose data were compatible with cumulative prospect theory, except for those that were also compatible with expected utility, and no one satisfied the priority heuristic.
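
    The true-and-error logic admits a compact numerical illustration: if true preference is fixed within a block, a within-block preference reversal occurs with probability 2e(1-e) for error rate e, so e can be estimated from the observed reversal rate. The counts below are invented.

    ```python
    # Method-of-moments estimate of the error rate in a true-and-error model.
    import math

    n_blocks = 200        # hypothetical number of blocks
    n_reversals = 36      # hypothetical within-block preference reversals

    r = n_reversals / n_blocks          # observed reversal rate
    # Solve 2e(1-e) = r for the smaller root e.
    e = (1 - math.sqrt(1 - 2 * r)) / 2
    print(f"estimated error rate e = {e:.3f}")   # 0.100 for these counts
    ```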

  13. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  14. Temperature Buffer Test. Final THM modelling

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel

    2012-01-01

    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  15. Modeling of novel diagnostic strategies for active tuberculosis - a systematic review: current practices and recommendations.

    Directory of Open Access Journals (Sweden)

    Alice Zwerling

    The field of diagnostics for active tuberculosis (TB) is rapidly developing. TB diagnostic modeling can help to inform policy makers and support complicated decisions on diagnostic strategy, with important budgetary implications. Demand for TB diagnostic modeling is likely to increase, and an evaluation of current practice is important. We aimed to systematically review all studies employing mathematical modeling to evaluate the cost-effectiveness or epidemiological impact of novel diagnostic strategies for active TB. PubMed, personal libraries and reference lists were searched to identify eligible papers. We extracted data on a wide variety of model structures, parameter choices, sensitivity analyses and study conclusions, which were discussed during a meeting of content experts. From 5619 records, a total of 36 papers were included in the analysis. Sixteen papers included population impact/transmission modeling, 5 were health systems models, and 24 included estimates of cost-effectiveness. Transmission and health systems models included specific structure to explore the importance of the diagnostic pathway (n = 4), key determinants of diagnostic delay (n = 5), operational context (n = 5), and the pre-diagnostic infectious period (n = 1). The majority of models implemented sensitivity analysis, although only 18 studies described multi-way sensitivity analysis of more than 2 parameters simultaneously. Among the models used to make cost-effectiveness estimates, the most frequently studied diagnostic assays included Xpert MTB/RIF (n = 7) and alternative nucleic acid amplification tests (NAATs) (n = 4). Most (n = 16) of the cost-effectiveness models compared new assays to an existing baseline and generated an incremental cost-effectiveness ratio (ICER). Although models have addressed a small number of important issues, many decisions regarding implementation of TB diagnostics are being made without the full benefits of insight from mathematical
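
    The ICER mentioned above reduces to a simple ratio of incremental cost to incremental effect. The sketch below uses invented numbers purely to show the form of the calculation.

    ```python
    # Incremental cost-effectiveness ratio (ICER); all figures are invented.
    cost_base, effect_base = 120_000.0, 40.0   # e.g., smear microscopy, DALYs averted
    cost_new, effect_new = 180_000.0, 65.0     # e.g., a new NAAT-based strategy

    icer = (cost_new - cost_base) / (effect_new - effect_base)
    print(f"ICER = {icer:.0f} per DALY averted")   # 2400 per DALY averted
    ```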

  16. Immunochemical faecal occult blood test for colorectal cancer screening: a systematic review.

    Science.gov (United States)

    Syful Azlie, M F; Hassan, M R; Junainah, S; Rugayah, B

    2015-02-01

    A systematic review on the effectiveness and cost-effectiveness of the immunochemical faecal occult blood test (IFOBT) for CRC screening was carried out. A total of 450 relevant titles were identified, 41 abstracts were screened and 18 articles were included in the results. There was a fair level of retrievable evidence to suggest that the sensitivity and specificity of IFOBT vary with the cut-off point of haemoglobin, whereas the diagnostic accuracy performance was influenced by high temperature and haemoglobin stability. A screening programme using IFOBT can be effective for prevention of advanced CRC and reduction of mortality. There was also evidence to suggest that IFOBT is cost-effective in comparison with no screening, whereby a two-day faecal collection method was found to be cost-effective as a means of screening for CRC. Based on the review, the quantitative IFOBT method can be used in Malaysia as a screening test for CRC. The use of a fully automated IFOBT assay would be highly desirable.
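
    The reported dependence of sensitivity and specificity on the haemoglobin cut-off can be illustrated by sweeping a positivity threshold over quantitative readings; one rises as the other falls. The distributions below are simulated, not study data.

    ```python
    # Sensitivity/specificity trade-off across IFOBT cut-offs (simulated data).
    import numpy as np

    rng = np.random.default_rng(1)
    hb_cases = rng.lognormal(mean=5.0, sigma=0.8, size=200)     # ng Hb/mL, cases
    hb_controls = rng.lognormal(mean=3.5, sigma=0.8, size=800)  # controls

    for cutoff in (50, 100, 150, 200):
        sens = (hb_cases >= cutoff).mean()
        spec = (hb_controls < cutoff).mean()
        print(f"cut-off {cutoff:>3}: sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```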

  17. Vicarious Desensitization of Test Anxiety Through Observation of Video-taped Treatment

    Science.gov (United States)

    Mann, Jay

    1972-01-01

    Procedural variations were compared for a vicarious group treatment of test anxiety involving observation of videotapes depicting systematic desensitization of a model. The theoretical implications of the present study and the feasibility of using videotaped materials to treat test anxiety and other avoidance responses in school settings are…

  18. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Brorsen, Michael

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at Dept. of Civil Engineering, Aalborg University (AAU), Denmark. The starting point for the present report is the previously carried out run-up tests described in Lykke Andersen & Frigaard, 2006. The objective of the tests was to investigate the impact pressures generated on a horizontal platform and a cone platform for selected sea states calibrated by Lykke Andersen & Frigaard, 2006. The measurements should be used for assessment of slamming coefficients for the design of horizontal and cone-shaped access platforms on piles. The model tests include mainly regular waves and a few irregular wave tests. These tests were conducted at Aalborg University from 9 November 2006 to 17 November 2006.

  19. Measurement properties and feasibility of clinical tests to assess sit-to-stand/stand-to-sit tasks in subjects with neurological disease: a systematic review

    Directory of Open Access Journals (Sweden)

    Paula F. S. Silva

    2014-04-01

    BACKGROUND: Subjects with neurological disease (ND) usually show impaired performance during sit-to-stand and stand-to-sit tasks, with a consequent reduction in their mobility levels. OBJECTIVE: To determine the measurement properties and feasibility previously investigated for clinical tests that evaluate sit-to-stand and stand-to-sit tasks in subjects with ND. METHOD: A systematic literature review following the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) protocol was performed. Systematic literature searches of databases (MEDLINE/SCIELO/LILACS/PEDro) were performed to identify relevant studies. In all studies, the following inclusion criteria were assessed: investigation of any measurement property or the feasibility of clinical tests that evaluate sit-to-stand and stand-to-sit tasks in subjects with ND, published in any language through December 2012. The COSMIN checklist was used to evaluate the methodological quality of the included studies. RESULTS: Eleven studies were included. The measurement properties/feasibility were most commonly investigated for the five-repetition sit-to-stand test, which showed good test-retest reliability (intraclass correlation coefficient [ICC] = 0.94-0.99) for subjects with stroke, cerebral palsy and dementia. The ICC values were higher for this test than for the number of repetitions in the 30-s test. The five-repetition sit-to-stand test also showed good inter/intra-rater reliability (ICC = 0.97-0.99) for stroke and inter-rater reliability (ICC = 0.99) for subjects with Parkinson disease and incomplete spinal cord injury. For this test, the criterion-related validity for subjects with stroke, cerebral palsy and incomplete spinal cord injury was, in general, moderate (correlation = 0.40-0.77), and the feasibility and safety were good for subjects with Alzheimer's disease. CONCLUSIONS: The five-repetition sit-to-stand test was used more often in subjects with ND, and most of the measurement
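
    The test-retest statistic quoted throughout is the intraclass correlation coefficient, which can be computed from a two-way ANOVA decomposition. The sketch below implements the Shrout-Fleiss ICC(2,1) form on invented sit-to-stand times; individual studies in the review may have used other ICC forms.

    ```python
    # ICC(2,1): two-way random effects, absolute agreement, single measures.
    import numpy as np

    def icc_2_1(ratings):
        """ratings: array of shape (n_subjects, k_sessions)."""
        Y = np.asarray(ratings, dtype=float)
        n, k = Y.shape
        grand = Y.mean()
        ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
        ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # sessions
        sse = ((Y - Y.mean(axis=1, keepdims=True)
                  - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
        ms_err = sse / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Hypothetical five-repetition sit-to-stand times (s): test vs retest.
    times = [[12.1, 12.4], [15.0, 14.6], [9.8, 10.1], [20.3, 19.7], [11.2, 11.5]]
    print(f"ICC(2,1) = {icc_2_1(times):.2f}")
    ```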

  20. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model. ... An application to exchange rate returns is included.

  1. An extended systematic mapping study about the scalability of i* Models

    Directory of Open Access Journals (Sweden)

    Paulo Lima

    2016-12-01

    i* models have been used for requirements specification in many domains, such as healthcare, telecommunication, and air traffic control. Managing the scalability and the complexity of such models is an important challenge in Requirements Engineering (RE). Scalability is also one of the most intractable issues in the design of visual notations in general: a well-known problem with visual representations is that they do not scale well. This issue led us to investigate scalability in i* models and their variants by means of a systematic mapping study. This paper is an extended version of a previous paper on the scalability of i*, now including papers indicated by specialists. Moreover, we also discuss the challenges and open issues regarding the scalability of i* models and their variants. A total of 126 papers were analyzed in order to understand how the RE community perceives scalability, and which proposals have considered this topic. We found that scalability issues are indeed perceived as relevant and that further work is still required, even though many potential solutions have already been proposed. This study can be a starting point for researchers aiming to further advance the treatment of scalability in i* models.

  2. Reply to ''Test of a chromomagnetic model for hadron mass differences''

    International Nuclear Information System (INIS)

    Silvestre-Brac, B.

    1993-01-01

    The shortcomings of the chromomagnetic model, as raised by Lichtenberg and Roncaglia, are analyzed and put into perspective. The chromomagnetic model fails to provide correct binding energies for multiquark systems and even to predict qualitatively the stability of such objects. However, it is simple and physically sound enough to discriminate among the most favorable structures. As such, its use for a systematic study of a whole set of candidates is highly recommended as a first step.

  3. Deformation modeling and the strain transient dip test

    International Nuclear Information System (INIS)

    Jones, W.B.; Rohde, R.W.; Swearengen, J.C.

    1980-01-01

    Recent efforts in material deformation modeling reveal a trend toward unifying creep and plasticity in a single rate-dependent formulation. While such models can describe actual material deformation, most require a number of different experiments to generate model parameter information. Recently, however, a new model has been proposed in which most of the requisite constants may be found by examining creep transients brought about through abrupt changes in creep stress (the strain transient dip test). The critical measurement in this test is the absence of a resolvable creep rate after a stress drop. As a consequence, the result is extraordinarily sensitive to strain resolution as well as machine mechanical response. This paper presents the design of a machine in which these spurious effects have been minimized and discusses the nature of the strain transient dip test using the example of aluminum. It is concluded that the strain transient dip test is not useful as the primary test for verifying any micromechanical model of deformation. Nevertheless, if a model can be developed which is verifiable by other experiments, data from a dip test machine may be used to generate model parameters.

  4. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information from previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases.
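
    A "code emulator" in this sense is a cheap statistical surrogate fitted to a handful of expensive simulation runs. The sketch below uses a Gaussian process from scikit-learn with a stand-in simulator; it illustrates the idea only, not the report's actual methodology.

    ```python
    # Gaussian-process emulator of an "expensive" simulator (stand-in function).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulator(x):               # placeholder for a transport code run
        return np.sin(3 * x) + 0.5 * x

    X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)   # a few design points
    y_train = expensive_simulator(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_train, y_train)

    mean, std = gp.predict(np.array([[0.7], [1.3]]), return_std=True)
    print(mean, std)                          # cheap prediction with uncertainty
    ```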

  5. Methodological quality of systematic reviews on influenza vaccination.

    Science.gov (United States)

    Remschmidt, Cornelius; Wichmann, Ole; Harder, Thomas

    2014-03-26

    There is a growing body of evidence on the risks and benefits of influenza vaccination in various target groups. Systematic reviews are of particular importance for policy decisions. However, their methodological quality can vary considerably. To investigate the methodological quality of systematic reviews on influenza vaccination (efficacy, effectiveness, safety) and to identify influencing factors. A systematic literature search on systematic reviews on influenza vaccination was performed, using MEDLINE, EMBASE and three additional databases (1990-2013). Review characteristics were extracted and the methodological quality of the reviews was evaluated using the assessment of multiple systematic reviews (AMSTAR) tool. U-test, Kruskal-Wallis test, chi-square test, and multivariable linear regression analysis were used to assess the influence of review characteristics on AMSTAR score. Forty-six systematic reviews fulfilled the inclusion criteria. Average methodological quality was high (median AMSTAR score: 8), but variability was large (AMSTAR range: 0-11). Quality did not differ significantly according to vaccination target group. Cochrane reviews had higher methodological quality than non-Cochrane reviews (p=0.001). Detailed analysis showed that this was due to better study selection and data extraction, inclusion of unpublished studies, and better reporting of study characteristics (all p<0.05). In the adjusted analysis, no other factor, including industry sponsorship or journal impact factor, had an influence on AMSTAR score. Systematic reviews on influenza vaccination showed large differences regarding their methodological quality. Reviews conducted by the Cochrane collaboration were of higher quality than others. When using systematic reviews to guide the development of vaccination recommendations, the methodological quality of a review in addition to its content should be considered. Copyright © 2014 Elsevier Ltd. All rights reserved.
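
    The group comparison reported (AMSTAR scores of Cochrane vs non-Cochrane reviews) is the kind of analysis a Mann-Whitney U test handles. The scores below are invented for illustration.

    ```python
    # Mann-Whitney U test on two groups of AMSTAR scores (invented data).
    from scipy.stats import mannwhitneyu

    cochrane = [9, 10, 8, 11, 9, 10]
    non_cochrane = [6, 8, 5, 7, 9, 4, 7]

    stat, p = mannwhitneyu(cochrane, non_cochrane, alternative="two-sided")
    print(f"U = {stat}, p = {p:.4f}")
    ```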

  6. Modelling the pile load test

    OpenAIRE

    Prekop Ľubomír

    2017-01-01

    This paper deals with the modelling of the load test of horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is pressed by the steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were determined from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from exper...

  7. Systematic review of prognostic models in traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Roberts Ian

    2006-11-01

    Background: Traumatic brain injury (TBI) is a leading cause of death and disability world-wide. The ability to accurately predict patient outcome after TBI has an important role in clinical practice and research. Prognostic models are statistical models that combine two or more items of patient data to predict clinical outcome. They may improve predictions in TBI patients. Multiple prognostic models for TBI have accumulated over decades, but none of them is widely used in clinical practice. The objective of this systematic review is to critically assess existing prognostic models for TBI. Methods: Studies that combine at least two variables to predict any outcome in patients with TBI were searched in PUBMED and EMBASE. Two reviewers independently examined titles and abstracts and assessed whether each met the pre-defined inclusion criteria. Results: A total of 53 reports including 102 models were identified. Almost half (47%) were derived from adult patients. Three quarters of the models included less than 500 patients. Most of the models (93%) were from high-income country populations. Logistic regression was the most common analytical strategy used to derive models (47%). In relation to the quality of the derivation models (n = 66), only 15% reported less than 10% loss to follow-up, 68% did not justify the rationale for including the predictors, 11% conducted an external validation, and only 19% of the logistic models presented the results in a clinically user-friendly way. Conclusion: Prognostic models are frequently published, but they are developed from small samples of patients, their methodological quality is poor and they are rarely validated on external populations. Furthermore, they are not clinically practical as they are not presented to physicians in a user-friendly way. Finally, because only a few are developed using populations from low- and middle-income countries, where most trauma occurs, the generalizability to these settings is limited.

  8. Qualitative reasoning for biological network inference from systematic perturbation experiments.

    Science.gov (United States)

    Badaloni, Silvana; Di Camillo, Barbara; Sambo, Francesco

    2012-01-01

    The systematic perturbation of the components of a biological system has been proven among the most informative experimental setups for the identification of causal relations between the components. In this paper, we present Systematic Perturbation-Qualitative Reasoning (SPQR), a novel Qualitative Reasoning approach to automate the interpretation of the results of systematic perturbation experiments. Our method is based on a qualitative abstraction of the experimental data: for each perturbation experiment, measured values of the observed variables are modeled as lower, equal or higher than the measurements in the wild type condition, when no perturbation is applied. The algorithm exploits a set of IF-THEN rules to infer causal relations between the variables, analyzing the patterns of propagation of the perturbation signals through the biological network, and is specifically designed to minimize the rate of false positives among the inferred relations. Tested on both simulated and real perturbation data, SPQR indeed exhibits a significantly higher precision than the state of the art.
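
    The qualitative abstraction can be pictured with a toy rule: code each measurement as lower (-1), equal (0) or higher (+1) than wild type, and propose a causal edge whenever a knockout shifts another variable. This is a deliberately simplified caricature of the SPQR idea, with invented data.

    ```python
    # Toy qualitative inference from knockout experiments (not the SPQR code).
    # perturbed gene -> {observed variable: qualitative change vs wild type}
    data = {
        "A": {"A": -1, "B": -1, "C": 0},
        "B": {"A": 0, "B": -1, "C": -1},
        "C": {"A": 0, "B": 0, "C": -1},
    }

    edges = []
    for knocked, changes in data.items():
        for target, sign in changes.items():
            # Rule: a knockout that changes another variable suggests a
            # (direct or indirect) influence knocked -> target.
            if target != knocked and sign != 0:
                edges.append((knocked, target))
    print(edges)   # [('A', 'B'), ('B', 'C')]
    ```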

  9. Debates—Hypothesis testing in hydrology: Introduction

    Science.gov (United States)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  10. Testing the standard model

    International Nuclear Information System (INIS)

    Gordon, H.; Marciano, W.; Williams, H.H.

    1982-01-01

    We summarize here the results of the standard model group, which has studied the ways in which different facilities may be used to test in detail what we now call the standard model, that is, SU_c(3) × SU(2) × U(1). The topics considered are: W±, Z0 mass and width; sin²θ_W and neutral current couplings; W+W−, Wγ; Higgs; QCD; toponium and naked quarks; glueballs; mixing angles; and heavy ions.

  11. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions, providing p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using
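
    lmerTest itself is an R package; as a rough Python analogue, statsmodels fits the same class of linear mixed models, although it reports Wald z-tests rather than the Satterthwaite-corrected t and F tests that lmerTest provides. The data below are simulated.

    ```python
    # Random-intercept mixed model in statsmodels (analogue of lmer + lmerTest).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(20), 4),
        "treatment": np.tile([0, 1, 0, 1], 20),
    })
    df["y"] = (1.0 + 0.5 * df["treatment"]
               + rng.normal(size=20).repeat(4)       # subject random effect
               + rng.normal(scale=0.3, size=80))     # residual noise

    m = smf.mixedlm("y ~ treatment", df, groups=df["subject"]).fit()
    print(m.summary())   # fixed-effect estimate with its test statistic
    ```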

  12. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex does a model need to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically

  13. Systematics of the level density parameters

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.; Istekov, K.K.; Smirenkin, G.N.

    1977-01-01

    The excitation energy dependence of the nuclear level density is phenomenologically systematized in terms of the Fermi gas model. The analysis has been conducted in the atomic mass number range A ≥ 150, where collective effects are most pronounced. The density parameter a(U) is obtained using data on neutron resonances. Three systematics of a(U) are considered for depicting the energy spectra of nuclear states: (1) the pure Fermi gas model, (2) a description additionally including the contributions of collective rotational and vibrational modes, and (3) one further accounting for pair correlations. It is shown that at excitation energies close to the neutron binding energy all three systematics of a(U) yield practically the same level densities. At higher energies only systematics (2) and (3) remain valid, and at energies lower than the neutron binding energy only the last systematics is adequate.
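
    The Fermi gas expression behind such systematics can be written in a few lines; below is the common back-shifted Fermi gas form, with purely illustrative values for the density parameter a and the spin cut-off σ.

    ```python
    # Back-shifted Fermi gas level density (illustrative parameters).
    import math

    def fermi_gas_level_density(U, a, sigma):
        """Level density (1/MeV) at excitation energy U (MeV):
        rho(U) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**(1/4)*U**(5/4))."""
        return math.exp(2 * math.sqrt(a * U)) / (
            12 * math.sqrt(2) * sigma * a ** 0.25 * U ** 1.25)

    # e.g. a heavy nucleus near the neutron binding energy:
    print(f"rho = {fermi_gas_level_density(U=6.0, a=18.0, sigma=5.0):.3e} 1/MeV")
    ```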

  14. Should we systematically test patients with clinically isolated syndrome for auto-antibodies?

    Science.gov (United States)

    Negrotto, Laura; Tur, Carmen; Tintoré, Mar; Arrambide, Georgina; Sastre-Garriga, Jaume; Río, Jordi; Comabella, Manuel; Nos, Carlos; Galán, Ingrid; Vidal-Jordana, Angela; Simon, Eva; Castilló, Joaquín; Palavra, Filipe; Mitjana, Raquel; Auger, Cristina; Rovira, Àlex; Montalban, Xavier

    2015-12-01

    Several autoimmune diseases (ADs) can mimic multiple sclerosis (MS). For this reason, testing for auto-antibodies (auto-Abs) is often included in the diagnostic work-up of patients with a clinically isolated syndrome (CIS). The purpose was to study how useful it is to systematically determine antinuclear antibodies, anti-SSA and anti-SSB in a non-selected cohort of CIS patients, regarding the identification of other ADs that could represent an alternative diagnosis. From a prospective CIS cohort, we selected 772 patients in whom auto-Ab levels were tested within the first year from CIS. Baseline characteristics of auto-Ab-positive and -negative patients were compared. A retrospective review of clinical records was then performed in the auto-Ab-positive patients to identify those who developed ADs during follow-up. One or more auto-Abs were present in 29.4% of patients. Only 1.8% of patients developed other ADs during a mean follow-up of 6.6 years. In none of these cases was the concurrent AD considered the cause of the CIS. In all cases the diagnosis of the AD resulted from the development of signs and/or symptoms suggestive of each disease. Antinuclear antibodies, anti-SSA and anti-SSB should not be routinely determined in CIS patients but only in those presenting symptoms suggestive of other ADs. © The Author(s), 2015.

  15. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  16. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  17. A systematic review of predictive models for asthma development in children.

    Science.gov (United States)

    Luo, Gang; Nkoy, Flory L; Stone, Bryan L; Schmick, Darell; Johnson, Michael D

    2015-11-28

    Asthma is the most common pediatric chronic disease, affecting 9.6% of American children. Delay in asthma diagnosis is prevalent, resulting in suboptimal asthma management. To help avoid delay in asthma diagnosis and advance asthma prevention research, researchers have proposed various models to predict asthma development in children. This paper reviews these models. A systematic review was conducted through searching in PubMed, EMBASE, CINAHL, Scopus, the Cochrane Library, the ACM Digital Library, IEEE Xplore, and OpenGrey up to June 3, 2015. The literature on predictive models for asthma development in children was retrieved, with search results limited to human subjects and children (birth to 18 years). Two independent reviewers screened the literature, performed data extraction, and assessed article quality. The literature search returned 13,101 references in total. After manual review, 32 of these references were determined to be relevant and are discussed in the paper. We identify several limitations of existing predictive models for asthma development in children, and provide preliminary thoughts on how to address these limitations. Existing predictive models for asthma development in children have inadequate accuracy. Efforts to improve these models' performance are needed, but are limited by a lack of a gold standard for asthma development in children.

  18. Modelling the pile load test

    Directory of Open Access Journals (Sweden)

    Prekop Ľubomír

    2017-01-01

    This paper deals with the modelling of the load test of horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is pressed by the steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were determined from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from experiment.

  19. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the very first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
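
    One simple instance of checking residuals for a remaining correlated signal, in the spirit of (though not identical to) the proposed procedure, is a Ljung-Box white-noise test. The series below are simulated.

    ```python
    # Ljung-Box test: are positioning residuals consistent with white noise?
    import numpy as np
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(3)
    white = rng.normal(size=500)
    with_signal = white + 0.8 * np.sin(np.arange(500) / 15.0)  # multipath-like bias

    for name, series in [("white noise", white), ("with signal", with_signal)]:
        p = acorr_ljungbox(series, lags=[20], return_df=True)["lb_pvalue"].iloc[0]
        print(f"{name}: Ljung-Box p = {p:.3g}")   # small p flags unmodeled signal
    ```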

  20. Combinatorial QSAR modeling of chemical toxicants tested against Tetrahymena pyriformis.

    Science.gov (United States)

    Zhu, Hao; Tropsha, Alexander; Fourches, Denis; Varnek, Alexandre; Papa, Ester; Gramatica, Paola; Oberg, Tomas; Dao, Phuong; Cherkasov, Artem; Tetko, Igor V

    2008-04-01

    Selecting the most rigorous quantitative structure-activity relationship (QSAR) approaches is of great importance in the development of robust and predictive models of chemical toxicity. To address this issue in a systematic way, we have formed an international virtual collaboratory consisting of six independent groups with shared interests in computational chemical toxicology. We have compiled an aqueous toxicity data set containing 983 unique compounds tested in the same laboratory over a decade against Tetrahymena pyriformis. A modeling set including 644 compounds was selected randomly from the original set and distributed to all groups, which used their own QSAR tools for model development. The remaining 339 compounds in the original set (external set I) as well as 110 additional compounds (external set II) published recently by the same laboratory (after this computational study was already in progress) were used as two independent validation sets to assess the external predictive power of individual models. In total, our virtual collaboratory has developed 15 different types of QSAR models of aquatic toxicity for the training set. The internal prediction accuracy for the modeling set ranged from 0.76 to 0.93 as measured by the leave-one-out cross-validation correlation coefficient (Q²_abs). The prediction accuracy for the external validation sets I and II ranged from 0.71 to 0.85 (linear regression coefficient R²_absI) and from 0.38 to 0.83 (linear regression coefficient R²_absII), respectively. The use of an applicability domain threshold implemented in most models generally improved the external prediction accuracy but at the same time led to a decrease in chemical space coverage. Finally, several consensus models were developed by averaging the predicted aquatic toxicity for every compound using all 15 models, with or without taking into account their respective applicability domains. We find that consensus models afford higher prediction accuracy for the
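
    The consensus step described, averaging the predictions of several models while optionally masking compounds outside each model's applicability domain, is straightforward to sketch; the prediction matrix below is invented.

    ```python
    # Consensus QSAR prediction with applicability-domain masking (invented data).
    import numpy as np

    preds = np.array([          # rows: 3 models, columns: 4 test compounds
        [0.52, 1.10, 2.30, 0.80],
        [0.48, 1.25, 2.10, np.nan],   # nan: outside this model's domain
        [0.55, 1.05, np.nan, 0.75],
    ])
    consensus = np.nanmean(preds, axis=0)          # average over available models
    coverage = np.mean(~np.isnan(preds), axis=0)   # fraction of models covering each
    print(consensus, coverage)
    ```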

  1. Diagnostic tests and algorithms used in the investigation of haematuria: systematic reviews and economic evaluation.

    Science.gov (United States)

    Rodgers, M; Nixon, J; Hempel, S; Aho, T; Kelly, J; Neal, D; Duffy, S; Ritchie, G; Kleijnen, J; Westwood, M

    2006-06-01

    To determine the most effective diagnostic strategy for the investigation of microscopic and macroscopic haematuria in adults. Electronic databases were searched from inception to October 2003, with an update in August 2004. A systematic review was undertaken according to published guidelines. Decision analytic modelling was undertaken, based on the findings of the review, expert opinion and additional information from the literature, to assess the relative cost-effectiveness of plausible alternative tests that are part of diagnostic algorithms for haematuria. A total of 118 studies met the inclusion criteria. No studies were identified that evaluated the effectiveness of diagnostic algorithms for haematuria, or the effectiveness of screening for haematuria or investigating its underlying cause. Eighteen of 19 identified studies evaluated dipstick tests, and data from these suggested that dipstick tests are moderately useful in establishing the presence of, but cannot be used to rule out, haematuria. Six studies using haematuria as a test for the presence of a disease indicated that the detection of microhaematuria cannot alone be considered a useful test either to rule in or rule out the presence of a significant underlying pathology (urinary calculi or bladder cancer). Forty-eight of 80 studies addressed methods to localise the source of bleeding (renal or lower urinary tract). The methods and thresholds described in these studies varied greatly, precluding any estimate of a 'best performance' threshold that could be applied across patient groups. However, studies of red blood cell morphology that used a cut-off value of 80% dysmorphic cells for glomerular disease reported consistently high specificities (potentially useful in ruling in a renal cause for haematuria). The reported sensitivities were generally low. Twenty-eight studies included data on the accuracy of laboratory tests (tumour markers, cytology) for the diagnosis of bladder cancer. The majority of tumour marker studies

  2. The Couplex test cases: models and lessons

    International Nuclear Information System (INIS)

    Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.

    2003-01-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  3. To err is human, to correct is public health: a systematic review examining poor quality testing and misdiagnosis of HIV status.

    Science.gov (United States)

    Johnson, Cheryl C; Fonner, Virginia; Sands, Anita; Ford, Nathan; Obermeyer, Carla Mahklouf; Tsui, Sharon; Wong, Vincent; Baggaley, Rachel

    2017-08-29

    In accordance with global testing and treatment targets, many countries are seeking ways to reach the "90-90-90" goals, starting with diagnosing 90% of all people with HIV. Quality HIV testing services are needed to enable people with HIV to be diagnosed and linked to treatment as early as possible. It is essential that opportunities to reach people with undiagnosed HIV are not missed, diagnoses are correct and HIV-negative individuals are not inadvertently initiated on life-long treatment. We conducted this systematic review to assess the magnitude of misdiagnosis and to describe poor HIV testing practices using rapid diagnostic tests. We systematically searched peer-reviewed articles, abstracts and grey literature published from 1 January 1990 to 19 April 2017. Studies were included if they used at least two rapid diagnostic tests and reported on HIV misdiagnosis, factors related to potential misdiagnosis or described quality issues and errors related to HIV testing. Sixty-four studies were included in this review. A small proportion of false positive (median 3.1%, interquartile range (IQR): 0.4-5.2%) and false negative (median: 0.4%, IQR: 0-3.9%) diagnoses were identified. Suboptimal testing strategies were the most common factor in studies reporting misdiagnoses, particularly false positive diagnoses due to using a "tiebreaker" test to resolve discrepant test results. A substantial proportion of false negative diagnoses were related to retesting among people on antiretroviral therapy. Conclusions: HIV testing errors and poor practices, particularly those resulting in false positive or false negative diagnoses, do occur but are preventable. Efforts to accelerate HIV diagnosis and linkage to treatment should be complemented by efforts to improve the quality of HIV testing services and strengthen the quality management systems, particularly the use of validated testing algorithms and strategies, retesting people diagnosed with HIV before initiating treatment and

  4. Coronary Computed Tomography Angiography vs Functional Stress Testing for Patients With Suspected Coronary Artery Disease: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Foy, Andrew J; Dhruva, Sanket S; Peterson, Brandon; Mandrola, John M; Morgan, Daniel J; Redberg, Rita F

    2017-11-01

    Coronary computed tomography angiography (CCTA) is a new approach for the diagnosis of anatomical coronary artery disease (CAD), but it is unclear how CCTA performs compared with the standard approach of functional stress testing. To compare the clinical effectiveness of CCTA with that of functional stress testing for patients with suspected CAD. A systematic literature search was conducted in PubMed and MEDLINE for English-language randomized clinical trials of CCTA published from January 1, 2000, to July 10, 2016. Researchers selected randomized clinical trials that compared a primary strategy of CCTA with that of functional stress testing for patients with suspected CAD and reported data on patient clinical events and changes in therapy. Two reviewers independently extracted data from and assessed the quality of the trials. This analysis followed the PRISMA statement for reporting systematic reviews and meta-analyses and used the Cochrane Collaboration's tool for assessing risk of bias in randomized trials. The Mantel-Haenszel method was used to conduct the primary analysis. Summary relative risks were calculated with a random-effects model. The outcomes of interest were all-cause mortality, cardiac hospitalization, myocardial infarction, invasive coronary angiography, coronary revascularization, new CAD diagnoses, and change in prescription for aspirin and statins. Thirteen trials were included, with 10 315 patients in the CCTA arm and 9777 patients in the functional stress testing arm who were followed up for a mean duration of 18 months. There were no statistically significant differences between CCTA and functional stress testing in death (1.0% vs 1.1%; risk ratio [RR], 0.93; 95% CI, 0.71-1.21) or cardiac hospitalization (2.7% vs 2.7%; RR, 0.98; 95% CI, 0.79-1.21), but CCTA was associated with a reduction in the incidence of myocardial infarction (0.7% vs 1.1%; RR, 0.71; 95% CI, 0.53-0.96). Patients undergoing CCTA were significantly more likely to undergo
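
    The pooled risk ratios quoted come from a random-effects meta-analysis. The sketch below implements DerSimonian-Laird pooling of log risk ratios on invented trial counts to show the shape of the computation (the paper's primary analysis used the Mantel-Haenszel method).

    ```python
    # Random-effects (DerSimonian-Laird) pooling of risk ratios; counts invented.
    import numpy as np

    a, n1 = np.array([10, 6, 8]), np.array([1000, 800, 1200])   # CCTA arm
    c, n2 = np.array([14, 9, 10]), np.array([1000, 790, 1150])  # stress-test arm

    log_rr = np.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2      # variance of each log risk ratio
    w = 1 / var
    q = np.sum(w * (log_rr - np.sum(w * log_rr) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(a) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var + tau2)                    # random-effects weights
    pooled_rr = np.exp(np.sum(w_re * log_rr) / np.sum(w_re))
    print(f"pooled RR = {pooled_rr:.2f}")
    ```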

  5. Factors Models of Scrum Adoption in the Software Development Process: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Marilyn Sihuay

    2018-05-01

    Full Text Available (Background) The adoption of Agile Software Development (ASD), in particular Scrum, has grown significantly since its introduction in 2001. However, in Lima, many ASD implementations have not been suitable (incomplete or inconsistent), thus losing the benefits obtainable by this approach, and the critical success factors in this context are unknown. (Objective) To analyze factor models used in the evaluation of the adoption of ASD, as these factor models can contribute to explaining the success or failure of these adoptions. (Method) In this study we used a systematic literature review. (Result) Ten models have been identified; their similarities and differences are presented. (Conclusion) Each identified model considers different factors; however, some factors are shared by five of these models, such as team member attributes, engaging customers, customer collaboration, experience and work environment.

  6. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
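
    For orientation, the following is the standard LLTM formulation in generic notation (not excerpted from the paper): under the Rasch model the probability that person v solves item i is

      \[
      P(X_{vi}=1 \mid \theta_v) \;=\; \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
      \qquad
      \beta_i \;=\; \sum_{j=1}^{m} q_{ij}\,\eta_j + c,
      \]

    where θ_v is the person ability, the known weights q_ij encode how strongly basic parameter η_j contributes to item i, and c is a normalization constant, so that the item difficulties β_i are linear combinations of a smaller set of basic parameters.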

  7. The Influence of Social and Environmental Labels on Purchasing: An Information and Systematic-heuristic Processing Approach

    Directory of Open Access Journals (Sweden)

    Raquel Redondo Palomo

    2015-07-01

    Full Text Available This paper aims at exploring how social and environmental (SE) labels influence purchasing. By drawing on the information processing and the systematic-heuristic models, this study tests the process followed by consumers when purchasing SE-labeled products. Information was gathered through a structured questionnaire in personal interviews with 400 consumers responsible for household shopping of Fast-Moving Consumer Goods (FMCG), who were randomly approached at shopping malls in four areas of Madrid, Spain. They were asked about recognition, knowledge, credibility, perceived utility and purchases for 12 different labels; the influence of these variables on purchase is modeled and tested by path analysis. This study suggests that systematic-heuristic information processing occurs when consumers buy SE-labeled FMCG products, as the purchase of this type of goods depends on the recognition of a label, knowledge of the issue/issuer, as well as the credibility and the perceived utility of SE labels. Motivation for being informed influences the process, being an antecedent of awareness, comprehension and perceived utility. This model shows a dual processing mode, systematic and heuristic, where the lack of cognitive capacity could explain why these two processing modes co-occur. This paper adds value to the existing literature on SE labels and consumption by applying the information processing model, which has not been used before in the field of responsible consumption, in addition to opening a promising avenue for research by offering theories complementary to the existing ones, based on attitudes.

  8. Systematic Analysis of Quantitative Logic Model Ensembles Predicts Drug Combination Effects on Cell Signaling Networks

    Science.gov (United States)

    2016-08-27

  9. A test of inflated zeros for Poisson regression models.

    Science.gov (United States)

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
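
    The new test statistic itself is not reproduced in this record. As a flavour of score-type tests for excess zeros that, like the authors' approach, avoid fitting the zero-inflated model, the sketch below implements the classical van den Broek (1995) score test for the intercept-only case in Python; it is an illustration of the general idea, not the authors' method.

      import numpy as np
      from scipy.stats import chi2

      def zero_inflation_score_test(y):
          """van den Broek (1995) score test for excess zeros under a
          constant-mean Poisson null. Returns (statistic, p-value)."""
          y = np.asarray(y, float)
          n, lam = len(y), y.mean()          # Poisson MLE of the mean
          p0 = np.exp(-lam)                  # model-implied P(Y = 0)
          n0 = np.sum(y == 0)                # observed number of zeros
          stat = (n0 / p0 - n) ** 2 / (n * (1 - p0) / p0 - n * lam)
          return stat, chi2.sf(stat, df=1)   # asymptotically chi-square(1)

      # Hypothetical data: 40 structural zeros mixed into Poisson(2) counts
      y = np.concatenate([np.zeros(40), np.random.default_rng(1).poisson(2.0, 60)])
      print(zero_inflation_score_test(y))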

  10. Ethical, social, and cultural issues related to clinical genetic testing and counseling in low- and middle-income countries: protocol for a systematic review.

    Science.gov (United States)

    Zhong, Adrina; Darren, Benedict; Dimaras, Helen

    2017-07-11

    There has been little focus in the literature on how to build genetic testing and counseling services in low- and middle-income countries in a responsible, ethical, and culturally appropriate manner. It is unclear to what extent this area is being explored and what form further research should take. The proposed knowledge synthesis aims to fill this gap in knowledge and mine the existing data to determine the breadth of work in this area and identify ethical, social, and cultural issues that have emerged. An integrated knowledge translation approach will be undertaken by engaging knowledge users throughout the review to ensure relevance to their practice. Electronic databases encompassing various disciplines, such as healthcare, social sciences, and public health, will be searched. Studies that address clinical genetic testing and/or counseling and ethical, social, and/or cultural issues of these genetic services, and are performed in low- and middle-income countries as defined by the World Bank, will be considered for inclusion. Two independent reviewers will be involved in a two-stage literature screening process, data extraction, and quality appraisal. Studies included in the review will be analyzed by thematic analysis. A narrative synthesis guided by the social ecological model will be used to summarize findings. This systematic review will provide a foundation of evidence regarding ethical, social, and cultural issues related to clinical genetic testing and counseling in low- and middle-income countries. Using the social ecological model as a conceptual framework will facilitate the understanding of broader influences of the sociocultural context on an individual's experience with clinical genetic testing and counseling, thereby informing interdisciplinary sectors in future recommendations for practice and policy. PROSPERO CRD42016042894.

  11. Conformance test development with the Java modeling language

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders P.

    2017-01-01

    In order to claim conformance with a Java Specification Request, a Java implementation has to pass all tests in an associated Technology Compatibility Kit (TCK). This paper presents a model-based development of a TCK test suite and a test execution tool for the draft Safety-Critical Java (SCJ) profile specification. The Java Modeling Language (JML) is used to model conformance constraints for the profile. JML annotations define contracts for classes and interfaces. The annotations are translated by a tool into runtime assertion checks. Hereby the design and elaboration of the concrete test cases...

  12. Robust, open-source removal of systematics in Kepler data

    Science.gov (United States)

    Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.

    2017-10-01

    We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for Kepler prime mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and the injection of any additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
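
    The 'shrinkage' idea can be conveyed with a toy version: regressing a light curve on co-trending basis vectors under a Gaussian prior on the weights reduces to ridge regression, whose penalty is the ratio of noise to prior variance. The Python sketch below is a minimal illustration of that principle only, not the ARC2 pipeline; the function name, priors and parameter values are all hypothetical.

      import numpy as np

      def detrend_with_cbvs(flux, cbvs, prior_var=1.0, noise_var=1e-4):
          """Remove common-mode trends by regressing a light curve on
          co-trending basis vectors (CBVs) under a Gaussian shrinkage prior.
          The MAP solution is ridge regression; a small prior_var shrinks
          the basis-vector weights towards zero."""
          X = np.asarray(cbvs, float)                  # (n, k) basis vectors
          y = np.asarray(flux, float) - np.mean(flux)  # centred light curve
          lam = noise_var / prior_var                  # ridge penalty
          w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
          return flux - X @ w                          # corrected light curve

      rng = np.random.default_rng(0)
      n = 500
      cbvs = rng.normal(size=(n, 4)).cumsum(axis=0) / 50   # smooth fake trends
      flux = 1.0 + cbvs @ np.array([0.02, -0.01, 0.0, 0.0]) + rng.normal(0, 1e-3, n)
      corrected = detrend_with_cbvs(flux, cbvs, prior_var=0.01, noise_var=1e-6)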

  13. Systematic Assessment Through Mathematical Model For Sustainability Reporting In Malaysia Context

    Science.gov (United States)

    Lanang, Wan Nurul Syahirah Wan; Turan, Faiz Mohd; Johan, Kartina

    2017-08-01

    Sustainability assessment has been studied and is increasingly recognized as a powerful and valuable tool to measure the performance of sustainability in a company or industry. Nowadays, there are many existing tools that users can apply for sustainable development. Various initiatives on tools for sustainable development exist, though most of the tools focus on environmental, economic and social aspects. Using the Green Project Management (GPM) P5 concept, which suggests that firms need to engage not only in the 3P principles of planet, profit and people responsible behaviours, but also to include product and process in their practices, this study introduces a new mathematical model for assessing the level of sustainability practice in a company. Based on multiple case studies, involving in-depth interviews with senior directors, feedback from experts, and previous engineering reports, a systematic approach is taken with the aim of developing the collected feedback into a new mathematical model. The methodology of this research comprises several phases: it starts with the analysis of the parameters and criteria selection according to the Malaysian industry context; the next step is data analysis involving regression; and finally a normalisation process determines whether the research has succeeded. Lastly, this study is expected to provide a clear guideline for any company or organization to assimilate sustainability assessment in its development stage. In future, a better understanding of sustainability assessment is to be attained so that the process approach can be integrated into a systematic approach to sustainability assessment.

  14. Pile Model Tests Using Strain Gauge Technology

    Science.gov (United States)

    Krasiński, Adam; Kusio, Tomasz

    2015-09-01

    Ordinary pile bearing capacity tests are usually carried out to determine the relationship between load and displacement of the pile head. The measurement system required in such tests consists of a force transducer and three or four displacement gauges. The whole system is installed at the pile head above the ground level. This approach, however, does not give us complete information about the pile-soil interaction. We can only determine the total bearing capacity of the pile, without knowledge of its distribution into shaft and base resistances. Much more information can be obtained by testing an instrumented pile equipped with a system for measuring the distribution of axial force along its core. In the case of pile model tests the use of such measurement is difficult due to the small scale of the model. To find a suitable solution for axial force measurement that could be applied to small-scale model piles, we had to take into account the following requirements: a linear and stable relationship between measured and physical values; a force measurement accuracy of about 0.1 kN; a range of measured forces up to 30 kN; resistance of the measuring gauges to the aggressive action of concrete mortar and to moisture; insensitivity to pile bending; and economy. These requirements can be fulfilled by strain gauge sensors if an appropriate methodology is used for test preparation (Hoffmann [1]). In this paper, we focus on some aspects of the application of strain gauge sensors for model pile tests. The efficiency of the method is demonstrated on examples of static load tests carried out on SDP model piles acting as single piles and in a group.

  15. Peak Vertical Ground Reaction Force during Two-Leg Landing: A Systematic Review and Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Wenxin Niu

    2014-01-01

    Full Text Available Objectives. (1) To systematically review peak vertical ground reaction force (PvGRF) during two-leg drop landing from specific drop heights (DH), (2) to construct a mathematical model describing correlations between PvGRF and DH, and (3) to analyze the effects of some factors on the pooled PvGRF regardless of DH. Methods. A computerized bibliographical search was conducted to extract PvGRF data on a single foot when participants landed with both feet from various DHs. An innovative mathematical model was constructed to analyze the effects of gender, landing type, shoes, ankle stabilizers, surface stiffness and sampling frequency on PvGRF based on the pooled data. Results. Pooled PvGRF and DH data from 26 articles showed that a square-root function fits their relationship well. An experimental validation was also done on the regression equation for the medium frequency. The PvGRF was not significantly affected by surface stiffness, but was significantly higher in men than in women, in platform than in suspended landings, in the barefoot than in the shod condition, with ankle stabilizers than in the control condition, and at higher than at lower sampling frequencies. Conclusions. The PvGRF and the square root of DH showed a linear relationship. The mathematical modeling method combined with systematic review is helpful for analyzing influence factors during landing movement without considering DH.
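
    A plausible rendering of the reported square-root relationship, in generic notation (the published coefficients are not reproduced here), is

      \[
      F_{\mathrm{peak}} \;=\; \beta_0 + \beta_1 \sqrt{DH},
      \]

    so that the peak vertical ground reaction force is linear in the square root of drop height, which is the linear PvGRF-root DH relationship stated in the conclusions; β_0 and β_1 are estimated by regression on the pooled data.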

  16. The Latent Class Model as a Measurement Model for Situational Judgment Tests

    Directory of Open Access Journals (Sweden)

    Frank Rijmen

    2011-11-01

    Full Text Available In a situational judgment test, it is often debatable what constitutes a correct answer to a situation. There is currently a multitude of scoring procedures. Establishing a measurement model can guide the selection of a scoring rule. It is argued that the latent class model is a good candidate for a measurement model. Two latent class models are applied to the Managing Emotions subtest of the Mayer-Salovey-Caruso Emotional Intelligence Test: a plain-vanilla latent class model, and a second-order latent class model that takes into account the clustering of several possible reactions within each hypothetical scenario of the situational judgment test. The results for both models indicated that there were three subgroups characterised by the degree to which differentiation occurred between possible reactions in terms of perceived effectiveness. Furthermore, the results for the second-order model indicated a moderate cluster effect.
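
    For reference, the generic latent class measurement model underlying both variants has the form (standard notation, not taken from the paper)

      \[
      P(\mathbf{x}) \;=\; \sum_{c=1}^{C} \pi_c \prod_{i=1}^{I} P(x_i \mid c),
      \]

    where π_c are the class proportions and the item responses x_i are conditionally independent within each of the C classes; the three subgroups reported in the abstract correspond to C = 3, and the second-order variant additionally models the clustering of reactions within each hypothetical scenario.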

  17. A systematic review of current immunological tests for the diagnosis of cattle brucellosis.

    Science.gov (United States)

    Ducrotoy, Marie J; Muñoz, Pilar M; Conde-Álvarez, Raquel; Blasco, José M; Moriyón, Ignacio

    2018-03-01

    Brucellosis is a worldwide extended zoonosis with a heavy economic and public health impact. Cattle, sheep and goats are infected by smooth Brucella abortus and Brucella melitensis, and represent a common source of the human disease. Brucellosis diagnosis in these animals is largely based on detection of a specific immunoresponse. We review here the immunological tests used for the diagnosis of cattle brucellosis. First, we discuss how the diagnostic sensitivity (DSe) and specificity (DSp) balance should be adjusted for brucellosis diagnosis, and the difficulties that brucellosis tests specifically present for the estimation of DSe/DSp in frequentist (gold standard) and Bayesian analyses. Then, we present a systematic review (PubMed, Google Scholar and CABdirect) of works (154 out of 991; years 1960-August 2017) identified (by title and abstract content) as DSe and DSp studies of smooth lipopolysaccharide, O-polysaccharide-core, native hapten and protein diagnostic tests. We summarize data of gold standard studies (n = 23) complying with strict inclusion and exclusion criteria with regard to test methodology and definition of the animals studied (infected and S19 or RB51 vaccinated cattle, and Brucella-free cattle affected or not by false positive serological reactions). We also discuss some studies (smooth lipopolysaccharide tests, protein antibody and delayed-type hypersensitivity [skin] tests) that do not meet the criteria and yet fill some of the gaps in information. We review Bayesian studies (n = 5) and report that in most cases priors and assumptions on conditional dependence/independence are not coherent with the variable serological picture of the disease in different epidemiological scenarios and the bases (antigen, isotype and immunoglobulin properties involved) of brucellosis tests, practical experience and the results of gold standard studies. We conclude that very useful lipopolysaccharide (buffered plate antigen and indirect ELISA) and

  18. [Skilled communication as "intervention": Models for systematic communication in the healthcare system].

    Science.gov (United States)

    Weinert, M; Mayer, H; Zojer, E

    2015-02-01

    Specific communication training is currently not integrated into anesthesiology curricula. At the same time communication is an important key factor when working with colleagues, in the physician-patient relationship, during management of emergencies and in avoiding or reducing the legal consequences of adverse medical events. Therefore, focused attention should be brought to this area. In other high-risk industries, specific communication training has been standard for a long time and in medicine there is an approach to teach and train these soft skills by simulation. Systematic communication training, however, is rarely an established component of specialist training. It is impossible not to communicate, whereby nonverbal signals, such as gestures, facial expressions, posture and tone play an important part. Miscommunication, however, is common and leads to unproductive behavior. The cause of this is not always obvious. This article provides an overview of the communication models of Shannon, Watzlawick et al. and Schulz von Thun et al. and describes their limitations. The "Process Communication Model®" (PCM) is also introduced. An overview is provided with examples of how this tool can be used to look at the communication process from a systematic point of view. People have different psychological needs. Not taking care of these needs will result in individual stress behavior, which can be graded into first, second and third degrees of severity (driver behavior, mask behavior and desperation). These behavior patterns become exposed in predictable sequences. Furthermore, on the basis of this model, successful communication can be established while unproductive behavior that occurs during stress can be dealt with appropriately. Because of the importance of communication in all areas of medical care, opportunities exist to focus research on the influence of targeted communication on patient outcome, complications and management of emergencies.

  19. Modelling the transmission of healthcare associated infections: a systematic review

    Science.gov (United States)

    2013-01-01

    Background Dynamic transmission models are increasingly being used to improve our understanding of the epidemiology of healthcare-associated infections (HCAI). However, there has been no recent comprehensive review of this emerging field. This paper summarises how mathematical models have informed the field of HCAI and how methods have developed over time. Methods MEDLINE, EMBASE, Scopus, CINAHL plus and Global Health databases were systematically searched for dynamic mathematical models of HCAI transmission and/or the dynamics of antimicrobial resistance in healthcare settings. Results In total, 96 papers met the eligibility criteria. The main research themes considered were evaluation of infection control effectiveness (64%), variability in transmission routes (7%), the impact of movement patterns between healthcare institutes (5%), the development of antimicrobial resistance (3%), and strain competitiveness or co-colonisation with different strains (3%). Methicillin-resistant Staphylococcus aureus was the most commonly modelled HCAI (34%), followed by vancomycin-resistant enterococci (16%). Other common HCAIs, e.g. Clostridium difficile, were rarely investigated (3%). Very few models have been published on HCAI from low- or middle-income countries. The first HCAI models looked at antimicrobial resistance in hospital settings using compartmental deterministic approaches. Stochastic models (which include the role of chance in the transmission process) are becoming increasingly common. Model calibration (inference of unknown parameters by fitting models to data) and sensitivity analysis are comparatively uncommon, occurring in 35% and 36% of studies respectively, but their application is increasing. Only 5% of models compared their predictions to external data. Conclusions Transmission models have been used to understand complex systems and to predict the impact of control policies. Methods have generally improved, with an increased use of stochastic models, and
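
    The compartmental deterministic approach mentioned above can be sketched in a few lines. The following is a minimal single-ward colonization model, illustrative of the general type reviewed rather than of any specific published model; all parameter values are hypothetical.

      import numpy as np
      from scipy.integrate import odeint

      def ward_model(C, t, beta, gamma, mu, phi, N):
          """Deterministic single-ward HCAI transmission model.
          C: colonized patients among N beds (U = N - C uncolonized);
          beta: transmission rate, gamma: decolonization rate,
          mu: patient turnover rate, phi: fraction colonized on admission."""
          U = N - C
          return beta * U * C / N + mu * N * phi - (mu + gamma) * C

      N = 30.0                                  # beds in the ward
      t = np.linspace(0, 365, 366)              # one year, daily steps
      C = odeint(ward_model, y0=1.0, t=t,
                 args=(0.12, 0.02, 1 / 7, 0.05, N)).ravel()
      print(f"endemic colonized beds ~ {C[-1]:.1f} of {N:.0f}")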

  20. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU). The objective of the tests was: To investigate the combined influence of the pile diameter to water depth ratio and the wave height to water depth ratio on wave run-up of piles. The measurements should be used to design access platforms on piles. The model tests include: Calibration of regular and irregular sea states at the location of the pile (without structure in place). Measurement of wave run-up for the calibrated sea states on the front side of the pile (0 to 90 degrees). These tests have been conducted at Aalborg University from 9 October 2006 to 8 November 2006. Unless otherwise mentioned, all values given in this report are in model scale.

  1. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essentially important to properly evaluate its restoring force characteristics under dynamic loading condition and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain the basic information on the dynamic restoring force characteristics and damping performance of shear walls, the dynamic test using a large shaking table, static displacement control test and the pseudo-dynamic test on the models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)

  2. Veterans' informal caregivers in the "sandwich generation": a systematic review toward a resilience model.

    Science.gov (United States)

    Smith-Osborne, Alexa; Felderhoff, Brandi

    2014-01-01

    Social work theory advanced the formulation of the construct of the sandwich generation to apply to the emerging generational cohort of caregivers, most often middle-aged women, who were caring for maturing children and aging parents simultaneously. This systematic review extends that focus by synthesizing the literature on sandwich generation caregivers for the general aging population with dementia and for veterans with dementia and polytrauma. It develops potential protective mechanisms based on empirical literature to support an intervention resilience model for social work practitioners. This theoretical model addresses adaptive coping of sandwich-generation families facing ongoing challenges related to caregiving demands.

  3. THE MISHKIN TEST: AN ANALYSIS OF MODEL EXTENSIONS

    Directory of Open Access Journals (Sweden)

    Diana MURESAN

    2015-04-01

    Full Text Available This paper reviews empirical research that applies the Mishkin test to examine the existence of the accruals anomaly using alternative approaches. The Mishkin test, used in macro-econometrics to test the rational expectations hypothesis, tests for market efficiency. Starting with Sloan (1996), the model has been applied in the accruals anomaly literature. Since Sloan (1996), the model has seen various improvements and has been the subject of many debates in the literature regarding its efficacy. Nevertheless, the current evidence strengthens the pervasiveness of the model. The analysis of studies extending the Mishkin test highlights that adding additional variables enhances the results, providing insightful information about the occurrence of the accruals anomaly.
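
    As commonly formulated following Sloan (1996), the Mishkin test jointly estimates a forecasting equation and a pricing equation (generic notation, not excerpted from the paper):

      \[
      \begin{aligned}
      EARN_{t+1} &= \alpha_0 + \alpha_1\,ACC_t + \alpha_2\,CF_t + \varepsilon_{t+1},\\
      AR_{t+1}   &= \beta\left(EARN_{t+1} - \alpha_0^{*} - \alpha_1^{*}ACC_t - \alpha_2^{*}CF_t\right) + \nu_{t+1},
      \end{aligned}
      \]

    where EARN is earnings, ACC accruals, CF cash flows and AR abnormal returns. Market efficiency implies that the pricing coefficients equal the forecasting coefficients (α_1* = α_1, α_2* = α_2); the accruals anomaly shows up as the market overweighting accruals (α_1* > α_1), assessed with a likelihood ratio comparison of the constrained and unconstrained systems.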

  4. Correlation of the New York Heart Association classification and the cardiopulmonary exercise test: A systematic review.

    Science.gov (United States)

    Lim, Fang Yi; Yap, Jonathan; Gao, Fei; Teo, Ling Li; Lam, Carolyn S P; Yeo, Khung Keong

    2018-07-15

    The New York Heart Association (NYHA) classification is frequently used in the management of heart failure but may be limited by patient and physician subjectivity. Cardiopulmonary exercise testing (CPET) provides a potentially more objective measurement of functional status. We aim to study the correlation between NYHA classification and peak oxygen consumption (pVO2) on Cardiopulmonary Exercise Testing (CPET) within and across published studies. A systematic literature review of all studies reporting both NYHA class and CPET data was performed, and pVO2 from CPET was correlated to reported NYHA class within and across eligible studies. 38 studies involving 2645 patients were eligible. Heterogeneity was assessed by the Q statistic, which is a χ2 test and marker of systematic differences between studies. Within each NYHA class, significant heterogeneity in pVO2 was seen across studies: NYHA I (n = 17, Q = 486.7, p < 0.0001), II (n = 24, Q = 381.0, p < 0.0001), III (n = 32, Q = 761.3, p < 0.0001) and IV (n = 5, Q = 12.8, p = 0.012). Significant differences in mean pVO2 were observed between NYHA I and II (23.8 vs 17.6 mL/(kg·min), p < 0.0001) and II and III (17.6 vs 13.3 mL/(kg·min), p < 0.0001), but not between NYHA III and IV (13.3 vs 12.5 mL/(kg·min), p = 0.45). These differences remained significant after adjusting for age, gender, ejection fraction and region of study. There was a general inverse correlation between NYHA class and pVO2. However, significant heterogeneity in pVO2 exists across studies within each NYHA class. While the NYHA classification holds clinical value in heart failure management, direct comparison across studies may have its limitations. Copyright © 2018 Elsevier B.V. All rights reserved.
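
    The Q statistic used here is Cochran's Q, which weights each study's squared deviation from the inverse-variance pooled mean by the precision of that study. A minimal Python sketch with hypothetical study-level inputs:

      import numpy as np
      from scipy.stats import chi2

      def cochran_q(means, ses):
          """Cochran's Q test for heterogeneity of study-level means.
          means: per-study mean pVO2; ses: their standard errors."""
          y = np.asarray(means, float)
          w = 1 / np.asarray(ses, float) ** 2      # inverse-variance weights
          y_bar = np.sum(w * y) / np.sum(w)        # pooled mean
          q = np.sum(w * (y - y_bar) ** 2)
          return q, chi2.sf(q, df=len(y) - 1)      # small p => heterogeneity

      # Hypothetical NYHA class II studies: mean pVO2 and SE per study
      print(cochran_q([18.2, 16.9, 17.5, 19.1], [0.4, 0.5, 0.3, 0.6]))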

  5. Quantitative, steady-state properties of Catania's computational model of the operant reserve.

    Science.gov (United States)

    Berg, John P; McDowell, J J

    2011-05-01

    Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Model Testing - Bringing the Ocean into the Laboratory

    DEFF Research Database (Denmark)

    Aage, Christian

    2000-01-01

    Hydrodynamic model testing, the principle of bringing the ocean into the laboratory to study the behaviour of the ocean itself and the response of man-made structures in the ocean in reduced scale, has been known for centuries. Due to an insufficient understanding of the physics involved, however, the early model tests often gave incomplete or directly misleading results. This keynote lecture deals with some of the possibilities and problems within the field of hydrodynamic and hydraulic model testing.

  7. Transition between process models (BPMN) and service models (WS-BPEL and other standards): A systematic review

    Directory of Open Access Journals (Sweden)

    Marko Jurišić

    2011-12-01

    Full Text Available BPMN and BPEL have become de facto standards for the modeling of business processes and the implementation of business processes via Web services. There is a quintessential problem of discrepancy between these two approaches as they are applied in different phases of the lifecycle and their fundamental concepts are different: BPMN is a graph-based language while BPEL is basically a block-based programming language. This paper shows basic concepts and gives an overview of research and ideas which emerged during the last two years, presents the state of the art and possible future research directions. A systematic literature review was performed and a critical review was given regarding the potential of the given solutions.

  8. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
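
    The design-based idea (recompute the test statistic under many re-draws of the randomization procedure actually used) can be sketched briefly. The Python example below uses residuals from a null regression and complete randomization; it is a generic illustration of the Monte Carlo approach, not the specific computations of Gail, Tan, and Piantadosi (1988) or Plamadeala and Rosenberger (2012).

      import numpy as np

      def mc_randomization_test(y, x, treat, n_rand=10000, seed=0):
          """Monte Carlo randomization test for a treatment effect based on
          residuals from a null regression of y on covariate x. The draw()
          function implements complete randomization (n/2 per arm); permuted
          blocks or biased coin designs would replace it."""
          rng = np.random.default_rng(seed)
          y, x, treat = (np.asarray(a, float) for a in (y, x, treat))
          X = np.column_stack([np.ones_like(x), x])      # null model: y ~ x
          resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
          stat = lambda t: abs(resid[t == 1].mean() - resid[t == 0].mean())
          def draw():
              t = np.zeros(len(y))
              t[rng.choice(len(y), len(y) // 2, replace=False)] = 1
              return t
          null = np.array([stat(draw()) for _ in range(n_rand)])
          return (np.sum(null >= stat(treat)) + 1) / (n_rand + 1)  # MC p-value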

  9. Systematic analysis of fly models with multiple drivers reveals different effects of ataxin-1 and huntingtin in neuron subtype-specific expression.

    Directory of Open Access Journals (Sweden)

    Risa Shiraishi

    Full Text Available The fruit fly, Drosophila melanogaster, is a commonly used model organism for neurodegenerative diseases. Its major advantages include a short lifespan and its susceptibility to manipulation using sophisticated genetic techniques. Here, we report the systematic comparison of fly models of two polyglutamine (polyQ) diseases. We induced expression of the normal and mutant forms of full-length Ataxin-1 and Huntingtin exon 1 in cholinergic, dopaminergic, and motor neurons, and glial cells using cell type-specific drivers. We systematically analyzed their effects based on multiple phenotypes: eclosion rate, lifespan, motor performance, and circadian rhythms of spontaneous activity. This systematic assay system enabled us to quantitatively evaluate and compare the functional disabilities of different genotypes. The results suggest different effects of Ataxin-1 and Huntingtin on specific types of neural cells during development and in adulthood. In addition, we confirmed the therapeutic effects of LiCl and butyrate using representative models. These results support the usefulness of this assay system for screening candidate chemical compounds that modify the pathologies of polyQ diseases.

  10. The Model Identification Test: A Limited Verbal Science Test

    Science.gov (United States)

    McIntyre, P. J.

    1972-01-01

    Describes the production of a test with a low verbal load for use with elementary school science students. Animated films were used to present appropriate and inappropriate models of the behavior of particles of matter. (AL)

  11. Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results

    Science.gov (United States)

    Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul

    1992-01-01

    The primary objective of structural analysis of aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, evaluate structural modifications, or design control systems. Verification of the FEM is generally obtained as the result of correlating test and FEM models. A test analysis model (TAM) is very useful in the correlation process. A TAM is essentially a FEM reduced to the size of the test model, which attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creation of the TAM and the correlation of the reduced models with the FEM or the test results. Herein, three methods are discussed, namely Guyan, Improved Reduced System (IRS), and Hybrid. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, application of the TAM process is demonstrated with an experimental test configuration of a ten bay cantilevered truss structure.
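
    Of the three methods named, Guyan (static) reduction is the simplest: slave degrees of freedom are condensed out through the stiffness matrix, leaving only the master (test-instrumented) DOFs. A minimal numpy sketch follows; real TAMs involve far larger models and careful master-DOF selection.

      import numpy as np

      def guyan_reduction(K, M, master):
          """Static (Guyan) reduction of stiffness K and mass M to the
          master DOFs; the remaining (slave) DOFs are condensed out."""
          n = K.shape[0]
          slave = np.setdiff1d(np.arange(n), master)
          Kss = K[np.ix_(slave, slave)]
          Ksm = K[np.ix_(slave, master)]
          # Transformation x = T x_m, with static condensation of slaves
          T = np.zeros((n, len(master)))
          T[master, np.arange(len(master))] = 1.0
          T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
          return T.T @ K @ T, T.T @ M @ T      # reduced stiffness and mass

      # Tiny illustrative 3-DOF spring chain, keeping DOFs 0 and 2 as masters
      K = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
      M = np.eye(3)
      Kr, Mr = guyan_reduction(K, M, master=np.array([0, 2]))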

  12. Do negative screening test results cause false reassurance? A systematic review.

    Science.gov (United States)

    Cooper, Grace C; Harvie, Michelle N; French, David P

    2017-11-01

    It has been suggested that receiving a negative screening test result may cause false reassurance or have a 'certificate of health effect'. False reassurance in those receiving a negative screening test result may result in them wrongly believing themselves to be at lower risk of the disease, and consequently less likely to engage in health-related behaviours that would lower their risk. The present systematic review aimed to identify the evidence regarding false reassurance effects due to negative screening test results in adults (over 18 years) screened for the presence of a disease or its precursors, where disease or precursors are linked to lifestyle behaviours. MEDLINE and PsycINFO were searched for trials that compared a group who had received negative screening results to an unscreened control group. The following outcomes were considered as markers of false reassurance: perceived risk of disease; anxiety and worry about disease; health-related behaviours or intention to change health-related behaviours (i.e., smoking, diet, physical activity, and alcohol consumption); self-rated health status. Nine unique studies were identified, reporting 55 measures in relation to the outcomes considered. Outcomes were measured at various time points from immediately following screening to up to 11 years after screening. Despite considerable variation in outcome measures used and timing of measurements, effect sizes for comparisons between participants who received negative screening test results and control participants were typically small with few statistically significant differences. There was evidence of high risk of bias, and measures of behaviours employed were often not valid. The limited evidence base provided little evidence of false reassurance following a negative screening test results on any of four outcomes examined. False reassurance should not be considered a significant harm of screening, but further research is warranted. Statement of contribution

  13. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was first of all to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly

  14. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  15. Systematic model for lean product development implementation in an automotive related company

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-07-01

    Full Text Available Lean product development is a major innovative business strategy that employs sets of practices to achieve an efficient, innovative and sustainable product development. Despite the many benefits and high hopes in the lean strategy, many companies are still struggling, and unable to either achieve or sustain substantial positive results with their lean implementation efforts. As the first step towards addressing this issue, this paper seeks to propose a systematic model that considers the administrative and implementation limitations of lean thinking practices in the product development process. The model, which is based on the integration of fuzzy Shannon's entropy and the Modified Technique for Order Preference by Similarity to the Ideal Solution (M-TOPSIS) for implementing lean product development practices with respect to different criteria, including management and leadership, financial capabilities, skills and expertise, and organizational culture, provides a guide or roadmap for product development managers on the lean implementation route.
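
    The ranking step of a TOPSIS-type method can be conveyed with the standard (unmodified) algorithm; the Python sketch below is not the authors' M-TOPSIS variant, and the criteria scores and weights are hypothetical.

      import numpy as np

      def topsis(X, w, benefit):
          """Standard TOPSIS ranking. X: alternatives x criteria decision
          matrix; w: criteria weights; benefit: True where larger is better."""
          X, w = np.asarray(X, float), np.asarray(w, float)
          R = X / np.linalg.norm(X, axis=0)        # vector normalization
          V = R * w                                # weighted normalized matrix
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_pos = np.linalg.norm(V - ideal, axis=1)
          d_neg = np.linalg.norm(V - anti, axis=1)
          return d_neg / (d_pos + d_neg)           # closeness; rank descending

      # Hypothetical scores of 3 practices on 4 criteria (management,
      # finance, skills, culture), all benefit criteria
      print(topsis([[7, 5, 8, 6], [6, 8, 7, 7], [8, 6, 6, 8]],
                   w=[0.3, 0.2, 0.25, 0.25], benefit=np.array([True] * 4)))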

  16. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
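
    In its generic form (Hausman, 1978), the test contrasts an estimator that is efficient when the model is correctly specified with one that remains consistent under misspecification:

      \[
      H \;=\; (\hat{\theta}_c - \hat{\theta}_e)^{\top}\,\bigl(\hat{V}_c - \hat{V}_e\bigr)^{-1}(\hat{\theta}_c - \hat{\theta}_e)
      \;\overset{a}{\sim}\; \chi^2_k,
      \]

    where θ̂_e is the efficient estimator of the k item parameters with covariance estimate V̂_e, θ̂_c the robust (consistent) estimator with covariance estimate V̂_c, and a large H signals misspecification.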

  17. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.; NASA/Fermilab Astrophysics Center, Batavia, IL)

    1987-01-01

    Theoretical prejudice and inflationary models of the very early universe strongly favor the flat, Einstein-de Sitter model of the universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the universe which possess a smooth component of energy density. The kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings, is studied in detail. The observational tests which can be used to discriminate between these models are also discussed. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations. 58 references

  18. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-deSitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations

  19. Kinematic tests of exotic flat cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-deSitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations.

  20. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  1. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  2. A systematic literature review of open source software quality assessment models.

    Science.gov (United States)

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models are proposed and available in the literature. However, there is little or no adoption of these models in practice. In order to guide the formulation of newer models so they can be accepted by practitioners, there is a need for clear discrimination among the existing models based on their specific properties. Based on this, the aim of this study is to perform a systematic literature review to investigate the properties of the existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed to retrieve all relevant primary studies in this regard. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected. To select these models we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study reflects that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that nearly half (47%) of the existing models do not specify any domain of application. In conclusion, our study will be a valuable contribution to the community and helps quality assessment model developers in formulating newer models, and also practitioners (software evaluators) in selecting suitable OSS among alternatives.

  3. The Technical Quality of Test Items Generated Using a Systematic Approach to Item Writing.

    Science.gov (United States)

    Siskind, Theresa G.; Anderson, Lorin W.

    The study was designed to examine the similarity of response options generated by different item writers using a systematic approach to item writing. The similarity of response options to student responses for the same item stems presented in an open-ended format was also examined. A non-systematic (subject matter expertise) approach and a…

  4. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape (GAMLSS). Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.

  5. PREDICTIONS OF DISPERSION AND DEPOSITION OF FALLOUT FROM NUCLEAR TESTING USING THE NOAA-HYSPLIT METEOROLOGICAL MODEL

    Science.gov (United States)

    Moroz, Brian E.; Beck, Harold L.; Bouville, André; Simon, Steven L.

    2013-01-01

    The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of

  6. Predictions of dispersion and deposition of fallout from nuclear testing using the NOAA-HYSPLIT meteorological model.

    Science.gov (United States)

    Moroz, Brian E; Beck, Harold L; Bouville, André; Simon, Steven L

    2010-08-01

    The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that, when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of

  7. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper a method for testing rate-control algorithms by the use of a video source model is suggested. The proposed method allows algorithm testing to be significantly improved over a large test set.

  8. The Efficacy of Trastuzumab in Animal Models of Breast Cancer: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Jiarong Chen

    Full Text Available Breast cancer is one of the most frequent cancers and the second leading cause of cancer death among women. Trastuzumab, the first monoclonal antibody directed against the human epidermal growth factor receptor 2 (HER2), is an effective treatment. To inform the development of other effective treatments we report summary estimates of the efficacy of trastuzumab on survival and tumour volume in animal models of breast cancer. We searched PubMed and EMBASE systematically to identify publications testing trastuzumab in animal models of breast cancer. Data describing tumour volume, median survival and animal features were extracted and we assessed quality using a 12-item checklist. We analysed the impact of study design and quality and the evidence for publication bias. We included data from 83 studies reporting 169 experiments using 2076 mice. Trastuzumab treatment caused a substantial reduction in tumour growth, with tumours in treated animals growing to 32.6% of the volume of tumours in control animals (95% CI 27.8%-38.2%). Median survival was prolonged by a factor of 1.45 (1.30-1.62). Many study design and quality features accounted for between-study heterogeneity and we found evidence suggesting publication bias. We have found trastuzumab to be effective in animal breast cancer models across a range of experimental circumstances. However, the presence of publication bias and a low prevalence of measures to reduce bias provide a focus for future improvements in preclinical breast cancer research.

  9. Evaluation models and criteria of the quality of hospital websites: a systematic review study

    OpenAIRE

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-01-01

    Introduction Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. Methods This research was a systematic review study. The international databases such as Science Direct, Google Scholar, PubMed, Proquest, Ovid, Elsevier, Springer, and EBSCO together with regiona...

  10. Implementing learning organization components in Ardabil Regional Water Company based on Marquardt systematic model

    OpenAIRE

    Shahram Mirzaie Daryani; Azadeh Zirak

    2015-01-01

    The main purpose of this study was to survey the implementation of learning organization characteristics based on the Marquardt systematic model in Ardabil Regional Water Company. Two hundred and four staff (164 employees and 40 authorities) participated in the study. For data collection, the Marquardt questionnaire, whose validity and reliability had been confirmed, was used. The results of the data analysis showed that learning organization characteristics were used more than average level in som...

  11. Superconducting solenoid model magnet test results

    International Nuclear Information System (INIS)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab

    2006-01-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests

  12. Superconducting solenoid model magnet test results

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab

    2006-08-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.

  13. Testing the compounding structure of the CP-INARCH model

    OpenAIRE

    Weiß, Christian H.; Gonçalves, Esmeralda; Lopes, Nazaré Mendes

    2017-01-01

    A statistical test to distinguish between a Poisson INARCH model and a Compound Poisson INARCH model is proposed, based on the form of the probability generating function of the compounding distribution of the conditional law of the model. For first-order autoregression, the normality of the test statistics’ asymptotic distribution is established, either in the case where the model parameters are specified, or when such parameters are consistently estimated. As the test statistics’ law involv...

  14. Is the standard model really tested?

    International Nuclear Information System (INIS)

    Takasugi, E.

    1989-01-01

    It is discussed how the standard model is really tested. Among various tests, I concentrate on the CP violation phenomena in the K and B meson systems. In particular, the recent hope of overcoming the theoretical uncertainty in the evaluation of CP violation in the K meson system is discussed. (author)

  15. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  16. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
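    The two procedures compared in these records can be sketched in a few lines. The toy model below (linear response to two systematic parameters, with invented sigmas, slopes, and noise level) illustrates the unisim and multisim recipes; it is not the author's code.

```python
import numpy as np

# Sketch of the unisim and multisim recipes described above, for a toy
# observable responding linearly to two systematic parameters. The sigmas,
# slopes, and noise level are invented for illustration.

rng = np.random.default_rng(0)
sigmas = np.array([1.0, 0.5])    # 1-sigma sizes of the two systematics
slopes = np.array([2.0, -1.0])   # linear response of the observable

def observable(params, n_mc=10_000):
    """Toy MC run: mean of a noisy observable shifted by the systematics."""
    return rng.normal(slopes @ params, 5.0, size=n_mc).mean()

nominal = observable(np.zeros(2))

# Unisim: vary one parameter at a time by one standard deviation.
unisim_var = sum((observable(sigmas * np.eye(2)[i]) - nominal) ** 2
                 for i in range(2))

# Multisim: vary all parameters at once, each drawn from its distribution.
shifts = [observable(rng.normal(0.0, sigmas)) - nominal for _ in range(200)]
multisim_var = float(np.var(shifts))

print(f"unisim variance  : {unisim_var:.2f}")
print(f"multisim variance: {multisim_var:.2f} "
      f"(true value: {np.sum((slopes * sigmas) ** 2):.2f})")
```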

  17. A systematic composite service design modeling method using graph-based theory.

    Science.gov (United States)

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards the design quality measurement such as using the ComSDM method to measure the quality of composite service design in service-oriented software system.

  18. Few promising multivariable prognostic models exist for recovery of people with non-specific neck pain in musculoskeletal primary care: A systematic review

    NARCIS (Netherlands)

    R.W. Wingbermühle (Roel); E. van Trijffel (Emiel); Nelissen, P.M. (Paul M.); B.W. Koes (Bart); A.P. Verhagen (Arianne)

    2017-01-01

    Question: Which multivariable prognostic model(s) for recovery in people with neck pain can be used in primary care? Design: Systematic review of studies evaluating multivariable prognostic models. Participants: People with non-specific neck pain presenting at primary care.

  19. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  20. A systematic review protocol investigating tests for physical or physiological qualities and game-specific skills commonly used in rugby and related sports and their psychometric properties.

    Science.gov (United States)

    Chiwaridzo, Matthew; Ferguson, Gillian D; Smits-Engelsman, Bouwien C M

    2016-07-27

    Scientific focus on rugby has increased over the recent years, providing evidence of the physical or physiological characteristics and game-specific skills needed in the sport. Identification of tests commonly used to measure these characteristics is important for the development of test batteries, which in turn may be used for talent identification and injury prevention programmes. Although there are a number of tests available in the literature to measure physical or physiological variables and game-specific skills, there is limited information available on the psychometric properties of the tests. Therefore, the purpose of this study is to systematically review the literature for tests commonly used in rugby to measure physical or physiological characteristics and rugby-specific skills, documenting evidence of reliability and validity of the identified tests. A systematic review will be conducted. Electronic databases such as Scopus, MEDLINE via EBSCOhost and PubMed, Academic Search Premier, CINAHL and Africa-Wide Information via EBSCOhost will be searched for original research articles published in English from January 1, 1995, to December 31, 2015, using a pre-defined search strategy. The principal investigator will select potentially relevant articles from titles and abstracts. To minimise bias, full text of titles and abstracts deemed potentially relevant will be retrieved and reviewed by two independent reviewers based on the inclusion criteria. Data extraction will be conducted by the principal investigator and verified by two independent reviewers. The Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) checklist will be used to assess the methodological quality of the selected studies. Choosing an appropriate test to be included in the screening test battery should be based on sound psychometric properties of the test available. This systematic review will provide an overview of the tests commonly used in rugby union

  1. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.

  2. A tutorial on testing the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias; Minakata, Katsumi

    2016-01-01

    In race models, the two components of a redundant signal are processed in parallel and the faster process determines the response; this leads, on average, to faster responses to redundant signals. In contrast, coactivation models assume integrated processing of the combined stimuli. To distinguish between these two accounts, Miller (1982) derived the well-known race model inequality, which has become a routine test for behavioral data in experiments with redundant signals. In this tutorial, we review the basic properties of redundant signals experiments and current statistical procedures used to test the race model inequality during the period between 2011 and 2014. We highlight and discuss several issues concerning study design and the test of the race model inequality, such as inappropriate control of Type I error, insufficient statistical power, wrong treatment of omitted responses or anticipations and the interpretation of violations of the race model inequality. We make detailed recommendations on the design of redundant signals experiments...
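    The inequality check itself is easy to sketch. The following illustration evaluates Miller's bound on simulated response times (all values invented); real tests add a permutation or bootstrap step to control Type I error, one of the issues the tutorial discusses.

```python
import numpy as np

# Minimal illustration of Miller's race model inequality on simulated data:
# the redundant-signal RT distribution F_av(t) must satisfy
# F_av(t) <= F_a(t) + F_v(t) for all t if a race model holds. All response
# times below are invented.

rng = np.random.default_rng(1)
rt_a = rng.normal(350.0, 40.0, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(370.0, 45.0, 200)    # visual-only RTs (ms)
rt_av = rng.normal(310.0, 35.0, 200)   # redundant audio-visual RTs (ms)

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at each time in `t`."""
    return np.mean(sample[:, None] <= t, axis=0)

t_grid = np.percentile(np.concatenate([rt_a, rt_v, rt_av]), np.arange(5, 100, 5))
violation = ecdf(rt_av, t_grid) - (ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))

# Positive values indicate evidence against the race model at that t.
print(f"largest violation across the grid: {violation.max():.3f}")
```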

  3. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01-01

    Problems related to metal fatigue modelling and testing are treated here in four different papers. In the first paper different views of the subject are summarised in a literature survey. In the second paper a new model for fatigue life is investigated. Experimental results are established which are promising for further development of the model. In the third paper a method is presented that generates a stochastic process suitable for fatigue testing. The process is designed to resemble certain fatigue-related features of service life processes. In the fourth paper fatigue problems in transport vibrations are treated.

  4. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  5. Bayes Factor Covariance Testing in Item Response Models.

    Science.gov (United States)

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.

  6. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  7. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

    Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance and new features. Central ... This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that includes several novel steps, such as techniques for analyzing the gap between requirements and tool capabilities. The method was verified with good results in two case studies for selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known...

  8. Systematic studies for medium-heavy even-even nuclei

    International Nuclear Information System (INIS)

    Chen, Y.; Zhao, Y.M.; Chen, J.Q.

    1995-01-01

    The systematics for the excitation energies of the ground, β, and γ bands are presented using the empirical total np interaction V_NP. Some regularities found in the previous studies are tested by the systematics in the V_NP schemes. The systematics of the β and γ bands are presented in detail. Elegant regularities are observed for the excitation energies. The correlation phenomenon of the general behavior among different bands within each major shell is pointed out.

  9. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Full Text Available Model criticism is an important stage of model building, and goodness-of-fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to criticize the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure which is yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixture test and study its size and power using Monte Carlo simulations.
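    The building block of such portmanteau tests is a weighted sum of squared residual autocorrelations. The sketch below computes a Ljung-Box-type statistic on simulated residuals; the mixed test described above additionally uses partial autocorrelations, omitted here for brevity.

```python
import numpy as np

# Ljung-Box-type portmanteau statistic from residual autocorrelations, the
# basic ingredient of the tests above. The residuals are simulated white
# noise; the mixed test also uses partial autocorrelations (omitted here).

rng = np.random.default_rng(2)
resid = rng.normal(size=500)     # residuals of a hypothetical fitted model
n, m = len(resid), 10            # sample size and number of lags tested

# Sample autocorrelation at lags 1..m (Pearson form, adequate for a sketch).
r = np.array([np.corrcoef(resid[:-k], resid[k:])[0, 1] for k in range(1, m + 1)])
q_lb = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, m + 1)))

# Under an adequate ARMA(p, q) fit, Q is approximately chi-square with
# m - p - q degrees of freedom; large Q signals unexplained dependence.
print(f"Ljung-Box Q over {m} lags: {q_lb:.2f}")
```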

  10. Testing of materials and scale models for impact limiters

    International Nuclear Information System (INIS)

    Maji, A.K.; Satpathi, D.; Schryer, H.L.

    1991-01-01

    Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and the design and testing of scale models made out of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation, and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper.

  11. Horizontal crash testing and analysis of model flatrols

    International Nuclear Information System (INIS)

    Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    To assess the behaviour of a full-scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full-scale demonstration. (author)

  12. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  13. Collaborative testing of turbulence models

    Science.gov (United States)

    Bradshaw, P.

    1992-12-01

    This project, funded by AFOSR, ARO, NASA, and ONR, was run by the writer with Profs. Brian E. Launder, University of Manchester, England, and John L. Lumley, Cornell University. Statistical data on turbulent flows, from lab. experiments and simulations, were circulated to modelers throughout the world. This is the first large-scale project of its kind to use simulation data. The modelers returned their predictions to Stanford, for distribution to all modelers and to additional participants ('experimenters')--over 100 in all. The object was to obtain a consensus on the capabilities of present-day turbulence models and identify which types most deserve future support. This was not completely achieved, mainly because not enough modelers could produce results for enough test cases within the duration of the project. However, a clear picture of the capabilities of various modeling groups has appeared, and the interaction has been helpful to the modelers. The results support the view that Reynolds-stress transport models are the most accurate.

  14. Rapid antigen group A streptococcus test to diagnose pharyngitis: a systematic review and meta-analysis.

    Directory of Open Access Journals (Sweden)

    Emily H Stewart

    Full Text Available BACKGROUND: Pharyngitis management guidelines include estimates of the test characteristics of rapid antigen streptococcus tests (RAST) using a non-systematic approach. OBJECTIVE: To examine the sensitivity and specificity, and sources of variability, of RAST for diagnosing group A streptococcal (GAS) pharyngitis. DATA SOURCES: MEDLINE, Cochrane Reviews, Centre for Reviews and Dissemination, Scopus, SciELO, CINAHL, guidelines, 2000-2012. STUDY SELECTION: Culture as reference standard, all languages. DATA EXTRACTION AND SYNTHESIS: Study characteristics, quality. MAIN OUTCOME(S) AND MEASURE(S): Sensitivity, specificity. RESULTS: We included 59 studies encompassing 55,766 patients. Forty-three studies (18,464 patients) fulfilled the higher quality definition (at least 50 patients, prospective data collection, and no significant biases) and 16 (35,634 patients) did not. For the higher quality immunochromatographic methods in children (10,325 patients), heterogeneity was high for sensitivity (inconsistency [I²] 88%) and specificity (I² 86%). For enzyme immunoassay in children (342 patients), the pooled sensitivity was 86% (95% CI, 79-92%) and the pooled specificity was 92% (95% CI, 88-95%). For the higher quality immunochromatographic methods in the adult population (1,216 patients), the pooled sensitivity was 91% (95% CI, 87 to 94%) and the pooled specificity was 93% (95% CI, 92 to 95%); however, heterogeneity was modest for sensitivity (I² 61%) and specificity (I² 72%). For enzyme immunoassay in the adult population (333 patients), the pooled sensitivity was 86% (95% CI, 81-91%) and the pooled specificity was 97% (95% CI, 96 to 99%); however, heterogeneity was high for sensitivity and specificity (both I² 88%). CONCLUSIONS: RAST immunochromatographic methods appear to be very sensitive and highly specific to diagnose group A streptococcal pharyngitis among adults but not in children. We could not identify sources of variability among higher quality studies. The

  15. Comparison between the Lactation Model and the Test-Day Model ...

    African Journals Online (AJOL)

    ARC-IRENE

    National Genetic Evaluation, using a Fixed Regression Test-day Model (TDM). This comparison is made for Ayrshire, Guernsey, Holstein and Jersey cows participating in the South African Dairy Animal Improvement Scheme. Specific differences between the two models were documented, with differences in statistical...

  16. Reliability of physical examination tests for the diagnosis of knee disorders: Evidence from a systematic review.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Desmeules, François

    2016-12-01

    Clinicians often rely on physical examination tests to guide them in the diagnostic process of knee disorders. However, reliability of these tests is often overlooked and may influence the consistency of results and overall diagnostic validity. Therefore, the objective of this study was to systematically review evidence on the reliability of physical examination tests for the diagnosis of knee disorders. A structured literature search was conducted in databases up to January 2016. Included studies needed to report reliability measures of at least one physical test for any knee disorder. Methodological quality was evaluated using the QAREL checklist. A qualitative synthesis of the evidence was performed. Thirty-three studies were included with a mean QAREL score of 5.5 ± 0.5. Based on low to moderate quality evidence, the Thessaly test for meniscal injuries reached moderate inter-rater reliability (k = 0.54). Based on moderate to excellent quality evidence, the Lachman for anterior cruciate ligament injuries reached moderate to excellent inter-rater reliability (k = 0.42 to 0.81). Based on low to moderate quality evidence, the Tibiofemoral Crepitus, Joint Line and Patellofemoral Pain/Tenderness, Bony Enlargement and Joint Pain on Movement tests for knee osteoarthritis reached fair to excellent inter-rater reliability (k = 0.29 to 0.93). Based on low to moderate quality evidence, the Lateral Glide, Lateral Tilt, Lateral Pull and Quality of Movement tests for patellofemoral pain reached moderate to good inter-rater reliability (k = 0.49 to 0.73). Many physical tests appear to reach good inter-rater reliability, but this is based on low-quality and conflicting evidence. High-quality research is required to evaluate the reliability of knee physical examination tests. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. A systematic review of models to predict recruitment to multicentre clinical trials

    Directory of Open Access Journals (Sweden)

    Cook Andrew

    2010-07-01

    Full Text Available Abstract Background Less than one third of publicly funded trials managed to recruit according to their original plan, often resulting in requests for additional funding and/or time extensions. The aim was to identify models which might be useful to a major public funder of randomised controlled trials when estimating likely time requirements for recruiting trial participants. The requirements of a useful model were identified as: usability, grounding in experience, the ability to reflect time trends, accounting for centre-level recruitment, and contribution to a commissioning decision. Methods A systematic review of English language articles using MEDLINE and EMBASE. Search terms included: randomised controlled trial, patient, accrual, predict, enrol, models, statistical; Bayes Theorem; Decision Theory; Monte Carlo Method and Poisson. Only studies discussing prediction of recruitment to trials using a modelling approach were included. Information was extracted from articles by one author, and checked by a second, using a pre-defined form. Results Out of 326 identified abstracts, only 8 met all the inclusion criteria. Across these 8 studies, five major classes of model are discussed: the unconditional model, the conditional model, the Poisson model, Bayesian models and Monte Carlo simulation of Markov models. None of these meet all the pre-identified needs of the funder. Conclusions To meet the needs of a number of research programmes, a new model is required as a matter of importance. Any model chosen should be validated against both retrospective and prospective data, to ensure the predictions it gives are superior to those currently used.
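    To make the Poisson class of model concrete, the sketch below simulates weekly recruitment across a fixed number of centres with an assumed common rate and reads off the distribution of time to target. All numbers are illustrative and none are drawn from the review; real models add centre opening times and rate heterogeneity.

```python
import numpy as np

# Monte Carlo sketch of a Poisson recruitment model: each week, the open
# centres together recruit a Poisson number of patients. Centre count,
# rate, and target are invented.

rng = np.random.default_rng(3)
n_centres, rate = 20, 0.8    # centres and patients per centre per week
target = 600                 # required sample size

def weeks_to_target(n_sim=2000):
    weeks = []
    for _ in range(n_sim):
        total, t = 0, 0
        while total < target:
            total += rng.poisson(rate * n_centres)
            t += 1
        weeks.append(t)
    return np.array(weeks)

w = weeks_to_target()
print(f"median: {np.median(w):.0f} weeks, "
      f"90th percentile: {np.percentile(w, 90):.0f} weeks")
```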

  18. Genetics of borderline personality disorder: systematic review and proposal of an integrative model.

    Science.gov (United States)

    Amad, Ali; Ramoz, Nicolas; Thomas, Pierre; Jardri, Renaud; Gorwood, Philip

    2014-03-01

    Borderline personality disorder (BPD) is one of the most common mental disorders and is characterized by a pervasive pattern of emotional lability, impulsivity, interpersonal difficulties, identity disturbances, and disturbed cognition. Here, we performed a systematic review of the literature concerning the genetics of BPD, including familial and twin studies, association studies, and gene-environment interaction studies. Moreover, meta-analyses were performed when at least two case-control studies testing the same polymorphism were available. For each gene variant, a pooled odds ratio (OR) was calculated using fixed or random effects models. Familial and twin studies largely support the potential role of a genetic vulnerability at the root of BPD, with an estimated heritability of approximately 40%. Moreover, there is evidence for both gene-environment interactions and correlations. However, association studies for BPD are sparse, making it difficult to draw clear conclusions. According to our meta-analysis, no significant associations were found for the serotonin transporter gene, the tryptophan hydroxylase 1 gene, or the serotonin 1B receptor gene. We hypothesize that such a discrepancy (negative association studies but high heritability of the disorder) could be understandable through a paradigm shift, in which "plasticity" genes (rather than "vulnerability" genes) would be involved. Such a framework postulates a balance between positive and negative events, which interact with plasticity genes in the genesis of BPD. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Utility of a Systematic Approach to Teaching Photographic Nasal Analysis to Otolaryngology Residents.

    Science.gov (United States)

    Robitschek, Jon; Dresner, Harley; Hilger, Peter

    2017-12-01

    Photographic nasal analysis constitutes a critical step along the path toward accurate diagnosis and precise surgical planning in rhinoplasty. The learned process by which one assesses photographs, analyzes relevant anatomical landmarks, and generates a global view of the nasal aesthetic is less widely described. To discern the common pitfalls in performing photographic nasal analysis and to quantify the utility of a systematic approach model in teaching photographic nasal analysis to otolaryngology residents. This prospective observational study included 20 participants from a university-based otolaryngology residency program. The control and intervention groups underwent baseline graded assessment of 3 patients. The intervention group received instruction on a systematic approach model for nasal analysis, and both groups underwent postintervention testing at 10 weeks. Data were collected from October 1, 2015, through June 1, 2016. A 10-minute, 11-slide presentation provided instruction on a systematic approach to nasal analysis to the intervention group. Graded photographic nasal analysis using a binary 18-point system. The 20 otolaryngology residents (15 men and 5 women; age range, 24-34 years) were adept at mentioning dorsal deviation and dorsal profile with focused descriptions of tip angle and contour. Areas commonly omitted by residents included verification of the Frankfort plane, position of the lower lateral crura, radix position, and ratio of the ala to tip lobule. The intervention group demonstrated immediate improvement after instruction on the teaching model, with the mean (SD) postintervention test score doubling compared with their baseline performance (7.5 [2.7] vs 10.3 [2.5]; P Otolaryngology residents demonstrated proficiency at incorporating nasal deviation, tip angle, and dorsal profile contour into their nasal analysis. They often omitted verification of the Frankfort plane, position of lower lateral crura, radix depth, and ala-to-tip lobule

  20. Diagnostic accuracy of tests to detect hepatitis B surface antigen: a systematic review of the literature and meta-analysis

    Directory of Open Access Journals (Sweden)

    Ali Amini

    2017-11-01

    Full Text Available Abstract Background Chronic hepatitis B virus (HBV) infection is characterised by the persistence of hepatitis B surface antigen (HBsAg). Expanding HBV diagnosis and treatment programmes into low resource settings will require high quality but inexpensive rapid diagnostic tests (RDTs) in addition to laboratory-based enzyme immunoassays (EIAs) to detect HBsAg. The purpose of this review is to assess the clinical accuracy of available diagnostic tests to detect HBsAg to inform recommendations on testing strategies in 2017 WHO hepatitis testing guidelines. Methods The systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines using 9 databases. Two reviewers independently extracted data according to a pre-specified plan and evaluated study quality. Meta-analysis was performed. HBsAg diagnostic accuracy of rapid diagnostic tests (RDTs) was compared to enzyme immunoassay (EIA) and nucleic-acid test (NAT) reference standards. Subanalyses were performed to determine accuracy among brands, HIV status and specimen type. Results Of the 40 studies that met the inclusion criteria, 33 compared RDTs and/or EIAs against EIAs and 7 against NATs as reference standards. Thirty studies assessed diagnostic accuracy of 33 brands of RDTs in 23,716 individuals from 23 countries using EIA as the reference standard. The pooled sensitivity and specificity were 90.0% (95% CI: 89.1, 90.8) and 99.5% (95% CI: 99.4, 99.5), respectively, but accuracy varied widely among brands. Accuracy did not differ significantly whether serum, plasma, venous or capillary whole blood was used. Pooled sensitivity of RDTs in 5 studies of HIV-positive persons was lower at 72.3% (95% CI: 67.9, 76.4) compared to that in HIV-negative persons, but specificity remained high. Five studies evaluated 8 EIAs against a chemiluminescence immunoassay reference standard with a pooled sensitivity and specificity of 88.9% (95% CI: 87.0, 90.6) and

  1. Transonic control effectiveness for full and partial span elevon configurations on a 0.0165 scale model space shuttle orbiter tested in the LaRC 8-foot transonic wind tunnel (LA48)

    Science.gov (United States)

    1977-01-01

    A transonic pressure tunnel test is reported on an early version of the space shuttle orbiter (designated 089B-139) 0.0165 scale model to systematically determine both longitudinal and lateral control effectiveness associated with various combinations of inboard, outboard, and full span wing trailing edge controls. The test was conducted over a Mach number range from 0.6 to 1.08 at angles of attack from -2 deg to 23 deg at 0 deg sideslip.

  2. Conceptual Model for Systematic Construction Waste Management

    OpenAIRE

    Abd Rahim Mohd Hilmi Izwan; Kasim Narimah

    2017-01-01

    Development of the construction industry has generated construction waste, which contributes to environmental issues. Weak compliance with construction waste management, especially on construction sites, has also contributed to the large amounts of waste that end up in landfills and illegal dumping areas. These are signs that construction projects need systematic construction waste management. To date, comprehensive criteria for construction waste management, particularly for const...

  3. Systematics of constant roll inflation

    Science.gov (United States)

    Anguelova, Lilia; Suranyi, Peter; Wijewardhana, L. C. R.

    2018-02-01

    We study constant roll inflation systematically. This is a regime in which the slow roll approximation can be violated. It has long been thought that this approximation is necessary for agreement with observations. However, recently it was understood that there can be inflationary models with a constant, and not necessarily small, rate of roll that are both stable and compatible with the observational constraint n_s ≈ 1. We investigate systematically the condition for such a constant-roll regime. In the process, we find a whole new class of inflationary models, in addition to the known solutions. We show that the new models are stable under scalar perturbations. Finally, we find a part of their parameter space, in which they produce a nearly scale-invariant scalar power spectrum, as needed for observational viability.
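    For reference, one common way of stating the constant-roll condition (a convention from the constant-roll literature, not spelled out in this record) is:

```latex
% One common statement of the constant-roll condition: the inflaton's
% rate of roll \beta is held constant instead of being assumed small.
\ddot{\phi} = \beta H \dot{\phi}, \qquad \beta = \mathrm{const} .
% \beta \to 0 recovers slow roll; \beta = -3 is the ultra-slow-roll limit.
```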

  4. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...
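    For the simplest of the four protocols, the Thurstonian link has a closed form. The sketch below maps an observed proportion correct to d' for the 2-AFC case; the trial counts are invented, and the other protocols (triangle, duo-trio, 3-AFC) require their own psychometric functions.

```python
from math import sqrt
from statistics import NormalDist

# Thurstonian link for the 2-AFC protocol: the proportion correct p_c
# relates to the sensory difference d' by p_c = Phi(d'/sqrt(2)). The trial
# counts are invented; the other protocols need their own link functions.

z = NormalDist().inv_cdf   # standard normal quantile function

def dprime_2afc(p_correct: float) -> float:
    """Invert p_c = Phi(d'/sqrt(2)) to get d' from 2-AFC data."""
    return sqrt(2.0) * z(p_correct)

correct, trials = 78, 100
pc = correct / trials
print(f"p_c = {pc:.2f}  ->  d' = {dprime_2afc(pc):.2f}")
```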

  5. Radiation Belt Test Model

    Science.gov (United States)

    Freeman, John W.

    2000-10-01

    Rice University has developed a dynamic model of the Earth's radiation belts based on real-time, data-driven boundary conditions and full adiabaticity. The Radiation Belt Test Model (RBTM) successfully replicates the major features of storm-time behavior of energetic electrons: sudden-commencement-induced main phase dropout and recovery phase enhancement. It is the only known model to accomplish the latter. The RBTM shows the extent to which new energetic electrons introduced to the magnetosphere near the geostationary orbit drift inward due to relaxation of the magnetic field. It also shows the effects of substorm-related rapid motion of magnetotail field lines for which the 3rd adiabatic invariant is violated. The radial extent of this violation is seen to be sharply delineated to a region outside of 5 Re, although this distance is determined by the Hilmer-Voigt magnetic field model used by the RBTM. The RBTM appears to provide an excellent platform on which to build parameterized refinements to compensate for unknown acceleration processes inside 5 Re, where adiabaticity is seen to hold. Moreover, built within the framework of the MSFM, it offers the prospect of an operational forecast model for MeV electrons.

  6. Effects of waveform model systematics on the interpretation of GW150914

    OpenAIRE

    Abbott, B P; Abbott, R; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N

    2017-01-01

    PAPER\\ud Effects of waveform model systematics on the interpretation of GW150914\\ud B P Abbott1, R Abbott1, T D Abbott2, M R Abernathy3, F Acernese4,5, K Ackley6, C Adams7, T Adams8, P Addesso9,144, R X Adhikari1, V B Adya10, C Affeldt10, M Agathos11, K Agatsuma11, N Aggarwal12, O D Aguiar13, L Aiello14,15, A Ain16, P Ajith17, B Allen10,18,19, A Allocca20,21, P A Altin22, A Ananyeva1, S B Anderson1, W G Anderson18, S Appert1, K Arai1, M C Araya1, J S Areeda23, N Arnaud24, K G Arun25, S Ascenz...

  7. Building and Testing a Statistical Shape Model of the Human Ear Canal

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Larsen, Rasmus; Laugesen, Søren

    2002-01-01

    Today the design of custom in-the-ear hearing aids is based on personal experience and skills and not on a systematic description of the variation of the shape of the ear canal. In this paper it is described how a dense surface point distribution model of the human ear canal is built based on a t...
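    The core of such a point distribution model is a principal component analysis of aligned landmark vectors. The sketch below uses random stand-in shapes (real use would start from registered ear-canal surfaces) to show the mean-plus-modes construction.

```python
import numpy as np

# Point distribution model sketch: PCA on aligned landmark vectors gives a
# mean shape plus orthogonal modes of variation. The "shapes" here are
# random stand-ins; real use starts from registered ear-canal surfaces.

rng = np.random.default_rng(4)
n_shapes, n_points = 30, 500
shapes = rng.normal(size=(n_shapes, n_points * 3))  # rows: x1,y1,z1,x2,...

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
_, s, vt = np.linalg.svd(centered, full_matrices=False)
var = s ** 2 / (n_shapes - 1)      # variance captured by each mode

# Synthesise a new shape as the mean plus a weighted sum of the first modes.
b = np.array([2.0, -1.0])          # mode weights in standard-deviation units
new_shape = mean_shape + (b * np.sqrt(var[:2])) @ vt[:2]

print(f"variance explained by 2 modes: {var[:2].sum() / var.sum():.1%}")
```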

  8. KiDS-450: testing extensions to the standard cosmological model

    Science.gov (United States)

    Joudaki, Shahab; Mead, Alexander; Blake, Chris; Choi, Ami; de Jong, Jelte; Erben, Thomas; Fenech Conti, Ian; Herbonnet, Ricardo; Heymans, Catherine; Hildebrandt, Hendrik; Hoekstra, Henk; Joachimi, Benjamin; Klaes, Dominik; Köhlinger, Fabian; Kuijken, Konrad; McFarland, John; Miller, Lance; Schneider, Peter; Viola, Massimo

    2017-10-01

    We test extensions to the standard cosmological model with weak gravitational lensing tomography using 450 deg² of imaging data from the Kilo Degree Survey (KiDS). In these extended cosmologies, which include massive neutrinos, non-zero curvature, evolving dark energy, modified gravity and running of the scalar spectral index, we also examine the discordance between KiDS and cosmic microwave background (CMB) measurements from Planck. The discordance between the two data sets is largely unaffected by a more conservative treatment of the lensing systematics and the removal of angular scales most sensitive to non-linear physics. The only extended cosmology that simultaneously alleviates the discordance with Planck and is at least moderately favoured by the data includes evolving dark energy with a time-dependent equation of state (in the form of the w0-wa parametrization). In this model, the respective S_8 = σ_8 √(Ω_m/0.3) constraints agree at the 1σ level, and there is 'substantial concordance' between the KiDS and Planck data sets when accounting for the full parameter space. Moreover, the Planck constraint on the Hubble constant is wider than in Λ cold dark matter (ΛCDM) and in agreement with the Riess et al. (2016) direct measurement of H0. The dark energy model is moderately favoured as compared to ΛCDM when combining the KiDS and Planck measurements, and marginalized constraints in the w0-wa plane are discrepant with a cosmological constant at the 3σ level. KiDS further constrains the sum of neutrino masses to less than 4.0 eV (95% CL), finds no preference for time- or scale-dependent modifications to the metric potentials, and is consistent with flatness and no running of the spectral index.

  9. Improved animal models for testing gene therapy for atherosclerosis.

    Science.gov (United States)

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A

    2014-04-01

    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long

  10. Measurement of physical performance by field tests in programs of cardiac rehabilitation: a systematic review and meta-analysis.

    Science.gov (United States)

    Travensolo, Cristiane; Goessler, Karla; Poton, Roberto; Pinto, Roberta Ramos; Polito, Marcos Doederlein

    2018-04-13

    The literature concerning the effects of cardiac rehabilitation (CR) on field test results is inconsistent. The aim was to perform a systematic review with meta-analysis of field test results after programs of CR. Studies published in the PubMed and Web of Science databases until May 2016 were analyzed. The standardized difference in means corrected for bias (Hedges' g) was used as the effect size (g) to measure the amount of change in field test performance after the CR period. Potential differences between subgroups were analyzed by Q-test based on ANOVA. Fifteen studies published between 1996 and 2016 were included in the review, covering 932 patients with ages ranging from 54.4 to 75.3 years. Fourteen studies used the six-minute walk test to evaluate exercise capacity and one study used the Shuttle Walk Test. The random-effects Hedges' g was 0.617 (P<0.001), representing an improvement of 20% in field test performance after CR. The meta-regression showed a significant association (P=0.01) with aerobic exercise duration, i.e., for each 1-min increase in aerobic exercise duration, there is a 0.02 increase in effect size for performance in the field test. Field tests can detect physical change after CR, and longer aerobic exercise duration during CR was associated with a better result. Copyright © 2018 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
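    The effect size used above can be reproduced from group summary statistics. The sketch below computes a bias-corrected standardized mean difference (Hedges' g) from invented walk-distance summaries; it is a generic illustration, not a recomputation of the review's estimate.

```python
from math import sqrt

# Bias-corrected standardized mean difference (Hedges' g) from group
# summaries. The walk-distance numbers below are invented.

m1, s1, n1 = 420.0, 80.0, 40   # post-rehabilitation mean, SD, n (metres)
m2, s2, n2 = 365.0, 75.0, 40   # baseline mean, SD, n

s_pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / s_pooled                     # Cohen's d
j = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)  # small-sample correction factor
print(f"d = {d:.3f}, J = {j:.4f}, Hedges' g = {j * d:.3f}")
```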

  11. Interventions to increase testing, linkage to care and treatment of hepatitis C virus (HCV) infection among people in prisons: A systematic review.

    Science.gov (United States)

    Kronfli, Nadine; Linthwaite, Blake; Kouyoumdjian, Fiona; Klein, Marina B; Lebouché, Bertrand; Sebastiani, Giada; Cox, Joseph

    2018-04-28

    While the burden of chronic hepatitis C virus (HCV) infection is significantly higher among people in prisons compared to the general population, testing and treatment uptake remain suboptimal. The aim of this systematic review was to synthesize evidence on the effectiveness of interventions to increase HCV testing, linkage to care and treatment uptake among people in prisons. We searched Medline (Ovid 1996-present), Embase (Ovid 1996-present), and the Cochrane Central Register of Controlled Trials for English language articles published between January 2007 and November 2017. Studies evaluating interventions to enhance HCV testing, linkage to care and treatment uptake for people in prison were included. Two independent reviewers evaluated articles selected for full-text review. Disagreements were resolved by consensus. A total of 475 unique articles were identified, 29 were eligible for full text review, and six studies were included. All but one study was conducted in the pre-direct-acting antiviral (DAA) era; no studies were conducted in low- or middle-income countries. Of the six studies, all but one focused on testing. Only two were randomised controlled trials; the remaining were single arm studies. Interventions to enhance HCV testing in prison settings included combination risk-based and birth-cohort screening strategies, on-site nurse-led opt-in screening clinics with pre-test counselling and education, and systematic dried blood spot testing. All interventions increased HCV testing, but risk of study bias was high in all studies. Interventions to enhance linkage to care included facilitated referral for HCV assessment and scheduling of specialist appointments; however, risk of study bias was critical. There is a lack of recent data on interventions to improve the HCV care cascade in people in prisons. With the introduction of short-course, well-tolerated DAAs, rigorous controlled studies evaluating interventions to improve testing, linkage and treatment

  12. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
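    The deviation modelled in this note is easy to reproduce numerically. The sketch below averages transmittance over a finite bandpass with a linearly varying molar absorptivity (band shape, slope, and concentrations are illustrative assumptions) and compares the apparent absorbance with the monochromatic Beer-Lambert value.

```python
import numpy as np

# Band-averaged transmittance versus the monochromatic Beer-Lambert
# prediction. Band width, absorptivity slope, and concentrations are
# illustrative; the apparent absorbance error grows with concentration.

eps0, slope = 1.0e4, -200.0       # molar absorptivity (L/mol/cm), slope per nm
path = 1.0                        # path length, cm
wl = np.linspace(-5.0, 5.0, 201)  # wavelength offsets across the bandpass, nm
eps = eps0 + slope * wl           # linearly varying absorptivity over the band

for conc in (1e-5, 5e-5, 1e-4):
    a_true = eps0 * conc * path                      # monochromatic value
    t_band = np.mean(10.0 ** (-eps * conc * path))   # band-averaged T
    a_meas = -np.log10(t_band)
    print(f"A_true={a_true:.3f}  A_meas={a_meas:.5f}  "
          f"error={100.0 * (a_meas - a_true) / a_true:+.3f}%")
```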

  13. Implementing learning organization components in Ardabil Regional Water Company based on Marquardt systematic model

    Directory of Open Access Journals (Sweden)

    Shahram Mirzaie Daryani

    2015-09-01

    Full Text Available The main purpose of this study was to survey the implementation of learning organization characteristics based on the Marquardt systematic model in Ardabil Regional Water Company. Two hundred and four staff (164 employees and 40 authorities) participated in the study. For data collection, the Marquardt questionnaire, whose validity and reliability had been confirmed, was used. The results of the data analysis showed that learning organization characteristics were used at more than the average level in some subsystems of the Marquardt model and that there was a significant difference between the current position and the excellent position in the application of learning organization characteristics. The results of this study can be used to improve the work processes of organizations and institutions.

  14. Correlation Results for a Mass Loaded Vehicle Panel Test Article Finite Element Models and Modal Survey Tests

    Science.gov (United States)

    Maasha, Rumaasha; Towner, Robert L.

    2012-01-01

    High-fidelity Finite Element Models (FEMs) were developed to support a recent test program at Marshall Space Flight Center (MSFC). The FEMs correspond to test articles used for a series of acoustic tests. Modal survey tests were used to validate the FEMs for five acoustic tests (a bare panel and four different mass-loaded panel configurations). An additional modal survey test was performed on the empty test fixture (the orthogrid panel mounting fixture between the reverb and anechoic chambers). Modal survey tests were used to test-validate the dynamic characteristics of the FEMs used for acoustic test excitation. Modal survey testing and subsequent model correlation has validated the natural frequencies and mode shapes of the FEMs. The modal survey test results provide a basis for the analysis models used for acoustic loading response test and analysis comparisons.

  15. Rehabilitation service models for people with physical and/or mental disability living in low- and middle-income countries: A systematic review.

    Science.gov (United States)

    Furlan, Andréa D; Irvin, Emma; Munhall, Claire; Giraldo-Prieto, Mario; Fullerton, Laura; McMaster, Robert; Danak, Shivang; Costante, Alicia; Pitzul, Kristen B; Bhide, Rohit P; Marchenko, Stanislav; Mahood, Quenby; David, Judy A; Flannery, John F; Bayley, Mark

    2018-04-03

    To compare models of rehabilitation services for people with mental and/or physical disability in order to determine optimal models for therapy and interventions in low- to middle-income countries. CINAHL, EMBASE, MEDLINE, CENTRAL, PsycINFO, Business Source Premier, HINARI, CEBHA and PubMed. Systematic reviews, randomized controlled trials and observational studies comparing two or more models of rehabilitation care in any language. Data extraction: standardized forms were used. Methodological quality was assessed using AMSTAR and quality of evidence was assessed using GRADE. Twenty-four systematic reviews which included 578 studies and 202,307 participants were selected. In addition, four primary studies were included to complement the gaps in the systematic reviews. The studies were conducted in various countries. Moderate- to high-quality evidence supports the following models of rehabilitation services: psychological intervention in primary care settings for people with major depression; admission to an inpatient, multidisciplinary, specialized rehabilitation unit for those with recent onset of a severe disabling condition; and outpatient rehabilitation with multidisciplinary care in the community, hospital or home for less severe conditions. However, a model of rehabilitation service that includes early discharge is not recommended for elderly patients with severe stroke, chronic obstructive pulmonary disease, hip fracture and total joint replacement. Models of rehabilitation care in inpatient, multidisciplinary and specialized rehabilitation units are recommended for the treatment of severe conditions with recent onset, as they reduce mortality and the need for institutionalized care, especially among elderly patients, stroke patients, or those with chronic back pain. Results are expected to be generalizable to brain/spinal cord injury and complex fractures.

  16. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

    Full Text Available Dengue is a vector-borne disease recognized as the major arbovirose, with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure, and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single-serotype and multi-serotype models. The multi-serotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant to advise health policy in the future. The limited understanding of the factors which influence dengue transmission, as well as limited data availability, remain important concerns when applying dengue models to real-world decision problems.
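
    As a concrete instance of the single-serotype, vector-host class of deterministic models reviewed above, the sketch below integrates a minimal Ross-Macdonald-style system with an SIR host population and an SI vector population. All parameter values are illustrative assumptions, not estimates from the reviewed studies.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters (assumed, not from the review):
    bh, bv = 0.30, 0.30   # mosquito-to-human and human-to-mosquito transmission rates
    gamma = 1 / 7.0       # human recovery rate (1/infectious period, per day)
    mu_v = 1 / 14.0       # mosquito mortality rate (per day)
    m = 2.0               # mosquitoes per human

    def dengue_rhs(t, y):
        """Single-serotype vector-host model: host SIR, vector SI (proportions)."""
        sh, ih, rh, sv, iv = y
        dsh = -bh * m * iv * sh
        dih = bh * m * iv * sh - gamma * ih
        drh = gamma * ih
        dsv = mu_v - bv * ih * sv - mu_v * sv   # constant vector recruitment
        div = bv * ih * sv - mu_v * iv
        return [dsh, dih, drh, dsv, div]

    y0 = [0.999, 0.001, 0.0, 1.0, 0.0]          # seed a small human outbreak
    sol = solve_ivp(dengue_rhs, (0, 365), y0, max_step=1.0)
    print("peak human prevalence:", sol.y[1].max())
    ```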

  17. Neural systems language: a formal modeling language for the systematic description, unambiguous communication, and automated digital curation of neural connectivity.

    Science.gov (United States)

    Brown, Ramsay A; Swanson, Larry W

    2013-09-01

    Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to describe systematically the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which these data are curated into neuroinformatics databases, where systems-level insights could be synthesized digitally from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to encode and communicate systematically reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.
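
    The record does not reproduce any actual NSyL syntax, so the sketch below is purely hypothetical: it shows how a single connectivity assertion could be encoded as a structured, machine-readable object in the spirit of what the abstract describes. All field names are invented for illustration and are not NSyL terms.

    ```python
    from dataclasses import dataclass, asdict
    import json

    @dataclass(frozen=True)
    class ConnectivityAssertion:
        # Hypothetical fields, not actual NSyL vocabulary.
        source_region: str   # nomenclature-qualified origin of the projection
        target_region: str   # nomenclature-qualified termination site
        nomenclature: str    # atlas/nomenclature the region names refer to
        taxon: str           # animal taxon observed
        technique: str       # experimental or imaging technique
        strength: str        # qualitative connection strength
        reference: str       # report the assertion is curated from

    a = ConnectivityAssertion(
        source_region="CA1", target_region="lateral septal nucleus",
        nomenclature="Swanson-2004", taxon="Rattus norvegicus",
        technique="PHAL anterograde tracing", strength="moderate",
        reference="doi:10.1002/example",  # placeholder, not a real citation
    )
    print(json.dumps(asdict(a), indent=2))  # readable by humans and computers
    ```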

  18. Assessment of perioperative stress in colorectal cancer by use of in vitro cell models: a systematic review

    Directory of Open Access Journals (Sweden)

    Tove Kirkegaard

    2017-11-01

    Full Text Available Background The perioperative period is important for patient outcome. Colorectal cancer surgery can lead to metastatic disease due to release of disseminated tumor cells and the induction of the surgical stress response. To explore the overall effects of surgically-induced changes in serum composition, in vitro model systems are useful. Methods A systematic search in PubMed and EMBASE was performed to identify studies describing in vitro models used to investigate cancer cell growth/proliferation, cell migration, cell invasion and cell death of serum taken pre- and postoperatively from patients undergoing colorectal tumor resection. Results Two authors (MG and TK) independently reviewed 984 studies and identified five studies which fulfilled the inclusion criteria. Disagreements were solved by discussion. All studies investigated cell proliferation and cell invasion, whereas three studies investigated cell migration, and only one study investigated cell death/apoptosis. One study investigated postoperative peritoneal infection due to anastomotic leak, one study investigated the mode of anesthesia (general anesthesia with volatile or intravenous anesthetics), and one study investigated preoperative intervention with granulocyte macrophage colony stimulating factor (GMCSF). In all studies, increased proliferation, cell migration and invasion were demonstrated after surgery. Anesthesia with propofol and intervention with GMCSF significantly reduced postoperative cell proliferation, whereas peritoneal infection enhanced the invasive capability of tumor cells. Conclusion This study suggests that in vitro cell models are useful and reliable tools to explore the effect of surgery on colorectal cancer cell proliferation and metastatic ability. The models should therefore be considered as additional tests to investigate the effects of perioperative interventions.

  19. Systematics of β and γ parameters of O(6)-like nuclei in the interacting boson model

    International Nuclear Information System (INIS)

    Wang Baolin

    1997-01-01

    By comparing quadrupole moments between the interacting boson model (IBM) and the collective model, a simple calculation for the triaxial deformation parameters β and γ in the O(6)-like nuclei is presented, based on the intrinsic frame in the IBM. The systematics of the β and γ are studied. The realistic cases are calculated for the even-even Xe, Ba and Ce isotopes, and the smooth dependences of the strength ratios θ3/κ and the effective charges e2 on the proton and neutron boson numbers Nπ and Nν are discovered.

  20. Interventions to Improve Follow-Up of Laboratory Test Results Pending at Discharge: A Systematic Review.

    Science.gov (United States)

    Whitehead, Nedra S; Williams, Laurina; Meleth, Sreelatha; Kennedy, Sara; Epner, Paul; Singh, Hardeep; Wooldridge, Kathleene; Dalal, Anuj K; Walz, Stacy E; Lorey, Tom; Graber, Mark L

    2018-02-28

    Failure to follow up test results pending at discharge (TPAD) from hospitals or emergency departments is a major patient safety concern. The purpose of this review is to systematically evaluate the effectiveness of interventions to improve follow-up of laboratory TPAD. We conducted literature searches in PubMed, CINAHL, Cochrane, and EMBASE using search terms for relevant health care settings, transition of patient care, laboratory tests, communication, and pending or missed tests. We solicited unpublished studies from the clinical laboratory community and excluded articles that did not address transitions between settings, did not include an intervention, or were not related to laboratory TPAD. We also excluded letters, editorials, commentaries, abstracts, case reports, and case series. Of the 9,592 abstracts retrieved, 8 met the inclusion criteria and reported the successful communication of TPAD. A team member abstracted predetermined data elements from each study, and a senior scientist reviewed the abstraction. Two experienced reviewers independently appraised the quality of each study using published LMBP™ A-6 scoring criteria. We assessed the body of evidence using the A-6 methodology, and the evidence suggested that electronic tools or one-on-one education increased documentation of pending tests in discharge summaries. We also found that automated notifications improved awareness of TPAD. The interventions were supported by suggestive evidence; this type of evidence is below the level of evidence required for LMBP™ recommendations. We encourage additional research into the impact of these interventions on key processes and health outcomes. © 2018 Society of Hospital Medicine.

  1. The value of predicting restriction of fetal growth and compromise of its wellbeing: Systematic quantitative overviews (meta-analysis) of test accuracy literature.

    Science.gov (United States)

    Morris, Rachel K; Khan, Khalid S; Coomarasamy, Aravinthan; Robson, Stephen C; Kleijnen, Jos

    2007-03-08

    Restriction of fetal growth and compromise of fetal wellbeing remain significant causes of perinatal death and childhood disability. At present, there is a lack of scientific consensus about the best strategies for predicting these conditions before birth. Therefore, there is uncertainty about the best management of pregnant women who might have a growth restricted baby. This is likely to be due to a dearth of clear collated information from individual research studies drawn from different sources on this subject. A series of systematic reviews and meta-analyses will be undertaken to determine, among pregnant women, the accuracy of various tests to predict and/or diagnose fetal growth restriction and compromise of fetal wellbeing. We will search Medline, Embase, Cochrane Library, MEDION, citation lists of review articles and eligible primary articles and will contact experts in the field. Independent reviewers will select studies, extract data and assess study quality according to established criteria. Language restrictions will not be applied. Data synthesis will involve meta-analysis (where appropriate), exploration of heterogeneity and publication bias. The project will collate and synthesise the available evidence regarding the value of the tests for predicting restriction of fetal growth and compromise of fetal wellbeing. The systematic overviews will assess the quality of the available evidence, estimate the magnitude of potential benefits, identify those tests with good predictive value and help formulate practice recommendations.

  2. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    Science.gov (United States)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing coverage.
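
    The Uppaal-based tool itself is not shown in this record; the sketch below only illustrates the core idea of deriving an edge-coverage test suite from a state machine model, using an invented toy GUI model and shortest-path prefixes. Converting each sequence to an executable script is the step the paper automates.

    ```python
    from collections import deque

    # Toy GUI state machine: state -> {event: next_state} (invented model).
    FSM = {
        "Home":  {"menu": "Menu", "alarm": "Alarm"},
        "Menu":  {"select": "Dose", "back": "Home"},
        "Dose":  {"confirm": "Home", "back": "Menu"},
        "Alarm": {"dismiss": "Home"},
    }
    INITIAL = "Home"

    def edge_coverage_suite(fsm, initial):
        """One event sequence per transition: reach the transition's source
        state via a shortest event path from the initial state, then fire it."""
        paths, queue = {initial: []}, deque([initial])
        while queue:                      # BFS for shortest paths to every state
            s = queue.popleft()
            for event, t in fsm[s].items():
                if t not in paths:
                    paths[t] = paths[s] + [event]
                    queue.append(t)
        return [paths[s] + [event] for s in fsm for event in fsm[s]]

    for i, test in enumerate(edge_coverage_suite(FSM, INITIAL), 1):
        print(f"test {i}: {' -> '.join(test)}")  # would be emitted as a test script
    ```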

  3. Characteristics of Indigenous primary health care service delivery models: a systematic scoping review.

    Science.gov (United States)

    Harfield, Stephen G; Davy, Carol; McArthur, Alexa; Munn, Zachary; Brown, Alex; Brown, Ngiare

    2018-01-25

    Indigenous populations have poorer health outcomes compared to their non-Indigenous counterparts. The evolution of Indigenous primary health care services arose from mainstream health services being unable to adequately meet the needs of Indigenous communities and Indigenous peoples often being excluded and marginalised from mainstream health services. Part of the solution has been to establish Indigenous specific primary health care services, for and managed by Indigenous peoples. There are a number of reasons why Indigenous primary health care services are more likely than mainstream services to improve the health of Indigenous communities. Their success is partly due to the fact that they often provide comprehensive programs that incorporate treatment and management, prevention and health promotion, as well as addressing the social determinants of health. However, there are gaps in the evidence base including the characteristics that contribute to the success of Indigenous primary health care services in providing comprehensive primary health care. This systematic scoping review aims to identify the characteristics of Indigenous primary health care service delivery models. This systematic scoping review was led by an Aboriginal researcher, using the Joanna Briggs Institute Scoping Review Methodology. All published peer-reviewed and grey literature indexed in PubMed, EBSCO CINAHL, Embase, Informit, Mednar, and Trove databases from September 1978 to May 2015 were reviewed for inclusion. Studies were included if they describe the characteristics of service delivery models implemented within an Indigenous primary health care service. Sixty-two studies met the inclusion criteria. Data were extracted and then thematically analysed to identify the characteristics of Indigenous PHC service delivery models. Culture was the most prominent characteristic underpinning all of the other seven characteristics which were identified - accessible health services, community

  4. KRAS mutation testing of tumours in adults with metastatic colorectal cancer: a systematic review and cost-effectiveness analysis.

    Science.gov (United States)

    Westwood, Marie; van Asselt, Thea; Ramaekers, Bram; Whiting, Penny; Joore, Manuela; Armstrong, Nigel; Noake, Caro; Ross, Janine; Severens, Johan; Kleijnen, Jos

    2014-10-01

    Bowel cancer is the third most common cancer in the UK. Most bowel cancers are initially treated with surgery, but around 17% spread to the liver. When this happens, sometimes the liver tumour can be treated surgically, or chemotherapy may be used to shrink the tumour to make surgery possible. Kirsten rat sarcoma viral oncogene (KRAS) mutations make some tumours less responsive to treatment with biological therapies such as cetuximab. There are a variety of tests available to detect these mutations. These vary in the specific mutations that they detect, the amount of mutation they detect, the number of tumour cells needed, the time to give a result, the error rate and the cost. To compare the performance and cost-effectiveness of KRAS mutation tests in differentiating adults with metastatic colorectal cancer whose metastases are confined to the liver and are unresectable and who may benefit from first-line treatment with cetuximab in combination with standard chemotherapy from those who should receive standard chemotherapy alone. Thirteen databases, including MEDLINE and EMBASE, research registers and conference proceedings were searched to January 2013. Additional data were obtained from an online survey of laboratories participating in the UK National External Quality Assurance Scheme pilot for KRAS mutation testing. A systematic review of the evidence was carried out using standard methods. Randomised controlled trials were assessed for quality using the Cochrane risk of bias tool. Diagnostic accuracy studies were assessed using the QUADAS-2 tool. There were insufficient data for meta-analysis. For accuracy studies we calculated sensitivity and specificity together with 95% confidence intervals (CIs). Survival data were summarised as hazard ratios and tumour response data were summarised as relative risks, with 95% CIs. The health economic analysis considered the long-term costs and quality-adjusted life-years associated with different tests followed by treatment

  5. Cumulative Effects of Concussion History on Baseline Computerized Neurocognitive Test Scores: Systematic Review and Meta-analysis.

    Science.gov (United States)

    Alsalaheen, Bara; Stockdale, Kayla; Pechumer, Dana; Giessing, Alexander; He, Xuming; Broglio, Steven P

    It is unclear whether individuals with a history of single or multiple clinically recovered concussions exhibit worse cognitive performance on baseline testing compared with individuals with no concussion history. To analyze the effects of concussion history on baseline neurocognitive performance using a computerized neurocognitive test. PubMed, CINAHL, and psycINFO were searched in November 2015. The search was supplemented by a hand search of references. Studies were included if participants completed the Immediate Post-concussion Assessment and Cognitive Test (ImPACT) at baseline (ie, preseason) and if performance was stratified by previous history of single or multiple concussions. Systematic review and meta-analysis. Level 2. Sample size, demographic characteristics of participants, as well as performance of participants on verbal memory, visual memory, visual-motor processing speed, and reaction time were extracted from each study. A random-effects pooled meta-analysis revealed that, with the exception of worsened visual memory for those with 1 previous concussion (Hedges g = 0.10), no differences were observed between participants with 1 or multiple concussions compared with participants without previous concussions. With the exception of decreased visual memory based on history of 1 concussion, history of 1 or multiple concussions was not associated with worse baseline cognitive performance.

  6. Aspergillus Polymerase Chain Reaction: Systematic Review of Evidence for Clinical Use in Comparison With Antigen Testing

    Science.gov (United States)

    White, P. Lewis; Wingard, John R.; Bretagne, Stéphane; Löffler, Jürgen; Patterson, Thomas F.; Slavin, Monica A.; Barnes, Rosemary A.; Pappas, Peter G.; Donnelly, J. Peter

    2015-01-01

    Background. Aspergillus polymerase chain reaction (PCR) was excluded from the European Organisation for the Research and Treatment of Cancer/Mycoses Study Group (EORTC/MSG) definitions of invasive fungal disease because of limited standardization and validation. The definitions are being revised. Methods. A systematic literature review was performed to identify analytical and clinical information available on inclusion of galactomannan enzyme immunoassay (GM-EIA) (2002) and β-d-glucan (2008), providing a minimal threshold when considering PCR. Categorical parameters and statistical performance were compared. Results. When incorporated, GM-EIA and β-d-glucan sensitivities and specificities for diagnosing invasive aspergillosis were 81.6% and 91.6%, and 76.9% and 89.4%, respectively. Aspergillus PCR has similar sensitivity and specificity (76.8%–88.0% and 75.0%–94.5%, respectively) and comparable utility. Methodological recommendations and commercial PCR assays assist standardization. Although all tests have limitations, currently, PCR is the only test with independent quality control. Conclusions. We propose that there is sufficient evidence that is at least equivalent to that used to include GM-EIA and β-d-glucan testing, and that PCR is now mature enough for inclusion in the EORTC/MSG definitions. PMID:26113653

  7. Conducting field studies for testing pesticide leaching models

    Science.gov (United States)

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.

  8. Commercial serological antibody detection tests for the diagnosis of pulmonary tuberculosis: a systematic review.

    Directory of Open Access Journals (Sweden)

    Karen R Steingart

    2007-06-01

    Full Text Available BACKGROUND: The global tuberculosis epidemic results in nearly 2 million deaths and 9 million new cases of the disease a year. The vast majority of tuberculosis patients live in developing countries, where the diagnosis of tuberculosis relies on the identification of acid-fast bacilli on unprocessed sputum smears using conventional light microscopy. Microscopy has high specificity in tuberculosis-endemic countries, but modest sensitivity which varies among laboratories (range 20% to 80%). Moreover, the sensitivity is poor for paucibacillary disease (e.g., pediatric and HIV-associated tuberculosis). Thus, the development of rapid and accurate new diagnostic tools is imperative. Immune-based tests are potentially suitable for use in low-income countries as some test formats can be performed at the point of care without laboratory equipment. Currently, dozens of distinct commercial antibody detection tests are sold in developing countries. The question is "do they work?" METHODS AND FINDINGS: We conducted a systematic review to assess the accuracy of commercial antibody detection tests for the diagnosis of pulmonary tuberculosis. Studies from all countries using culture and/or microscopy smear for confirmation of pulmonary tuberculosis were eligible. Studies with fewer than 50 participants (25 patients and 25 control participants) were excluded. In a comprehensive search, we identified 68 studies. The results demonstrate that (1) overall, commercial tests vary widely in performance; (2) sensitivity is higher in smear-positive than smear-negative samples; (3) in studies of smear-positive patients, Anda-TB IgG by enzyme-linked immunosorbent assay shows limited sensitivity (range 63% to 85%) and inconsistent specificity (range 73% to 100%); (4) specificity is higher in healthy volunteers than in patients in whom tuberculosis disease is initially suspected and subsequently ruled out; and (5) there are insufficient data to determine the accuracy of most

  9. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
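
    A common construction for such orthogonal multi-sine inputs assigns each control surface a disjoint set of harmonics of a common base period, with phases scheduled (e.g., Schroeder-style) to keep peak amplitudes low; signals built this way are mutually orthogonal over the base period. The sketch below is a generic illustration under those assumptions, not the actual flight-test design code.

    ```python
    import numpy as np

    T = 20.0                       # common base period, s (assumed)
    fs = 100.0                     # sample rate, Hz (assumed)
    t = np.arange(0, T, 1 / fs)

    def multisine(harmonics, T, t):
        """Sum of sinusoids at k/T Hz with a Schroeder-style phase schedule."""
        K = len(harmonics)
        x = np.zeros_like(t)
        for j, k in enumerate(harmonics):
            phase = -np.pi * j * (j + 1) / K   # spreads phases to lower the peak
            x += np.cos(2 * np.pi * k / T * t + phase)
        return x / np.abs(x).max()             # normalize to unit amplitude

    # Disjoint harmonic sets => inputs are mutually orthogonal over [0, T].
    elevator = multisine([2, 5, 8, 11], T, t)
    aileron  = multisine([3, 6, 9, 12], T, t)
    rudder   = multisine([4, 7, 10, 13], T, t)
    print("orthogonality check:", np.round(elevator @ aileron / len(t), 6))  # ~0
    ```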

  10. Development of dynamic Bayesian models for web application test management

    Science.gov (United States)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
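
    The record describes the approach only at a high level; as a minimal, invented illustration of the two-slice idea, the sketch below rolls a tiny dynamic Bayesian network forward in time: a hidden "defect present" state evolves between time slices, and a noisy "test failed" observation depends on the current state. All probabilities are placeholders.

    ```python
    import numpy as np

    # Hidden state: defect absent (0) / present (1). Placeholder probabilities.
    P_trans = np.array([[0.95, 0.05],    # P(state_t | state_{t-1})
                        [0.20, 0.80]])   # defects tend to persist until fixed
    P_obs = np.array([[0.90, 0.10],      # P(test pass/fail | defect absent)
                      [0.30, 0.70]])     # P(test pass/fail | defect present)

    def filter_step(belief, observed_fail):
        """One DBN filtering step: propagate the belief through the transition
        model, then condition on the test outcome observed in this slice."""
        predicted = belief @ P_trans
        likelihood = P_obs[:, 1] if observed_fail else P_obs[:, 0]
        posterior = predicted * likelihood
        return posterior / posterior.sum()

    belief = np.array([0.9, 0.1])                  # prior over the first slice
    for outcome in [False, True, True, False]:     # observed test results per slice
        belief = filter_step(belief, outcome)
        print(f"P(defect present) = {belief[1]:.3f}")
    ```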

  11. Using humor in systematic desensitization to reduce fear.

    Science.gov (United States)

    Ventis, W L; Higbee, G; Murdock, S A

    2001-04-01

    Effectiveness of systematic desensitization for fear reduction, using humorous hierarchy scenes without relaxation, was tested. Participants were 40 students highly fearful of spiders. Using a 24-item behavioral approach test with an American tarantula, participants were matched on fear level and randomly assigned to 1 of 3 treatment groups: (a) systematic desensitization, (b) humor desensitization, and (c) untreated controls. Each participant was seen for 6 sessions, including pretest and posttest. Analyses of covariance of posttest scores revealed that the 2 treatment groups showed greater reduction in fear than the controls on 3 measures but did not differ from each other. Therefore, humor in systematic desensitization reduced fear as effectively as more traditional desensitization. This finding may have therapeutic applications; however, it may also be applicable in advertising to desensitize fear of a dangerous product, such as cigarettes.

  12. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This report summarizes the findings of a two-year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year effort focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  13. Evaluation of Simulation Models that Estimate the Effect of Dietary Strategies on Nutritional Intake: A Systematic Review.

    Science.gov (United States)

    Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K

    2017-05-01

    Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to

  14. Seal-rotordynamic-coefficient Test Results for a Model SSME ATD-HPFTP Turbine Interstage Seal with and Without a Swirl Brake

    Science.gov (United States)

    Childs, Dara W.; Ramsey, Christopher

    1991-01-01

    The predictions of Scharrer's (1988) theory for rotordynamic coefficients of labyrinth gas seals were compared with measurements for a model SSME Alternate Turbopump Development High Pressure Fuel Turbopump with and without swirl brakes. Using the test apparatus described by Childs et al., tests were conducted with supply pressures up to 18.3 bars and speeds up to 16,000 rpm. Seal back pressure was controlled to provide four pressure ratios at all supply pressures. No measurable difference in leakage was detected for the seal with and without the swirl brakes. Comparisons of the measurement results for the seal without a swirl brake with the Scharrer theory showed that the theory can be used only to provide design guidelines; systematic differences were observed between theory and experiment due to changes in running speed, supply pressure, and pressure ratio.

  15. Seal-rotordynamic-coefficient test results for a model SSME ATD-HPFTP turbine interstage seal with and without a swirl brake

    Science.gov (United States)

    Childs, D. W.; Ramsey, C.

    1991-01-01

    The predictions of Scharrer's (1988) theory for rotordynamic coefficients of labyrinth gas seals were compared with measurements for a model SSME Alternate Turbopump Development High-Pressure Fuel Turbopump with and without swirl brakes. Using the test apparatus described by Childs et al. (1986, 1990), tests were conducted with supply pressures up to 18.3 bars and speeds up to 16,000 rpm. Seal back pressure was controlled to provide four pressure ratios at all supply pressures. No measurable difference in leakage was detected for the seal with and without the swirl brakes. Comparisons of the measurement results for the seal without a swirl brake with the Scharrer theory showed that the theory can be used only to provide design guidelines; systematic differences were observed between theory and experiment due to changes in running speed, supply pressure, and pressure ratio.

  16. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency.

    Science.gov (United States)

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Quality of reporting for Randomized Clinical Trials (RCTs) in oncology was analyzed in several systematic reviews, but, in this setting, there is a paucity of data on outcome definitions and on the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects, for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess the quality of reporting of the description of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify covariates of studies potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, and 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement between the statistical tests reported in the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the adjusted odds ratio (aOR) for the third group compared to the first group was aOR Grp3 = 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and primary endpoints in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always observed. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.

  17. Model tests for prestressed concrete pressure vessels

    International Nuclear Information System (INIS)

    Stoever, R.

    1975-01-01

    Investigations with models of reactor pressure vessels are used to check results of three-dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described and experimental results are presented. (orig.) [de]

  18. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for data converter design, modeling and testing is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  19. Automated Testing of Event-Driven Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning

    may be tested by selecting an interesting input (i.e. a sequence of events) and deciding if a failure occurs when the selected input is applied to the event-driven application under test. Automated testing promises to reduce the workload for developers by automatically selecting interesting inputs and detecting failures. However, it is non-trivial to conduct automated testing of event-driven applications because of, for example, infinite input spaces and the absence of specifications of correct application behavior. In this PhD dissertation, we identify a number of specific challenges when conducting automated testing of event-driven applications, and we present novel techniques for solving these challenges. First, we present an algorithm for stateless model-checking of event-driven applications with partial-order reduction, and we show how this algorithm may be used to systematically test web
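
    As a minimal sketch of this kind of systematic exploration (without the partial-order-reduction machinery), the code below enumerates every event sequence up to a bounded depth against a toy event-driven application and reports the sequences that trigger a failure. The application and its bug are invented.

    ```python
    from itertools import product

    EVENTS = ["click_save", "click_load", "edit"]

    def run(sequence):
        """Toy event-driven app: 'load' after an unsaved 'edit' loses data (bug)."""
        saved, doc, dirty = "", "", False
        for event in sequence:
            if event == "edit":
                doc += "x"; dirty = True
            elif event == "click_save":
                saved = doc; dirty = False
            elif event == "click_load":
                if dirty:
                    raise AssertionError("unsaved edits silently discarded")
                doc = saved

    def explore(max_depth):
        """Systematically apply every event sequence up to max_depth."""
        failures = []
        for depth in range(1, max_depth + 1):
            for seq in product(EVENTS, repeat=depth):
                try:
                    run(seq)
                except AssertionError as err:
                    failures.append((seq, str(err)))
        return failures

    for seq, err in explore(3)[:5]:
        print(" -> ".join(seq), "|", err)   # shortest failure: edit -> click_load
    ```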

  20. Preclinical Testing of Novel Oxytocin Receptor Activators in Models of Autism Phenotypes

    Science.gov (United States)

    2015-11-30

    higher incidence of autism observed in male versus female children. BALB/cByJ Cohort 1 (n = 6 per treatment group; 6-7 weeks of age) was tested for acute... Veenstra-Vanderweele, J., 2011. A systematic review of medical treatments for children with autism spectrum disorders. Pediatrics 127, e1312-e1321. Melis... R.L., Leserman, J., Jarskog, L.F., Penn, D.L., 2011. Intranasal oxytocin reduces psychotic symptoms and improves theory of mind and social perception.

  1. A Hamiltonian viewpoint in the modeling of switching power converters : A systematic modeling procedure of a large class of switching power converters using the Hamiltonian approach

    NARCIS (Netherlands)

    Escobar, Gerardo; Schaft, Arjan J. van der; Ortega, Romeo

    1999-01-01

    In this paper we show how, using the Hamiltonian formalism, we can systematically derive mathematical models that describe the behaviour of a large class of switching power converters, including the "Boost", "Buck", "Buck-Boost", "Čuk" and "Flyback" converters. We follow the approach earlier
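
    The record does not reproduce the derivations, but the "Boost" converter is a convenient member of the class: with state x = (inductor flux φ, capacitor charge q) and Hamiltonian H(x) = φ²/(2L) + q²/(2C), the duty-ratio-averaged dynamics can be written as ẋ = (J(d) − R)∇H(x) + g·Vin, with the interconnection matrix J depending on the duty ratio d. The sketch below integrates that averaged model; component values are assumptions, and this is a textbook-style port-Hamiltonian formulation rather than the authors' exact derivation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Assumed component values, not from the paper.
    L, C, Rload, Vin, d = 1e-3, 100e-6, 10.0, 12.0, 0.5   # H, F, ohm, V, duty ratio

    def boost_ph(t, x):
        """Averaged port-Hamiltonian boost model: xdot = (J(d) - R) dH + g Vin."""
        dH = np.array([x[0] / L, x[1] / C])     # gradient of H = [current, voltage]
        J = np.array([[0.0, -(1 - d)],          # duty-ratio-dependent interconnection
                      [1 - d, 0.0]])
        R = np.array([[0.0, 0.0],               # dissipation: load resistor only
                      [0.0, 1.0 / Rload]])
        g = np.array([1.0, 0.0])                # source voltage drives the inductor
        return (J - R) @ dH + g * Vin

    sol = solve_ivp(boost_ph, (0, 0.05), [0.0, 0.0], max_step=1e-5)
    v_out = sol.y[1, -1] / C
    print(f"steady output ~ {v_out:.1f} V (ideal boost: Vin/(1-d) = {Vin/(1-d):.1f} V)")
    ```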

  2. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Science.gov (United States)

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that have previously been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted into linear models in form; based on this finding, we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
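
    To make the cited role of the Jaccard coefficient concrete, the toy sketch below scores candidate drug-ADR associations by Jaccard similarity between ADR profiles, a simplified neighbor-based profile scheme in the spirit of the methods compared above; the data and weighting are invented, not the authors' general weighted profile method.

    ```python
    def jaccard(a: set, b: set) -> float:
        """Jaccard coefficient |A & B| / |A | B| between two ADR profiles."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # Invented known drug -> ADR profiles.
    known = {
        "drugA": {"nausea", "headache", "rash"},
        "drugB": {"nausea", "dizziness"},
        "drugC": {"rash", "insomnia", "headache"},
    }

    def score_association(query_profile: set, candidate_adr: str) -> float:
        """Score a candidate ADR for a query drug as the Jaccard-weighted
        vote of the known drugs that exhibit that ADR."""
        return sum(jaccard(query_profile, profile)
                   for profile in known.values() if candidate_adr in profile)

    query = {"nausea", "headache"}   # partially observed profile of a new drug
    for adr in ["rash", "dizziness", "insomnia"]:
        print(adr, round(score_association(query, adr), 3))
    ```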

  3. Bio-inspired ``jigsaw''-like interlocking sutures: Modeling, optimization, 3D printing and testing

    Science.gov (United States)

    Malik, I. A.; Mirkhalaf, M.; Barthelat, F.

    2017-05-01

    Structural biological materials such as bone, teeth or mollusk shells draw their remarkable performance from a sophisticated interplay of architectures and weak interfaces. Pushed to the extreme, this concept leads to sutured materials, which contain thin lines with complex geometries. Sutured materials are prominent in nature, and have recently served as bioinspiration for toughened ceramics and glasses. Sutures can generate large deformations, toughness and damping in otherwise all brittle systems and materials. In this study we examine the design and optimization of sutures with a jigsaw puzzle-like geometry, focusing on the non-linear traction behavior generated by the frictional pullout of the jigsaw tabs. We present analytical models which accurately predict the entire pullout response. Pullout strength and energy absorption increase with higher interlocking angles and for higher coefficients of friction, but the associated high stresses in the solid may fracture the tabs. Systematic optimization reveals a counter-intuitive result: the best pullout performance is achieved with interfaces with low coefficient of friction and high interlocking angle. We finally use 3D printing and mechanical testing to verify the accuracy of the models and of the optimization. The models and guidelines we present here can be extended to other types of geometries and sutured materials subjected to other loading/boundary conditions. The nonlinear responses of sutures are particularly attractive to augment the properties and functionalities of inherently brittle materials such as ceramics and glasses.

  4. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
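
    To make the unisim/multisim distinction concrete, the toy sketch below estimates the total systematic variance of an invented observable both ways: one MC run per parameter shifted by 1σ (unisim, shifts added in quadrature) versus many runs with all parameters drawn from their assumed normal distributions (multisim). Everything here is illustrative, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigmas = np.array([0.5, 1.2])      # assumed 1-sigma systematic uncertainties

    def observable(params, n_events=10_000):
        """Toy MC: mean of events whose response shifts linearly with params."""
        shift = 0.8 * params[0] - 0.3 * params[1]
        return rng.normal(loc=shift, scale=1.0, size=n_events).mean()

    nominal = observable(np.zeros(2))

    # Unisim: one run per parameter, varied by +1 sigma; add shifts in quadrature.
    unisim_shifts = [observable(sigmas * np.eye(2)[i]) - nominal for i in range(2)]
    unisim_var = sum(s**2 for s in unisim_shifts)

    # Multisim: every run draws all parameters from their distributions.
    multisim_results = [observable(rng.normal(0.0, sigmas)) for _ in range(200)]
    multisim_var = np.var(multisim_results)

    print(f"unisim   total systematic variance ~ {unisim_var:.3f}")
    print(f"multisim total systematic variance ~ {multisim_var:.3f}")
    # True value: (0.8*0.5)^2 + (0.3*1.2)^2 = 0.29, plus MC statistical noise.
    ```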

  5. A comprehensive, consistent and systematic mathematical model of PEM fuel cells

    International Nuclear Information System (INIS)

    Baschuk, J.J.; Li Xianguo

    2009-01-01

    This paper presents a comprehensive, consistent and systematic mathematical model for PEM fuel cells that can be used as the general formulation for the simulation and analysis of PEM fuel cells. As an illustration, the model is applied to an isothermal, steady-state, two-dimensional PEM fuel cell. Water is assumed to be present either in the gas phase or as a liquid phase in the pores of the polymer electrolyte. The model includes the transport of gas in the gas flow channels, electrode backing and catalyst layers; the transport of water and hydronium in the polymer electrolyte of the catalyst and polymer electrolyte layers; and the transport of electrical current in the solid phase. Water and ion transport in the polymer electrolyte were modeled using the generalized Stefan-Maxwell equations, based on non-equilibrium thermodynamics. Model simulations show that the bulk, convective gas velocity facilitates hydrogen transport from the gas flow channels to the anode catalyst layers, but inhibits oxygen transport. While some of the water required by the anode is supplied by the water produced in the cathode, the majority of water must be supplied by the anode gas phase, making operation with fully humidified reactants necessary. The length of the gas flow channel has a significant effect on the current production of the PEM fuel cell, with a longer channel length having a lower performance relative to a shorter channel length. This lower performance is caused by a greater variation in water content within the longer channel length.

  6. Systematic reviews in bioethics: types, challenges, and value.

    Science.gov (United States)

    McDougall, Rosalind

    2014-02-01

    There has recently been interest in applying the techniques of systematic review to bioethics literature. In this paper, I identify the three models of systematic review proposed to date in bioethics: systematic reviews of empirical bioethics research, systematic reviews of normative bioethics literature, and systematic reviews of reasons. I argue that all three types yield information useful to scholarship in bioethics, yet they also face significant challenges particularly in relation to terminology and time. Drawing on my recent experience conducting a systematic review, I suggest that complete comprehensiveness may not always be an appropriate goal of a literature review in bioethics, depending on the research question. In some cases, all the relevant ideas may be captured without capturing all the relevant literature. I conclude that systematic reviews in bioethics have an important role to play alongside the traditional broadbrush approach to reviewing literature in bioethics.

  7. Which physical examination tests provide clinicians with the most value when examining the shoulder? Update of a systematic review with meta-analysis of individual tests.

    Science.gov (United States)

    Hegedus, Eric J; Goode, Adam P; Cook, Chad E; Michener, Lori; Myer, Cortney A; Myer, Daniel M; Wright, Alexis A

    2012-11-01

    To update our previously published systematic review and meta-analysis by subjecting the literature on shoulder physical examination (ShPE) to careful analysis in order to determine each test's clinical utility. This review is an update of previous work, therefore the terms in the Medline and CINAHL search strategies remained the same, with the exception that the search was confined to the dates November, 2006 through to February, 2012. The previous study dates were 1966 - October, 2006. Further, the original search was expanded, without date restrictions, to include two new databases: EMBASE and the Cochrane Library. The Quality Assessment of Diagnostic Accuracy Studies, version 2 (QUADAS 2) tool was used to critique the quality of each new paper. Where appropriate, data from the prior review and this review were combined to perform meta-analysis using the updated hierarchical summary receiver operating characteristic and bivariate models. Since the publication of the 2008 review, 32 additional studies were identified and critiqued. For subacromial impingement, the meta-analysis revealed that the pooled sensitivity and specificity for the Neer test was 72% and 60%, respectively, for the Hawkins-Kennedy test was 79% and 59%, respectively, and for the painful arc was 53% and 76%, respectively. Also from the meta-analysis, regarding superior labral anterior to posterior (SLAP) tears, the test with the best sensitivity (52%) was the relocation test; the test with the best specificity (95%) was Yergason's test; and the test with the best positive likelihood ratio (2.81) was the compression-rotation test. Regarding new (to this series of reviews) ShPE tests, where meta-analysis was not possible because of lack of sufficient studies or heterogeneity between studies, there are some individual tests that warrant further investigation. A highly specific test (specificity >80%, LR+ ≥ 5.0) from a low bias study is the passive distraction test for a SLAP lesion. This test may
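
    For readers translating the pooled estimates above into clinical utility, positive and negative likelihood ratios follow directly from sensitivity and specificity; the short snippet below computes them for, e.g., the pooled Hawkins-Kennedy figures quoted above.

    ```python
    def likelihood_ratios(sensitivity: float, specificity: float):
        """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
        return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

    # Pooled Hawkins-Kennedy estimates from the review: sens 0.79, spec 0.59.
    lr_pos, lr_neg = likelihood_ratios(0.79, 0.59)
    print(f"Hawkins-Kennedy: LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
    ```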

  8. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with movement of a sand bank or with general subsidence. The scour protection in the tests is made out of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy has been selected because the situation often is that the scour hole starts to generate immediately after the structure has been placed. It is therefore difficult to establish a scour protection at the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure have been used for the test, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on sand bed was 1.6 times the cylinder diameter, 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees, 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present test virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with the protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  9. Automated model-based testing of hybrid systems

    NARCIS (Netherlands)

    Osch, van M.P.W.J.

    2009-01-01

    In automated model-based input-output conformance testing, tests are automatically generated from a specification and automatically executed on an implementation. Input is applied to the implementation and output is observed from the implementation. If the observed output is allowed according to

  10. Quality of systematic reviews in pediatric oncology--a systematic review.

    Science.gov (United States)

    Lundh, Andreas; Knijnenburg, Sebastiaan L; Jørgensen, Anders W; van Dalen, Elvira C; Kremer, Leontien C M

    2009-12-01

    To ensure evidence-based decision making in pediatric oncology, systematic reviews are necessary. The objective of our study was to evaluate the methodological quality of all currently existing systematic reviews in pediatric oncology. We identified eligible systematic reviews through a systematic search of the literature. Data on clinical and methodological characteristics of the included systematic reviews were extracted. The methodological quality of the included systematic reviews was assessed using the overview quality assessment questionnaire, a validated 10-item quality assessment tool. We compared the methodological quality of systematic reviews published in regular journals with that of Cochrane systematic reviews. We included 117 systematic reviews: 99 systematic reviews published in regular journals and 18 Cochrane systematic reviews. The average methodological quality of systematic reviews was low for all ten items, but the quality of Cochrane systematic reviews was significantly higher than that of systematic reviews published in regular journals. On a 1-7 scale, the median overall quality score for all systematic reviews was 2 (range 1-7), with a score of 1 (range 1-7) for systematic reviews in regular journals compared to 6 (range 3-7) for Cochrane systematic reviews, a statistically significant difference. Many of the included systematic reviews had methodological flaws leading to a high risk of bias. While Cochrane systematic reviews were of higher methodological quality than systematic reviews in regular journals, some of them also had methodological problems. Therefore, the methodology of each individual systematic review should be scrutinized before accepting its results.

  11. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust to violations of the classical F-test's assumptions of homogeneity of variances and absence of serial correlation. Under certain null and ...
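
    The paper's exact statistic is not reproduced in this record; as a related illustration of the same goal, the sketch below runs a simultaneous Wald test on all slope parameters using a heteroskedasticity-consistent covariance from statsmodels, which remains valid when the homogeneity-of-variances assumption of the classical F-test fails. Data and model are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    X = sm.add_constant(rng.standard_normal((n, 2)))     # intercept + 2 regressors
    beta = np.array([1.0, 0.5, 0.0])
    y = X @ beta + rng.standard_normal(n) * (1 + np.abs(X[:, 1]))  # heteroskedastic

    res = sm.OLS(y, X).fit(cov_type="HC3")    # robust (sandwich) covariance
    R = np.array([[0.0, 1.0, 0.0],            # jointly test both slopes = 0
                  [0.0, 0.0, 1.0]])
    print(res.wald_test(R, scalar=True))      # Wald chi-square test on all slopes
    ```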

  12. A systematic review of the psychological and social benefits of participation in sport for adults: informing development of a conceptual model of health through sport

    Science.gov (United States)

    2013-01-01

    Background The definition of health incorporates the physical, social and mental domains; however, the Physical Activity (PA) guidelines do not address social health. Furthermore, there is insufficient evidence about the levels or types of PA associated specifically with psychological health. This paper first presents the results of a systematic review of the psychological and social health benefits of participation in sport by adults. Secondly, the information arising from the systematic review has been used to develop a conceptual model of Health through Sport. Methods A systematic review of 14 electronic databases was conducted in June 2012, and studies published since 1990 were considered for inclusion. Studies that addressed mental and/or social health benefits from participation in sport were included. Results A total of 3668 publications were initially identified, of which 11 met the selection criteria. There were many different psychological and social health benefits reported, the most common being wellbeing and reduced distress and stress. Sport may be associated with improved psychosocial health in addition to improvements attributable to participation in PA. Specifically, club-based or team-based sport seems to be associated with improved health outcomes compared to individual activities, due to the social nature of the participation. Notwithstanding this, individuals who prefer to participate in sport by themselves can still derive mental health benefits which can enhance the development of true self-awareness and personal growth which is essential for social health. A conceptual model, Health through Sport, is proposed. The model depicts the relationship between psychological, psychosocial and social health domains, and their positive associations with sport participation, as reported in the literature. However, it is acknowledged that the capacity to determine the existence and direction of causal links between participation and health is

  13. A systematic review of the psychological and social benefits of participation in sport for adults: informing development of a conceptual model of health through sport.

    Science.gov (United States)

    Eime, Rochelle M; Young, Janet A; Harvey, Jack T; Charity, Melanie J; Payne, Warren R

    2013-12-07

    The definition of health incorporates the physical, social and mental domains; however, the Physical Activity (PA) guidelines do not address social health. Furthermore, there is insufficient evidence about the levels or types of PA associated specifically with psychological health. This paper first presents the results of a systematic review of the psychological and social health benefits of participation in sport by adults. Secondly, the information arising from the systematic review has been used to develop a conceptual model of Health through Sport. A systematic review of 14 electronic databases was conducted in June 2012, and studies published since 1990 were considered for inclusion. Studies that addressed mental and/or social health benefits from participation in sport were included. A total of 3668 publications were initially identified, of which 11 met the selection criteria. Many different psychological and social health benefits were reported, the most common being wellbeing and reduced distress and stress. Sport may be associated with improved psychosocial health in addition to improvements attributable to participation in PA. Specifically, club-based or team-based sport seems to be associated with improved health outcomes compared to individual activities, due to the social nature of the participation. Notwithstanding this, individuals who prefer to participate in sport by themselves can still derive mental health benefits, which can enhance the development of true self-awareness and personal growth that is essential for social health. A conceptual model, Health through Sport, is proposed. The model depicts the relationship between psychological, psychosocial and social health domains, and their positive associations with sport participation, as reported in the literature. However, it is acknowledged that the capacity to determine the existence and direction of causal links between participation and health is limited by the cross

  14. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The
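
    For orientation, a vector error-correction model in standard notation (not necessarily the authors') for an n-dimensional series y_t takes the form

        \Delta y_t = \Pi\, y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i\, \Delta y_{t-i} + \varepsilon_t,

    where cointegration corresponds to the matrix \Pi having reduced rank; Wald, likelihood ratio, and Lagrange multiplier statistics on the exactly identified parameters that move \Pi away from that restriction therefore serve as direct cointegration tests.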

  15. A Systematic Review of Cost-Effectiveness Models in Type 1 Diabetes Mellitus.

    Science.gov (United States)

    Henriksson, Martin; Jindal, Ramandeep; Sternhufvud, Catarina; Bergenheim, Klas; Sörstadius, Elisabeth; Willis, Michael

    2016-06-01

    Critiques of cost-effectiveness modelling in type 1 diabetes mellitus (T1DM) are scarce and are often undertaken in combination with type 2 diabetes mellitus (T2DM) models. However, T1DM is a separate disease, and it is therefore important to appraise modelling methods in T1DM. This review identified published economic models in T1DM and provided an overview of the characteristics and capabilities of available models, thus enabling a discussion of best-practice modelling approaches in T1DM. A systematic review of Embase®, MEDLINE®, MEDLINE® In-Process, and NHS EED was conducted to identify available models in T1DM. Key conferences and health technology assessment (HTA) websites were also reviewed. The characteristics of each model (e.g. model structure, simulation method, handling of uncertainty, incorporation of treatment effect, data for risk equations, and validation procedures, based on information in the primary publication) were extracted, with a focus on model capabilities. We identified 13 unique models. Overall, the included studies varied greatly in scope as well as in the quality and quantity of information reported, but six of the models (Archimedes, CDM [Core Diabetes Model], CRC DES [Cardiff Research Consortium Discrete Event Simulation], DCCT [Diabetes Control and Complications Trial], Sheffield, and EAGLE [Economic Assessment of Glycaemic control and Long-term Effects of diabetes]) were the most rigorous and thoroughly reported. Most models were Markov based, and cohort and microsimulation methods were equally common. All of the more comprehensive models employed microsimulation methods. Model structure varied widely, with the more holistic models providing a comprehensive approach to microvascular and macrovascular events, as well as including adverse events. The majority of studies reported a lifetime horizon, used a payer perspective, and had the capability for sensitivity analysis. Several models have been developed that provide useful

  16. Model Test Bed for Evaluating Wave Models and Best Practices for Resource Assessment and Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Yang, Zhaoqing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Wang, Taiping [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Gunawan, Budi [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Dallman, Ann Renee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies

    2016-03-01

    A wave model test bed is established to benchmark, test and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101 Ed. 1.0 (©2015). Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically what source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.
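
    One headline resource parameter in IEC TS 62600-101 is the omnidirectional wave power. In deep water it is commonly estimated from two spectral parameters as (the standard deep-water approximation, stated here for context rather than quoted from the report)

        J = \frac{\rho g^2}{64\pi}\, H_{m0}^2\, T_e,

    where \rho is sea-water density, g is gravitational acceleration, H_{m0} is the significant wave height and T_e is the energy period, giving J in watts per metre of wave crest.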

  17. 3-D CFD modeling and experimental testing of thermal behavior of a Li-Ion battery

    International Nuclear Information System (INIS)

    Gümüşsu, Emre; Ekici, Özgür; Köksal, Murat

    2017-01-01

    Highlights: • A thermally fully predictive 3-D CFD model is developed for Li-Ion batteries. • Complete flow field around the battery and conduction inside the battery are solved. • Macro-scale thermophysical properties and the entropic term are investigated. • Discharge rate and usage history of the battery are systematically investigated. • Reliability of the model was tested through experimental measurements. - Abstract: In this study, a 3-D computational fluid dynamics model was developed for investigating the thermal behavior of lithium ion batteries under natural convection. The model solves the complete flow field around the battery as well as conduction inside the battery using the well-known heat generation model of Bernardi et al. (1985). The model is thermally fully predictive, so it requires only electrical performance parameters of the battery to calculate its temperature during discharging. Using the model, a detailed investigation of the effects of the variation of the macro-scale thermophysical properties and the entropic term of the heat generation model was carried out. Results show that specific heat is a critical property that has a significant impact on the simulation results, whereas thermal conductivity is of relatively minor importance. Moreover, the experimental data can be successfully predicted without taking the entropic term into account in the calculation of the heat generation. The difference between the experimental and predicted battery surface temperature was less than 3 °C for all discharge rates and regardless of the usage history of the battery. The developed model has the potential to be used for the investigation of the thermal behavior of Li-Ion batteries in different packaging configurations under natural and forced convection.
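
    For reference, the Bernardi et al. (1985) heat-generation model mentioned above is commonly used in the simplified volumetric form (sign conventions vary between authors; this is a sketch, not a quotation from the paper)

        q = I\,(U_{\mathrm{ocv}} - V) - I\, T\, \frac{\mathrm{d}U_{\mathrm{ocv}}}{\mathrm{d}T},

    where I is the current, V the terminal voltage, U_{ocv} the open-circuit voltage and T the temperature; the first term is the irreversible polarization heat and the second is the entropic term that this study found could be neglected.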

  18. Supervised and Unsupervised Self-Testing for HIV in High- and Low-Risk Populations: A Systematic Review

    Science.gov (United States)

    Pant Pai, Nitika; Sharma, Jigyasa; Shivkumar, Sushmita; Pillay, Sabrina; Vadnais, Caroline; Joseph, Lawrence; Dheda, Keertan; Peeling, Rosanna W.

    2013-01-01

    Background Stigma, discrimination, lack of privacy, and long waiting times partly explain why six out of ten individuals living with HIV do not access facility-based testing. By circumventing these barriers, self-testing offers potential for more people to know their sero-status. Recent approval of an in-home HIV self-test in the US has sparked self-testing initiatives, yet data on acceptability, feasibility, and linkages to care are limited. We systematically reviewed evidence on supervised (self-testing and counselling aided by a health care professional) and unsupervised (performed by self-tester with access to phone/internet counselling) self-testing strategies. Methods and Findings Seven databases (Medline [via PubMed], Biosis, PsycINFO, Cinahl, African Medicus, LILACS, and EMBASE) and conference abstracts of six major HIV/sexually transmitted infections conferences were searched from 1st January 2000–30th October 2012. 1,221 citations were identified and 21 studies included for review. Seven studies evaluated an unsupervised strategy and 14 evaluated a supervised strategy. For both strategies, data on acceptability (range: 74%–96%), preference (range: 61%–91%), and partner self-testing (range: 80%–97%) were high. A high specificity (range: 99.8%–100%) was observed for both strategies, while a lower sensitivity was reported in the unsupervised (range: 92.9%–100%; one study) versus supervised (range: 97.4%–97.9%; three studies) strategy. Regarding feasibility of linkage to counselling and care, 96% (n = 102/106) of individuals testing positive for HIV stated they would seek post-test counselling (unsupervised strategy, one study). No extreme adverse events were noted. The majority of data (n = 11,019/12,402 individuals, 89%) were from high-income settings and 71% (n = 15/21) of studies were cross-sectional in design, thus limiting our analysis. Conclusions Both supervised and unsupervised testing strategies were highly acceptable

  19. Model-Driven Test Generation of Distributed Systems

    Science.gov (United States)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.
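
    The report derives tests from model-checking counterexamples produced by SAL; as a much simpler illustration of the underlying idea, the hedged Python sketch below (a toy state machine, not the TTEthernet model) derives one input sequence per transition via breadth-first search, achieving transition coverage:

        from collections import deque

        # Toy protocol model: (state, event) -> next state. Hypothetical,
        # not the TTEthernet model analysed in the report.
        TRANSITIONS = {
            ("idle", "sync"): "synced",
            ("synced", "send"): "busy",
            ("busy", "ack"): "idle",
            ("synced", "timeout"): "idle",
        }

        def test_sequences(start="idle"):
            """Return one input sequence per transition, i.e. full
            transition coverage, via breadth-first search."""
            seqs = []
            for (src, event), _dst in TRANSITIONS.items():
                frontier, seen = deque([(start, [])]), {start}
                while frontier:
                    state, path = frontier.popleft()
                    if state == src:                  # reached the source state
                        seqs.append(path + [event])   # exercise the transition
                        break
                    for (s, e), d in TRANSITIONS.items():
                        if s == state and d not in seen:
                            seen.add(d)
                            frontier.append((d, path + [e]))
            return seqs

        for seq in test_sequences():
            print(" -> ".join(seq))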

  20. Testing and Modeling of Machine Properties in Resistance Welding

    DEFF Research Database (Denmark)

    Wu, Pei

    The objective of this work has been to test and model the machine properties, including the mechanical properties and the electrical properties, in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze ... as real projection welding tests, is easy to realize in industry, since tests may be performed in situ. In part II, an approach of characterizing the electrical properties of AC resistance welding machines is presented, involving testing and mathematical modelling of the weld current, the firing angle ...

  1. Accuracy tests of the tessellated SLBM model

    International Nuclear Information System (INIS)

    Ramirez, A L; Myers, S C

    2007-01-01

    We have compared the Seismic Location Base Model (SLBM) tessellated model (version 2.0 Beta, posted July 3, 2007) with the GNEMRE Unified Model. The comparison is done layer by layer, both depth by depth and velocity by velocity. The SLBM earth model is defined on a tessellation that spans the globe at a constant resolution of about 1 degree (Ballard, 2007). For the tests, we used the earth model in file "unified( )iasp.grid". This model contains the top 8 layers of the Unified Model (UM) embedded in a global IASP91 grid. Our test queried the same set of nodes included in the UM model file. To query the model stored in memory, we used some of the functionality built into the SLBMInterface object. We used the method getInterpolatedPoint() to return desired values for each layer at user-specified points. The values returned include: depth to the top of each layer, layer velocity, layer thickness and (for the upper-mantle layer) velocity gradient. The SLBM earth model has an extra middle-crust layer whose values are used when Pg/Lg phases are being calculated. This extra layer was not accessed by our tests. Figures 1 to 8 compare the layer depths, P velocities and P gradients in the UM and SLBM models. The figures show results for the three sediment layers, three crustal layers and the upper mantle layer defined in the UM model. Each layer in the models (sediment1, sediment2, sediment3, upper crust, middle crust, lower crust and upper mantle) is shown on a separate figure. The upper mantle P velocity and gradient distributions are shown on Figures 7 and 8. The left and center images in the top row of each figure are the renderings of depth to the top of the specified layer for the UM and SLBM models. When a layer has zero thickness, its depth is the same as that of the layer above. The right image in the top row is the difference in layer depth between the UM and SLBM renderings. The left and center images in the bottom row of the figures are
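
    A hedged sketch of the layer-by-layer comparison described above, with NumPy arrays standing in for the values returned by the getInterpolatedPoint() queries (the layer names follow the record; the numbers are illustrative, not the UM data):

        import numpy as np

        layers = ["sediment1", "sediment2", "sediment3", "upper_crust",
                  "middle_crust", "lower_crust", "upper_mantle"]

        def depth_differences(um, slbm):
            """Per-layer UM-minus-SLBM depth differences: mean and worst case."""
            return {l: (float((um[l] - slbm[l]).mean()),
                        float(np.abs(um[l] - slbm[l]).max())) for l in layers}

        # Stand-in grids of layer-top depths (km) at the same lat/lon nodes.
        rng = np.random.default_rng(0)
        um = {l: rng.uniform(0.0, 50.0, (10, 10)) for l in layers}
        slbm = {l: um[l] + rng.normal(0.0, 0.1, (10, 10)) for l in layers}

        for layer, (mean, worst) in depth_differences(um, slbm).items():
            print(f"{layer:>13s}: mean diff {mean:+.3f} km, max |diff| {worst:.3f} km")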

  2. Model of ASTM Flammability Test in Microgravity: Iron Rods

    Science.gov (United States)

    Steinberg, Theodore A; Stoltzfus, Joel M.; Fries, Joseph (Technical Monitor)

    2000-01-01

    There are extensive qualitative results from burning metallic materials in a NASA/ASTM flammability test system in normal gravity. However, these data were shown to be inconclusive for applications involving oxygen-enriched atmospheres under microgravity conditions by conducting tests using the 2.2-second Lewis Research Center (LeRC) Drop Tower. Data from neither type of test has been reduced to fundamental kinetic and dynamic system parameters. This paper reports the initial model analysis for burning iron rods under microgravity conditions using data obtained at the LeRC tower and modeling the burning system after ignition. Under the conditions of the test, the burning mass regresses up the rod to be detached upon deceleration at the end of the drop. The model describes the burning system as a semi-batch, well-mixed reactor with product accumulation only. This model is consistent with the 2.0-second duration of the test. Transient temperature and pressure measurements are made on the chamber volume. The rod solid-liquid interface melting rate is obtained from film records. The model consists of a set of 17 non-linear, first-order differential equations which are solved using MATLAB. This analysis confirms that a first-order rate, in oxygen concentration, is consistent for the iron-oxygen kinetic reaction. An apparent activation energy of 246.8 kJ/mol is consistent for this model.
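
    The 17-equation MATLAB model itself is not given in the record; the Python sketch below illustrates only its kinetic core, a rate first order in oxygen with Arrhenius temperature dependence using the reported apparent activation energy (the pre-exponential factor and temperature are purely illustrative assumptions):

        import numpy as np
        from scipy.integrate import solve_ivp

        R = 8.314        # J/(mol K), gas constant
        EA = 246.8e3     # J/mol, apparent activation energy from the record
        A = 1.0e6        # 1/s, pre-exponential factor (illustrative assumption)
        T = 2200.0       # K, assumed reaction-zone temperature (illustrative)

        def rate(t, y):
            """First-order consumption of oxygen: dC/dt = -k(T) * C."""
            k = A * np.exp(-EA / (R * T))
            return [-k * y[0]]

        sol = solve_ivp(rate, (0.0, 2.0), [1.0], max_step=0.01)  # ~2 s drop test
        print(f"normalized O2 remaining after 2 s: {sol.y[0, -1]:.3f}")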

  3. Pitfalls of Systematic Reviews and Meta-Analyses in Imaging Research

    NARCIS (Netherlands)

    McInnes, Matthew D. F.; Bossuyt, Patrick M. M.

    2015-01-01

    Systematic reviews of imaging research represent a tool to better understand test accuracy or the efficacy of interventions. Like any type of research, appropriate methods must be applied to optimize quality. The purpose of this review is to outline common pitfalls in performing systematic reviews

  4. Syndemics of psychosocial problems and HIV risk: A systematic review of empirical tests of the disease interaction concept.

    Science.gov (United States)

    Tsai, Alexander C; Burns, Bridget F O

    2015-08-01

    In the theory of syndemics, diseases co-occur in particular temporal or geographical contexts due to harmful social conditions (disease concentration) and interact at the level of populations and individuals, with mutually enhancing deleterious consequences for health (disease interaction). This theory has widespread adherents in the field, but the extent to which there is empirical support for the concept of disease interaction remains unclear. In January 2015 we systematically searched 7 bibliographic databases and tracked citations to highly cited publications associated with the theory of syndemics. Of the 783 records, we ultimately included 34 published journal articles, 5 dissertations, and 1 conference abstract. Most studies were based on a cross-sectional design (32 [80%]), were conducted in the U.S. (32 [80%]), and focused on men who have sex with men (21 [53%]). The most frequently studied psychosocial problems were related to mental health (33 [83%]), substance abuse (36 [90%]), and violence (27 [68%]); while the most frequently studied outcome variables were HIV transmission risk behaviors (29 [73%]) or HIV infection (9 [23%]). To test the disease interaction concept, 11 (28%) studies used some variation of a product term, with less than half of these (5/11 [45%]) providing sufficient information to interpret interaction both on an additive and on a multiplicative scale. The most frequently used specification (31 [78%]) to test the disease interaction concept was the sum score corresponding to the total count of psychosocial problems. Although the count variable approach does not test hypotheses about interactions between psychosocial problems, these studies were much more likely than others (14/31 [45%] vs. 0/9 [0%]; χ2 = 6.25, P = 0.01) to incorporate language about "synergy" or "interaction" that was inconsistent with the statistical models used. Therefore, more evidence is needed to assess the extent to which diseases interact, either at the
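
    The review's multiplicative-versus-additive distinction can be made concrete in standard epidemiologic notation (a textbook sketch, not the review's own formulas). On the multiplicative scale, interaction between psychosocial exposures X_1 and X_2 is carried by a product term in a logistic model,

        \operatorname{logit} P(Y = 1) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2,

    with e^{\beta_3} \neq 1 indicating multiplicative interaction, while additive interaction is usually assessed through the relative excess risk due to interaction,

        \mathrm{RERI} = RR_{11} - RR_{10} - RR_{01} + 1,

    where RERI > 0 indicates a super-additive joint effect. Reporting both scales is what the review means by "sufficient information to interpret interaction both on an additive and on a multiplicative scale"; a count of psychosocial problems identifies neither.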

  5. Systematic Methodology for Reproducible Optimizing Batch Operation

    DEFF Research Database (Denmark)

    Bonné, Dennis; Jørgensen, Sten Bay

    2006-01-01

    This contribution presents a systematic methodology for rapid acquirement of discrete-time state space model representations of batch processes based on their historical operation data. These state space models are parsimoniously parameterized as a set of local, interdependent models. The present...

  6. Tests of the single-pion exchange model

    International Nuclear Information System (INIS)

    Treiman, S.B.; Yang, C.N.

    1983-01-01

    The single-pion exchange model (SPEM) of high-energy particle reactions provides an attractively simple picture of seemingly complex processes and has accordingly been much discussed in recent times. The purpose of this note is to call attention to the possibility of subjecting the model to certain tests precisely in the domain where the model stands the best chance of making sense

  7. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-17

    Motor speed can be either positive or negative depending upon the propelling or regenerative braking scenario. The simulation provides three ... the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking ... Presented at the Systems Engineering and Technology Symposium, Modeling & Simulation, Testing and Validation (MSTV) Mini-Symposium, August 9-11, Dearborn, Michigan.

  8. Patient acceptability and feasibility of HIV testing in emergency departments in the UK - a systematic review and meta-analysis.

    Science.gov (United States)

    Lungu, Nicola

    2017-12-01

    NICE 2016 HIV testing guidelines now include the recommendation to offer HIV testing in Emergency Departments, in areas of high prevalence, to everyone who is undergoing blood tests. 23% of England's local authorities are areas of high HIV prevalence (>2/1000) and are therefore eligible. So far very few Emergency Departments have implemented routine HIV testing. This systematic review assesses evidence for two implementation considerations: patient acceptability (how likely a patient will accept an HIV test when offered in an Emergency Department), and feasibility, which incorporates staff training and willingness, and department capacity (how likely Emergency Department staff will offer an HIV test to an eligible patient), both measured by surrogate quantitative markers. Three medical databases were systematically searched for reports of non-targeted HIV testing in UK Emergency Departments. A total of 1584 unique papers were found, 9 full-text articles were critically appraised, and 7 studies were included in the meta-analysis, giving a combined patient sample of 101 975. The primary outcome, patient acceptability of HIV testing in Emergency Departments (number of patients accepting an HIV test, as a proportion of those offered), is 54.1% (CI 40.1, 68.2). Feasibility (number of tests offered, as a proportion of eligible patients) is 36.2% (CI 9.8, 62.4). For an Emergency Department considering introducing routine HIV testing, this review suggests an opt-out publicity-led strategy. Utilising oral fluid and blood tests would lead to the greatest proportion of eligible patients accepting an HIV test. For individual staff who are consenting patients for HIV testing, it may be encouraging to know that there is a >50% chance the patient will accept an offer of testing. Table 1 summarises the data extracted from the final 7 studies, with calculated acceptability and feasibility where appropriate, and GRADE scores; studies are listed in chronological order.
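
    A hedged sketch of the kind of random-effects pooling behind such estimates, using the DerSimonian-Laird method on raw proportions (the counts are invented, not the review's data; real analyses usually transform proportions first, e.g. via logit):

        import numpy as np

        def pool_proportions(events, totals):
            """DerSimonian-Laird random-effects pooled proportion with 95% CI."""
            events, totals = np.asarray(events, float), np.asarray(totals, float)
            p = events / totals
            v = p * (1.0 - p) / totals          # within-study variances
            w = 1.0 / v                         # fixed-effect weights
            p_fe = np.sum(w * p) / np.sum(w)
            q = np.sum(w * (p - p_fe) ** 2)     # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(p) - 1)) / c)   # between-study variance
            w_re = 1.0 / (v + tau2)             # random-effects weights
            pooled = np.sum(w_re * p) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

        est, (lo, hi) = pool_proportions([120, 450, 60], [200, 900, 150])
        print(f"pooled acceptance: {est:.1%} (95% CI {lo:.1%} to {hi:.1%})")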

  9. DKIST enclosure modeling and verification during factory assembly and testing

    Science.gov (United States)

    Larrakoetxea, Ibon; McBride, William; Marshall, Heather K.; Murga, Gaizka

    2014-08-01

    The Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique as, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of Final Design Review in January 2012, was supported by mathematical models and other analyses which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During the Enclosure Factory Assembly and Testing the compliance with the requirements has been verified using the real hardware and the models created during the design phase have been revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.

  10. The value of predicting restriction of fetal growth and compromise of its wellbeing: Systematic quantitative overviews (meta-analysis) of test accuracy literature

    Directory of Open Access Journals (Sweden)

    Robson Stephen C

    2007-03-01

    Background Restriction of fetal growth and compromise of fetal wellbeing remain significant causes of perinatal death and childhood disability. At present, there is a lack of scientific consensus about the best strategies for predicting these conditions before birth. Therefore, there is uncertainty about the best management of pregnant women who might have a growth-restricted baby. This is likely to be due to a dearth of clear collated information from individual research studies drawn from different sources on this subject. Methods/Design A series of systematic reviews and meta-analyses will be undertaken to determine, among pregnant women, the accuracy of various tests to predict and/or diagnose fetal growth restriction and compromise of fetal wellbeing. We will search Medline, Embase, Cochrane Library, MEDION, citation lists of review articles and eligible primary articles, and will contact experts in the field. Independent reviewers will select studies, extract data and assess study quality according to established criteria. Language restrictions will not be applied. Data synthesis will involve meta-analysis (where appropriate), exploration of heterogeneity and publication bias. Discussion The project will collate and synthesise the available evidence regarding the value of the tests for predicting restriction of fetal growth and compromise of fetal wellbeing. The systematic overviews will assess the quality of the available evidence, estimate the magnitude of potential benefits, identify those tests with good predictive value and help formulate practice recommendations.

  11. Markov modeling and discrete event simulation in health care: a systematic comparison.

    Science.gov (United States)

    Standfield, Lachlan; Comans, Tracy; Scuffham, Paul

    2014-04-01

    The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
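
    For readers less familiar with the MM side of the comparison, a minimal cohort Markov model looks like the hedged Python sketch below (all transition probabilities, costs and utilities are invented for illustration):

        import numpy as np

        # Three-state cohort model: well -> sick -> dead (annual cycles).
        P = np.array([[0.90, 0.08, 0.02],
                      [0.00, 0.85, 0.15],
                      [0.00, 0.00, 1.00]])
        cost = np.array([100.0, 2000.0, 0.0])   # cost per state per cycle
        qaly = np.array([0.95, 0.60, 0.0])      # utility per state per cycle

        cohort = np.array([1.0, 0.0, 0.0])      # the whole cohort starts well
        total_cost = total_qaly = 0.0
        for year in range(1, 41):               # 40-year horizon
            cohort = cohort @ P                 # advance the cohort one cycle
            disc = 1.035 ** -year               # 3.5% annual discount rate
            total_cost += disc * cohort @ cost
            total_qaly += disc * cohort @ qaly
        print(f"discounted cost {total_cost:.0f}, discounted QALYs {total_qaly:.2f}")

    Because the cohort vector carries no individual histories, any dependence of future risk on a patient's past must be encoded as extra states, which is precisely where DES becomes attractive.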

  12. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses and more refined models have to be based on experimental testing...

  13. Spatial abilities and anatomy knowledge assessment: A systematic review.

    Science.gov (United States)

    Langlois, Jean; Bellemare, Christian; Toulouse, Josée; Wells, George A

    2017-06-01

    Anatomy knowledge has been found to include both spatial and non-spatial components. However, no systematic evaluation of studies relating spatial abilities and anatomy knowledge has been undertaken. The objective of this study was to conduct a systematic review of the relationship between spatial abilities tests and anatomy knowledge assessment. A literature search was done up to March 20, 2014 in Scopus and in several databases on the OvidSP and EBSCOhost platforms. Of the 556 citations obtained, 38 articles were identified and fully reviewed, yielding 21 eligible articles whose quality was formally assessed. Non-significant relationships were found between spatial abilities tests and anatomy knowledge assessment using essays and non-spatial multiple-choice questions. Significant relationships were observed between spatial abilities tests and anatomy knowledge assessment using practical examination, three-dimensional synthesis from two-dimensional views, drawing of views, and cross-sections. Relationships between spatial abilities tests and anatomy knowledge assessment using spatial multiple-choice questions were unclear. The results of this systematic review provide evidence for spatial and non-spatial methods of anatomy knowledge assessment. Anat Sci Educ 10: 235-241. © 2016 American Association of Anatomists.

  14. An innovative approach for testing bioinformatics programs using metamorphic testing

    Directory of Open Access Journals (Sweden)

    Liu Huai

    2009-01-01

    Background Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
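
    A hedged toy example of the technique (our own illustration, unrelated to GNLab or SeqMap): even without an oracle for the "true" GC content of a DNA sequence, metamorphic relations between outputs can still be checked:

        def gc_content(seq: str) -> float:
            """Fraction of G/C bases in a DNA sequence (program under test)."""
            return sum(base in "GC" for base in seq.upper()) / len(seq)

        def test_metamorphic_relations(seq="ATGCGGTACC"):
            # MR1: reversing the sequence must not change its GC content.
            assert gc_content(seq) == gc_content(seq[::-1])
            # MR2: complementing (A<->T, G<->C) must not change it either,
            # since the swap preserves the total G+C count.
            comp = seq.translate(str.maketrans("ATGC", "TACG"))
            assert gc_content(seq) == gc_content(comp)

        test_metamorphic_relations()
        print("metamorphic relations hold")

    Each relation constrains pairs of outputs, so many follow-up test cases can be generated mechanically from any source input, which is what makes MT attractive when individual outputs cannot be verified directly.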

  15. The microelectronics and photonics test bed (MPTB) space, ground test and modeling experiments

    International Nuclear Information System (INIS)

    Campbell, A.

    1999-01-01

    This paper is an overview of the MPTB (microelectronics and photonics test bed) experiment, a combination of a space experiment, ground test and modeling programs looking at the response of advanced electronic and photonic technologies to the natural radiation environment of space. (author)

  16. Matrix diffusion model. In situ tests using natural analogues

    International Nuclear Information System (INIS)

    Rasilainen, K.

    1997-11-01

    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories
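
    In its usual one-dimensional formulation (standard theory, not quoted from the thesis), matrix diffusion is Fickian diffusion of the pore-water concentration C_p into the rock perpendicular to the fracture:

        R_p\, \frac{\partial C_p}{\partial t} = D_p\, \frac{\partial^2 C_p}{\partial z^2},

    where D_p is the pore diffusivity, R_p the matrix retardation factor accounting for sorption, and z the distance into the matrix; this slow uptake is what retards and disperses solutes carried along the fracture.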

  17. Quality of systematic reviews in pediatric oncology--a systematic review

    DEFF Research Database (Denmark)

    Lundh, Andreas; Knijnenburg, Sebastiaan L; Jørgensen, Anders W

    2009-01-01

    BACKGROUND: To ensure evidence-based decision making in pediatric oncology, systematic reviews are necessary. The objective of our study was to evaluate the methodological quality of all currently existing systematic reviews in pediatric oncology. METHODS: We identified eligible systematic reviews through a systematic search of the literature. Data on clinical and methodological characteristics of the included systematic reviews were extracted. The methodological quality of the included systematic reviews was assessed using the overview quality assessment questionnaire, a validated 10-item quality assessment tool. We compared the methodological quality of systematic reviews published in regular journals with that of Cochrane systematic reviews. RESULTS: We included 117 systematic reviews, 99 systematic reviews published in regular journals and 18 Cochrane systematic reviews. The average methodological...

  18. A systematic review of breast cancer incidence risk prediction models with meta-analysis of their performance.

    Science.gov (United States)

    Meads, Catherine; Ahmed, Ikhlaaq; Riley, Richard D

    2012-04-01

    A risk prediction model is a statistical tool for estimating the probability that a currently healthy individual with specific risk factors will develop a condition in the future such as breast cancer. Reliably accurate prediction models can inform future disease burdens, health policies and individual decisions. Breast cancer prediction models containing modifiable risk factors, such as alcohol consumption, BMI or weight, condom use, exogenous hormone use and physical activity, are of particular interest to women who might be considering how to reduce their risk of breast cancer and clinicians developing health policies to reduce population incidence rates. We performed a systematic review to identify and evaluate the performance of prediction models for breast cancer that contain modifiable factors. A protocol was developed and a sensitive search in databases including MEDLINE and EMBASE was conducted in June 2010. Extensive use was made of reference lists. Included were any articles proposing or validating a breast cancer prediction model in a general female population, with no language restrictions. Duplicate data extraction and quality assessment were conducted. Results were summarised qualitatively, and where possible meta-analysis of model performance statistics was undertaken. The systematic review found 17 breast cancer models, each containing a different but often overlapping set of modifiable and other risk factors, combined with an estimated baseline risk that was also often different. Quality of reporting was generally poor, with characteristics of included participants and fitted model results often missing. Only four models received independent validation in external data, most notably the 'Gail 2' model with 12 validations. None of the models demonstrated consistently outstanding ability to accurately discriminate between those who did and those who did not develop breast cancer. For example, random-effects meta-analyses of the performance of the

  19. Distress in unaffected individuals who decline, delay or remain ineligible for genetic testing for hereditary diseases: a systematic review.

    Science.gov (United States)

    Heiniger, Louise; Butow, Phyllis N; Price, Melanie A; Charles, Margaret

    2013-09-01

    Reviews on the psychosocial aspects of genetic testing for hereditary diseases typically focus on outcomes for carriers and non-carriers of genetic mutations. However, the majority of unaffected individuals from high-risk families do not undergo predictive testing. The aim of this review was to examine studies on psychosocial distress in unaffected individuals who delay, decline or remain ineligible for predictive genetic testing. Systematic searches of Medline, CINAHL, PsycINFO, PubMed and handsearching of related articles published between 1990 and 2012 identified 23 articles reporting 17 different studies that were reviewed and subjected to quality assessment. Findings suggest that definitions of delaying and declining are not always straightforward, and few studies have investigated psychological distress among individuals who remain ineligible for testing. Findings related to distress in delayers and decliners have been mixed, but there is evidence to suggest that cancer-related distress is lower in those who decline genetic counselling and testing, compared with testers, and that those who remain ineligible for testing experience more anxiety than tested individuals. Psychological, personality and family history vulnerability factors were identified for decliners and individuals who are ineligible for testing. The small number of studies and methodological limitations preclude definitive conclusions. Nevertheless, subgroups of those who remain untested appear to be at increased risk for psychological morbidity. As the majority of unaffected individuals do not undergo genetic testing, further research is needed to better understand the psychological impact of being denied the option of testing, declining and delaying testing. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Models and Theories of Health Education and Health Promotion in Physical Activity Interventions for Women: a Systematic Review

    Directory of Open Access Journals (Sweden)

    Seyed Mohammad Mehdi Hazavehei

    2014-09-01

    Introduction: The present study, as a systematic review, investigated and analyzed interventions based on models and theories of health education and promotion in the field of physical activity in women. Materials and Methods: Three electronic databases, Springer, BioMed Central and Science Direct, were searched systematically. Only quantitative, interventional studies published in English that used at least one of the models and theories of health education and health promotion were selected. Finally, 13 studies that met the inclusion criteria, published from 2000 to 2013, were reviewed. Results: Of the 13 studies reviewed, 10 measured levels of physical activity before and after the intervention; nine of these interventions increased physical activity in the intervention group compared to the control group. Studies were conducted in different health promotion settings, including health care centers, community settings and workplaces. The most widely used model was the Transtheoretical Model, applied in eight of the investigations. Conclusion: To increase the efficacy of interventions, more attention should be paid to the type of physical activity targeted and to the duration of interventions. In interventions based on the Transtheoretical Model, changes in physical activity habits should be measured in both experimental and control groups to provide a complementary scale for assessing efficacy. According to the results, no study had focused on changes in institutional policies, general health, or environmental changes related to physical activity.