WorldWideScience

Sample records for model testing systematic

  1. Testing flow diversion in animal models: a systematic review.

    Science.gov (United States)

    Fahed, Robert; Raymond, Jean; Ducroux, Célina; Gentric, Jean-Christophe; Salazkin, Igor; Ziegler, Daniela; Gevry, Guylaine; Darsaut, Tim E

    2016-04-01

    Flow diversion (FD) is increasingly used to treat intracranial aneurysms. We sought to systematically review published studies to assess the quality of reporting and summarize the results of FD in various animal models. Databases were searched to retrieve all animal studies on FD from 2000 to 2015. Extracted data included species and aneurysm models, aneurysm and neck dimensions, type of flow diverter, occlusion rates, and complications. Articles were evaluated using a checklist derived from the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Forty-two articles reporting the results of FD in nine different aneurysm models were included. The rabbit elastase-induced aneurysm model was the most commonly used, with 3-month occlusion rates of 73.5% (95% CI [61.9-82.6%]). FD of surgical sidewall aneurysms, constructed in rabbits or canines, resulted in high occlusion rates (100% [65.5-100%]). FD resulted in modest occlusion rates (15.4% [8.9-25.1%]) when tested in six complex canine aneurysm models designed to reproduce more difficult clinical contexts (large necks, bifurcation, or fusiform aneurysms). Adverse events, including branch occlusion, were rarely reported. There were no hemorrhagic complications. Articles complied with 20.8 ± 3.9 of 41 ARRIVE items; only a small number used randomization (3/42 articles [7.1%]) or a control group (13/42 articles [30.9%]). Preclinical studies on FD have shown varied results. Occlusion of elastase-induced aneurysms was common after FD; the model is not challenging, but it is standardized across many laboratories. Failures of FD can be reproduced in less standardized but more challenging surgical canine constructions. The quality of reporting could be improved.

  2. Systematic vacuum study of the ITER model cryopump by test particle Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xueli; Haas, Horst; Day, Christian [Institute for Technical Physics, Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)

    2011-07-01

    The primary pumping systems on the ITER torus are based on eight tailor-made cryogenic pumps, because no standard commercial vacuum pump can meet the ITER working criteria. This kind of cryopump provides high pumping speed, especially for light gases, by cryosorption on activated charcoal at 4.5 K. In this paper we present systematic Monte Carlo simulation results for a reduced-scale model pump obtained with ProVac3D, a new Test Particle Monte Carlo simulation program developed by KIT. The simulation model includes the most important mechanical structures, such as the sixteen cryogenic panels working at 4.5 K, the 80 K radiation shield envelope with baffles, the pump housing, the inlet valve and the TIMO (Test facility for the ITER Model Pump) test facility. Three typical gas species, i.e., deuterium, protium and helium, were simulated and the pumping characteristics obtained. The results are in good agreement with the experimental data up to a gas throughput of 1000 sccm, which marks the limit of free molecular flow. This means that ProVac3D is a useful tool in the design of the prototype cryopump of ITER. In addition, the capture factors at different critical positions were calculated; they can be used as important input parameters for a follow-up Direct Simulation Monte Carlo (DSMC) simulation at higher gas throughput.
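The test-particle principle behind such simulations can be sketched with a toy model. This is an illustration of the method only; the cylindrical geometry, sticking coefficient and all code below are hypothetical and unrelated to the actual ProVac3D implementation. Particles enter a duct whose side wall acts as a cryopanel, fly in straight lines (free molecular regime), and on each wall hit are either captured (cryosorption) or re-emitted diffusely with a cosine-law distribution.

```python
import math
import random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def cosine_dir(n):
    """Sample a cosine-law (diffuse) emission direction about unit normal n."""
    u, phi = random.random(), 2.0 * math.pi * random.random()
    ct, st = math.sqrt(u), math.sqrt(1.0 - u)
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t1 = cross(a, n)
    m = math.sqrt(sum(c * c for c in t1))
    t1 = tuple(c / m for c in t1)
    t2 = cross(n, t1)
    return tuple(st*math.cos(phi)*t1[i] + st*math.sin(phi)*t2[i] + ct*n[i] for i in range(3))

def trace(R, L, stick):
    """Follow one particle entering a cylinder (radius R, length L) at z=0.
    Side wall: cryopanel that captures with probability `stick`, otherwise
    re-emits diffusely. Far end z=L: diffusely reflecting. Returns outcome."""
    r, ang = R * math.sqrt(random.random()), 2.0 * math.pi * random.random()
    x, y, z = r * math.cos(ang), r * math.sin(ang), 0.0
    d = cosine_dir((0.0, 0.0, 1.0))
    while True:
        hits = []
        A = d[0]*d[0] + d[1]*d[1]
        if A > 1e-12:  # straight-line flight distance to the cylindrical wall
            B = x*d[0] + y*d[1]
            C = x*x + y*y - R*R
            t = (-B + math.sqrt(B*B - A*C)) / A
            if t > 1e-9:
                hits.append(('wall', t))
        if d[2] > 1e-12:
            hits.append(('end', (L - z) / d[2]))
        if d[2] < -1e-12:
            hits.append(('open', -z / d[2]))
        if not hits:
            return 'back'  # numerical corner case
        kind, t = min(hits, key=lambda h: h[1])
        x, y, z = x + t*d[0], y + t*d[1], z + t*d[2]
        if kind == 'open':
            return 'back'          # escaped through the entrance
        if kind == 'wall':
            if random.random() < stick:
                return 'captured'  # cryosorbed on the panel
            d = cosine_dir((-x / R, -y / R, 0.0))
        else:
            d = cosine_dir((0.0, 0.0, -1.0))

def capture_factor(R, L, stick, n=20000):
    """Fraction of entering test particles that end up pumped."""
    return sum(trace(R, L, stick) == 'captured' for _ in range(n)) / n
```

Sweeping `stick` mimics different gas/panel combinations; real codes add the full pump geometry (baffles, shields, valve) but the bookkeeping per particle is the same.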

  3. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
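The authors' approach combines GAMLSS boosting with permutation tests; the sketch below is not their algorithm, only a minimal classical illustration of the permutation idea applied separately to location (systematic bias) and scale (random error), with simulated, hypothetical device readings.

```python
import numpy as np

def perm_test(x, y, stat, n_perm=5000, seed=0):
    """Two-sided permutation p-value for a two-sample statistic stat(x, y)."""
    rng = np.random.default_rng(seed)
    obs = stat(x, y)
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        hits += abs(stat(perm[:len(x)], perm[len(x):])) >= abs(obs)
    return (hits + 1) / (n_perm + 1)   # add-one correction

# The location statistic targets systematic bias, the scale statistic random error.
bias_stat  = lambda x, y: x.mean() - y.mean()
scale_stat = lambda x, y: np.log(x.std(ddof=1) / y.std(ddof=1))
```

With readings from two simulated devices, one noisier than the other, the scale test flags the difference in random error even when the means are similar.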

  4. Evidence used in model-based economic evaluations for evaluating pharmacogenetic and pharmacogenomic tests: a systematic review protocol.

    Science.gov (United States)

    Peters, Jaime L; Cooper, Chris; Buchanan, James

    2015-11-11

    Decision models can be used to conduct economic evaluations of new pharmacogenetic and pharmacogenomic tests to ensure they offer value for money to healthcare systems. These models require a great deal of evidence, yet research suggests the evidence used is diverse and of uncertain quality. By conducting a systematic review, we aim to investigate the test-related evidence used to inform decision models developed for the economic evaluation of genetic tests. We will search electronic databases including MEDLINE, EMBASE and NHS EED to identify model-based economic evaluations of pharmacogenetic and pharmacogenomic tests. The search will not be limited by language or date. Title and abstract screening will be conducted independently by 2 reviewers, with screening of full texts and data extraction conducted by 1 reviewer and checked by another. Characteristics of the decision problem, the decision model and the test evidence used to inform the model will be extracted. Specifically, we will identify the reported sources of the test-related evidence used, and describe the study designs and how the evidence was identified. A checklist developed specifically for decision analytic models will be used to critically appraise the models described in these studies. Variations in the test evidence used in the decision models will be explored across the included studies, and we will identify gaps in the evidence in terms of both quantity and quality. The findings of this work will be disseminated via a peer-reviewed journal publication and at national and international conferences.

  5. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    Science.gov (United States)

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
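The dependence of predictive values on prevalence, which is the information cohort studies contribute and case-control designs cannot, follows directly from Bayes' rule. A minimal sketch with illustrative numbers (not figures from the melanoma review):

```python
def predictive_values(sens, spec, prev):
    """Population-level PPV and NPV from sensitivity, specificity and prevalence."""
    tp = sens * prev                   # true positives  (per unit population)
    fp = (1.0 - spec) * (1.0 - prev)   # false positives
    tn = spec * (1.0 - prev)           # true negatives
    fn = (1.0 - sens) * prev           # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# The same test performs very differently in low- vs high-prevalence settings:
for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(0.90, 0.90, prev)
    print(f"prevalence {prev:4.2f}: PPV {ppv:.3f}, NPV {npv:.3f}")
```

This is why sensitivity and specificity alone, as estimated from case-control studies, cannot yield the population-averaged predictive values that reflect clinical utility.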

  6. Systematic review, meta-analysis and economic modelling of molecular diagnostic tests for antibiotic resistance in tuberculosis.

    Science.gov (United States)

    Drobniewski, Francis; Cooke, Mary; Jordan, Jake; Casali, Nicola; Mugwagwa, Tendai; Broda, Agnieszka; Townsend, Catherine; Sivaramakrishnan, Anand; Green, Nathan; Jit, Mark; Lipman, Marc; Lord, Joanne; White, Peter J; Abubakar, Ibrahim

    2015-05-01

    Drug-resistant tuberculosis (TB), especially multidrug-resistant (MDR, resistance to rifampicin and isoniazid) disease, is associated with a worse patient outcome. Drug resistance diagnosed using microbiological culture takes days to weeks, as TB bacteria grow slowly. Rapid molecular tests for drug resistance detection (1 day) are commercially available and may promote faster initiation of appropriate treatment. To (1) conduct a systematic review of evidence regarding diagnostic accuracy of molecular genetic tests for drug resistance, (2) conduct a health-economic evaluation of screening and diagnostic strategies, including comparison of alternative models of service provision and assessment of the value of targeting rapid testing at high-risk subgroups, and (3) construct a transmission-dynamic mathematical model that translates the estimates of diagnostic accuracy into estimates of clinical impact. A standardised search strategy identified relevant studies from EMBASE, PubMed, MEDLINE, Bioscience Information Service (BIOSIS), System for Information on Grey Literature in Europe (SIGLE), Social Policy & Practice and Web of Science, published between 1 January 2000 and 15 August 2013. Additional 'grey' sources were included. Quality was assessed using quality assessment of diagnostic accuracy studies version 2 (QUADAS-2). For each diagnostic strategy and population subgroup, a care pathway was constructed to specify the medical treatments and health services that individuals would receive from presentation to the point where they either did or did not complete TB treatment successfully. A total cost was estimated from a health service perspective for each care pathway, and the health impact was estimated in terms of the mean discounted quality-adjusted life-years (QALYs) lost as a result of disease and treatment. Costs and QALYs were both discounted at 3.5% per year. An integrated transmission-dynamic and economic model was used to evaluate the cost-effectiveness of

  7. Systematic review, meta-analysis and economic modelling of molecular diagnostic tests for antibiotic resistance in tuberculosis.

    Science.gov (United States)

    Drobniewski, Francis; Cooke, Mary; Jordan, Jake; Casali, Nicola; Mugwagwa, Tendai; Broda, Agnieszka; Townsend, Catherine; Sivaramakrishnan, Anand; Green, Nathan; Jit, Mark; Lipman, Marc; Lord, Joanne; White, Peter J; Abubakar, Ibrahim

    2015-01-01

    BACKGROUND: Drug-resistant tuberculosis (TB), especially multidrug-resistant (MDR, resistance to rifampicin and isoniazid) disease, is associated with a worse patient outcome. Drug resistance diagnosed using microbiological culture takes days to weeks, as TB bacteria grow slowly. Rapid molecular tests for drug resistance detection (1 day) are commercially available and may promote faster initiation of appropriate treatment. OBJECTIVES: To (1) conduct a systematic review of evidence regarding diagnostic accuracy of molecular genetic tests for drug resistance, (2) conduct a health-economic evaluation of screening and diagnostic strategies, including comparison of alternative models of service provision and assessment of the value of targeting rapid testing at high-risk subgroups, and (3) construct a transmission-dynamic mathematical model that translates the estimates of diagnostic accuracy into estimates of clinical impact. REVIEW METHODS AND DATA SOURCES: A standardised search strategy identified relevant studies from EMBASE, PubMed, MEDLINE, Bioscience Information Service (BIOSIS), System for Information on Grey Literature in Europe (SIGLE), Social Policy & Practice and Web of Science, published between 1 January 2000 and 15 August 2013. Additional 'grey' sources were included. Quality was assessed using quality assessment of diagnostic accuracy studies version 2 (QUADAS-2). For each diagnostic strategy and population subgroup, a care pathway was constructed to specify the medical treatments and health services that individuals would receive from presentation to the point where they either did or did not complete TB treatment successfully. A total cost was estimated from a health service perspective for each care pathway, and the health impact was estimated in terms of the mean discounted quality-adjusted life-years (QALYs) lost as a result of disease and treatment. Costs and QALYs were both discounted at 3.5% per year. An integrated transmission-dynamic and

  8. Testing Scientific Software: A Systematic Literature Review

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software, such as oracle problems, when developing testing techniques. PMID:25125798

  9. Testing Scientific Software: A Systematic Literature Review.

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
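One widely used response to the oracle problem mentioned above, offered here as general background rather than as a technique taken from this review, is metamorphic testing: instead of checking a single output against a known correct answer, one checks relations that must hold between the outputs of related runs. A sketch with a numerical-integration kernel:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule -- a typical scientific kernel for which the
    exact output on an arbitrary integrand has no simple oracle."""
    h = (b - a) / n
    odd  = sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(f(a + 2*i*h) for i in range(1, n // 2))
    return (f(a) + f(b) + 4.0 * odd + 2.0 * even) * h / 3.0

f = lambda x: math.exp(-x * x)   # no elementary antiderivative

# Metamorphic relations: these must hold even though we cannot state the
# "correct" value of any individual run.
a, b, c = -1.0, 2.0, 0.5
assert math.isclose(simpson(f, a, b), simpson(f, a, c) + simpson(f, c, b), rel_tol=1e-8)  # interval additivity
assert math.isclose(simpson(f, a, b), -simpson(f, b, a), rel_tol=1e-9)                    # orientation reversal
assert math.isclose(simpson(lambda x: 3*f(x), a, b), 3*simpson(f, a, b), rel_tol=1e-12)   # linearity
```

A fault that breaks the weighting or the node placement typically violates at least one such relation, which is how faults can be detected without a reference output.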

  10. Systematic reviews of diagnostic test accuracy

    DEFF Research Database (Denmark)

    Leeflang, Mariska M G; Deeks, Jonathan J; Gatsonis, Constantine

    2008-01-01

    More and more systematic reviews of diagnostic test accuracy studies are being published, but they can be methodologically challenging. In this paper, the authors present some of the recent developments in the methodology for conducting systematic reviews of diagnostic test accuracy studies....... Restrictive electronic search filters are discouraged, as is the use of summary quality scores. Methods for meta-analysis should take into account the paired nature of the estimates and their dependence on threshold. Authors of these reviews are advised to use the hierarchical summary receiver...

  11. Systematic Digital Forensic Investigation Model

    OpenAIRE

    2011-01-01

    Law practitioners are in an uninterrupted battle with criminals in the application of digital/computer technologies, and require the development of a proper methodology to systematically search digital devices for significant evidence. Computer fraud and digital crimes are growing day by day and, unfortunately, less than two percent of the reported cases result in conviction. This paper explores the development of the digital forensics process model, compares digital forensic methodologies, and fina...

  12. Systematic review and modelling of the cost-effectiveness of cardiac magnetic resonance imaging compared with current existing testing pathways in ischaemic cardiomyopathy.

    Science.gov (United States)

    Campbell, Fiona; Thokala, Praveen; Uttley, Lesley C; Sutton, Anthea; Sutton, Alex J; Al-Mohammad, Abdallah; Thomas, Steven M

    2014-09-01

    Cardiac magnetic resonance imaging (CMR) is increasingly used to assess patients for myocardial viability prior to revascularisation. This is important to ensure that only those likely to benefit are subjected to the risk of revascularisation. To assess current evidence on the accuracy and cost-effectiveness of CMR to test patients prior to revascularisation in ischaemic cardiomyopathy; to develop an economic model to assess cost-effectiveness for different imaging strategies; and to identify areas for further primary research. Initial searches were conducted in March 2011 in the following databases: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to March 2011); Bioscience Information Service (BIOSIS) Previews via Web of Science (1969 to March 2011); EMBASE via Ovid (1974 to March 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to March 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to March 2011); Database of Abstracts of Reviews of Effects via The Cochrane Library (1994 to March 2011); NHS Economic Evaluation Database via The Cochrane Library (1968 to March 2011); Health Technology Assessment Database via The Cochrane Library (1989 to March 2011); and the Science Citation Index via Web of Science (1900 to March 2011). Additional searches were conducted from October to November 2011 in the following databases: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to November 2011); BIOSIS Previews via Web of Science (1969 to October 2011); EMBASE via Ovid (1974 to November 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to November 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to November 2011); Database of Abstracts of Reviews of Effects via The Cochrane

  13. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated especially to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  14. EMG Biofeedback Training Versus Systematic Desensitization for Test Anxiety Reduction

    Science.gov (United States)

    Romano, John L.; Cabianca, William A.

    1978-01-01

    Biofeedback training to reduce test anxiety among university students was investigated. Biofeedback training with systematic desensitization was compared to an automated systematic desensitization program not using EMG feedback. Biofeedback training is a useful technique for reducing test anxiety, but not necessarily more effective than systematic…

  15. Hypnosis Versus Systematic Desensitization in the Treatment of Test Anxiety

    Science.gov (United States)

    Melnick, Joseph; Russell, Ronald W.

    1976-01-01

    This study compared the effectiveness of systematic desensitization and the directed experience hypnotic technique in reducing self-reported test anxiety and increasing the academic performance of test-anxious undergraduates (N=36). The results are discussed as evidence for systematic desensitization as the more effective treatment in reducing…

  16. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus

    1984-01-01

    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  17. Whole bone testing in small animals: systematic characterization of the mechanical properties of different rodent bones available for rat fracture models.

    Science.gov (United States)

    Prodinger, Peter M; Foehr, Peter; Bürklein, Dominik; Bissinger, Oliver; Pilge, Hakan; Kreutzer, Kilian; von Eisenhart-Rothe, Rüdiger; Tischer, Thomas

    2018-02-14

    Rat fracture models are extensively used to characterize normal and pathological bone healing. Surprisingly, however, systematic research on inter- and intra-individual differences among the rat bones commonly examined is not available. We therefore studied the biomechanical behaviour and radiological characteristics of the humerus, the tibia and the femur of the male Wistar rat-all of which are potentially available in the experimental situation-to identify useful or detrimental biomechanical properties of each bone and to facilitate sample size calculations. 40 paired femora, tibiae and humeri of male Wistar rats (10-38 weeks, weight between 240 and 720 g) were analysed by DXA, pQCT scan and three-point bending. Bearing and loading bars of the biomechanical setup were adapted proportionally to each bone's length. Subgroups of light (skeletally immature) rats under 400 g (N = 11, 22 specimens of each bone) and heavy (mature) rats over 400 g (N = 9, 18 specimens of each bone) were formed and evaluated separately. Radiologically, neither significant differences between left and right bones nor a specific side preference was evident. Mean side differences of the BMC were relatively small (1-3% measured by DXA and 2.5-5% by pQCT). Overall, bone mineral content (BMC) assessed by DXA and pQCT (TOT CNT, CORT CNT) showed high correlations between the methods (BMC vs. TOT and CORT CNT: R² = 0.94-0.99). The load-displacement diagram showed a typical, reproducible curve for each type of bone. Tibiae were the longest bones (mean 41.8 ± 4.12 mm) followed by femora (mean 38.9 ± 4.12 mm) and humeri (mean 29.88 ± 3.33 mm). Failure loads and stiffness ranged from 175.4 ± 45.23 N / 315.6 ± 63.00 N/mm for the femora, 124.6 ± 41.13 N / 260.5 ± 59.97 N/mm for the humeri to 117.1 ± 33.94 N / 143.8 ± 36.99 N/mm for the tibiae. The smallest interindividual differences were observed in the failure loads of the femora (CV% 8.6) and tibiae (CV% 10.7) of heavy
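Under the usual idealization of the bone as a uniform beam, the reported quantities relate to standard three-point bending formulas: a centrally loaded beam of span L deflects by δ = FL³/48EI, so the measured stiffness k = F/δ gives the flexural rigidity EI = kL³/48, and the peak bending moment at failure is M = FL/4. A minimal sketch; note that the span value used below is a made-up example, since the paper scales the span to each bone's length without reporting a single figure:

```python
def three_point_bending(failure_load_N, stiffness_N_per_mm, span_mm):
    """Whole-bone flexural rigidity and failure moment from a three-point
    bending test, idealizing the bone as a uniform beam (a strong
    simplification for real, irregular bone geometry)."""
    EI = stiffness_N_per_mm * span_mm**3 / 48.0   # flexural rigidity [N*mm^2]
    M_fail = failure_load_N * span_mm / 4.0       # failure bending moment [N*mm]
    return EI, M_fail

# Mean femoral values from the abstract; the 23 mm span is hypothetical.
EI, M = three_point_bending(175.4, 315.6, 23.0)
```

Because EI scales with the cube of the span, span-to-length matching of the kind described above is essential when comparing bones of different sizes.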

  18. Antenatal HIV Testing in Sub-Saharan Africa During the Implementation of the Millennium Development Goals: A Systematic Review Using the PEN-3 Cultural Model.

    Science.gov (United States)

    Blackstone, Sarah R; Nwaozuru, Ucheoma; Iwelunmor, Juliet

    2018-01-01

    This study systematically explored the barriers and facilitators to routine antenatal HIV testing from the perspective of pregnant women in sub-Saharan Africa during the implementation period of the Millennium Development Goals. Articles published between 2000 and 2015 were selected after reviewing the title, abstract, and references. Twenty-seven studies published in 11 African countries were eligible for the current study and reviewed. The most common barriers identified include communication with male partners, patient convenience and accessibility, health system and health-care provider issues, fear of disclosure, HIV-related stigma, the burden of other responsibilities at home, and the perception of antenatal care as a "woman's job." Routine testing among pregnant women is crucial for the eradication of infant and child HIV infections. Further understanding the interplay of social and cultural factors, particularly the role of women in intimate relationships and the influence of men on antenatal care seeking behaviors, is necessary to continue the work of the Millennium Development Goals.

  19. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
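The consistency test described here can be sketched in a few lines (the rates and counts below are hypothetical, and the real RELM procedure involves more machinery): each space-magnitude bin receives a Poisson rate from the forecast, the joint log-likelihood of the observed catalog is computed, and its quantile among catalogs simulated from the forecast itself measures consistency.

```python
import math
import random

def log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts under forecast rates."""
    return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

def poisson_sample(rng, lam):
    # Knuth's method; adequate for the small per-bin rates of such forecasts
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def l_test(rates, counts, n_sim=1000, seed=0):
    """Quantile of the observed likelihood among catalogs simulated from the
    forecast itself; values near 0 flag a forecast inconsistent with the data."""
    rng = random.Random(seed)
    ll_obs = log_likelihood(rates, counts)
    below = sum(
        log_likelihood(rates, [poisson_sample(rng, lam) for lam in rates]) <= ll_obs
        for _ in range(n_sim))
    return below / n_sim
```

A forecast whose rates match what actually occurred lands mid-distribution, while assigning a zero rate to a bin containing an observed event drives the likelihood to minus infinity; this is one reason every possible event must receive positive probability, as the abstract requires.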

  20. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed on the large testing machine ZZ 8000 (maximum load 80 MN) at the SKODA WORKS. Results are described from testing the material resistance to non-ductile fracture, covering both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzle models of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During the cyclic tests, the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  1. A SYSTEMATIC STUDY OF SOFTWARE QUALITY MODELS

    OpenAIRE

    Dr. Vilas M. Thakare; Ashwin B. Tomar

    2011-01-01

    This paper aims to provide a basis for software quality model research through a systematic study of papers. It identifies nearly seventy software quality research papers from journals and classifies each paper as per research topic, estimation approach, study context and data set. The paper's results, combined with other knowledge, provide support for recommendations in future software quality model research: to increase the area of search for relevant studies, carefully select the papers within a set ...

  2. Testing the standard model

    International Nuclear Information System (INIS)

    Gordon, H.; Marciano, W.; Williams, H.H.

    1982-01-01

    We summarize here the results of the standard model group which has studied the ways in which different facilities may be used to test in detail what we now call the standard model, that is SU_c(3) x SU(2) x U(1). The topics considered are: W± and Z⁰ mass and width; sin²θ_W and neutral-current couplings; W⁺W⁻ and Wγ production; the Higgs; QCD; toponium and naked quarks; glueballs; mixing angles; and heavy ions.

  3. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was first of all to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  4. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron collider (LEP) near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  5. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose...... of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given....

  6. Systematization of Angra-1 operation attendance - Maintenance and periodic testings

    International Nuclear Information System (INIS)

    Furieri, E.B.; Carvalho Bruno, N. de; Salaverry, N.A.

    1988-01-01

    An analysis of maintenance, its types, and its functions in the safety of nuclear power plants is presented. Programs and present trends in reactor maintenance, as well as the maintenance program and periodic tests of Angra I, are analysed. The need for safety analysis and for a systematization of maintenance follow-up is discussed, along with periodic testing and the follow-up of international experience. (M.C.K.) [pt

  7. Radiation Belt Test Model

    Science.gov (United States)

    Freeman, John W.

    2000-10-01

    Rice University has developed a dynamic model of the Earth's radiation belts based on real-time data driven boundary conditions and full adiabaticity. The Radiation Belt Test Model (RBTM) successfully replicates the major features of storm-time behavior of energetic electrons: sudden commencement induced main phase dropout and recovery phase enhancement. It is the only known model to accomplish the latter. The RBTM shows the extent to which new energetic electrons introduced to the magnetosphere near the geostationary orbit drift inward due to relaxation of the magnetic field. It also shows the effects of substorm related rapid motion of magnetotail field lines for which the 3rd adiabatic invariant is violated. The radial extent of this violation is seen to be sharply delineated to a region outside of 5Re, although this distance is determined by the Hilmer-Voigt magnetic field model used by the RBTM. The RBTM appears to provide an excellent platform on which to build parameterized refinements to compensate for unknown acceleration processes inside 5Re where adiabaticity is seen to hold. Moreover, built within the framework of the MSFM, it offers the prospect of an operational forecast model for MeV electrons.

  8. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  9. Thermal sensation models: a systematic comparison.

    Science.gov (United States)

    Koelblen, B; Psikuta, A; Bogdan, A; Annaheim, S; Rossi, R M

    2017-05-01

    Thermal sensation models, capable of predicting humans' perception of their thermal surroundings, are commonly used to assess given indoor conditions. These models differ in many aspects, such as the number and type of input conditions, the range of conditions in which the models can be applied, and the complexity of equations. Moreover, the models are associated with various thermal sensation scales. In this study, a systematic comparison of seven existing thermal sensation models has been performed with regard to exposures including various air temperatures, clothing thermal insulation, and metabolic rate values after a careful investigation of the models' range of applicability. Thermo-physiological data needed as input for some of the models were obtained from a mathematical model for human physiological responses. The comparison showed differences between models' predictions for the analyzed conditions, mostly higher than typical intersubject differences in votes. Therefore, it can be concluded that the choice of model strongly influences the assessment of indoor spaces. The issue of comparing different thermal sensation scales has also been discussed. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Validation through model testing

    International Nuclear Information System (INIS)

    1995-01-01

    Geoval-94 is the third Geoval symposium arranged jointly by the OECD/NEA and the Swedish Nuclear Power Inspectorate. Earlier symposia in this series took place in 1987 and 1990. In many countries, the ongoing programmes to site and construct deep geological repositories for high and intermediate level nuclear waste are close to realization. A number of studies demonstrate the potential barrier function of the geosphere, but also that there are many unresolved issues. A key to these problems is the possibility of gaining knowledge by model testing with experiments and of increasing confidence in models used for prediction. The sessions cover conclusions from the INTRAVAL project, experiences from integrated experimental programs and underground research laboratories, as well as the integration between performance assessment and site characterisation. Technical issues ranging from waste and buffer interactions with the rock to radionuclide migration in different geological media are addressed. (J.S.)

  11. Personal utility in genomic testing: a systematic literature review.

    Science.gov (United States)

    Kohler, Jennefer N; Turbitt, Erin; Biesecker, Barbara B

    2017-06-01

    Researchers and clinicians refer to outcomes of genomic testing that extend beyond clinical utility as 'personal utility'. No systematic delineation of personal utility exists, making it challenging to appreciate its scope. Identifying empirical elements of personal utility reported in the literature offers an inventory that can be subsequently ranked for its relative value by those who have undergone genomic testing. A systematic review was conducted of the peer-reviewed literature reporting non-health-related outcomes of genomic testing from 1 January 2003 to 5 August 2016. Inclusion criteria specified English language, date of publication, and presence of empirical evidence. Identified outcomes were iteratively coded into unique domains. The search returned 551 abstracts from which 31 studies met the inclusion criteria. Study populations and type of genomic testing varied. Coding resulted in 15 distinct elements of personal utility, organized into three domains related to personal outcomes: affective, cognitive, and behavioral; and one domain related to social outcomes. The domains of personal utility may inform pre-test counseling by helping patients anticipate potential value of test results beyond clinical utility. Identified elements may also inform investigations into the prevalence and importance of personal utility to future test users.

  12. Systematic Unit Testing in a Read-eval-print Loop

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2010-01-01

    .  The process of collecting the expressions and their results imposes only little extra work on the programmer.  The use of the tool provides for creation of test repositories, and it is intended to catalyze a much more systematic approach to unit testing in a read-eval-print loop.  In the paper we also discuss...... how to use a test repository for other purposes than testing.  As a concrete contribution we show how to use test cases as examples in library interface documentation.  It is hypothesized---but not yet validated---that the tool will motivate the Lisp programmer to take the transition from casual...
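
The workflow sketched in the record above, collecting interactive expression/result pairs into a test repository, can be illustrated outside Lisp as well. The snippet below is a hypothetical Python analogue of that idea, not the paper's tool; the names `check`, `replay`, and `test_repository` are invented for this example.

```python
# Hypothetical sketch of REPL-based unit testing: each interactively
# evaluated expression is recorded together with its result, and the
# collected pairs can later be replayed as regression tests.
test_repository = []

def check(expression):
    """Evaluate an expression string as in a read-eval-print loop,
    record the (expression, result) pair, and return the result."""
    result = eval(expression)
    test_repository.append((expression, result))
    return result

def replay():
    """Re-evaluate every recorded expression; return the mismatches."""
    return [(expr, expected, eval(expr))
            for expr, expected in test_repository
            if eval(expr) != expected]

# Normal interactive exploration doubles as test-case capture:
check("sorted([3, 1, 2])")   # -> [1, 2, 3], recorded
check("sum(range(10))")      # -> 45, recorded
```

As the abstract notes, the recorded pairs can serve a second purpose: each (expression, result) entry is also a ready-made usage example for library interface documentation.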

  13. Composite Material Testing Data Reduction to Adjust for the Systematic 6-DOF Testing Machine Aberrations

    Science.gov (United States)

    Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson

    2012-01-01

    This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3, developed at the Naval...

  14. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression. This co......-batch bioreactor, where it is illustrated how an incorrectly modelled biomass growth rate can be pinpointed and an estimate provided of the functional relation needed to properly describe it....

  15. Systematic model building with flavor symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Plentinger, Florian

    2009-12-19

    The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, what form does their mass hierarchy take, why is the mixing in the quark and lepton sectors so different, and what is the structure of the Higgs sector? In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility is Grand Unified Theories such as SU(5) or SO(10). GUTs are vertical symmetries since they unify the SM particles into multiplets and usually predict new particles which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, horizontal symmetries, i.e., flavor symmetries, acting on the generation space of the SM particles, are also promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both theorists and experimentalists. These extensions of the SM can also be combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications for the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton flavor violating rare decays μ → eγ, τ → μγ, and τ → eγ, which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries. 
Moreover, our aim is to develop and pursue a systematic model building approach with flavor symmetries and

  16. Systematic model building with flavor symmetries

    International Nuclear Information System (INIS)

    Plentinger, Florian

    2009-01-01

    The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, what form does their mass hierarchy take, why is the mixing in the quark and lepton sectors so different, and what is the structure of the Higgs sector? In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility is Grand Unified Theories such as SU(5) or SO(10). GUTs are vertical symmetries since they unify the SM particles into multiplets and usually predict new particles which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, horizontal symmetries, i.e., flavor symmetries, acting on the generation space of the SM particles, are also promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both theorists and experimentalists. These extensions of the SM can also be combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications for the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton flavor violating rare decays μ → eγ, τ → μγ, and τ → eγ, which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries. Moreover, our aim is to develop and pursue a systematic model building approach with flavor symmetries and to search for phenomenological

  17. Systematic test on fast time resolution parallel plate avalanche counter

    International Nuclear Information System (INIS)

    Chen Yu; Li Guangwu; Gu Xianbao; Chen Yanchao; Zhang Gang; Zhang Wenhui; Yan Guohong

    2011-01-01

    Systematic tests of each detection unit of the parallel plate avalanche counter (PPAC) used in the fission multi-parameter measurement were performed with a ²⁴¹Am α source to obtain the time resolution and position resolution. The detectors work at 600 Pa of flowing isobutane with -600 V on the cathode. The time resolution was obtained by the TOF method and the position resolution by the delay-line method. The time resolution of the detection units is better than 400 ps, and the position resolution is 6 mm. The results show that the demands of the measurement are fully met. (authors)

  18. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future

  19. Systematic comparison of model polymer nanocomposite mechanics.

    Science.gov (United States)

    Xiao, Senbo; Peter, Christine; Kremer, Kurt

    2016-09-13

    Polymer nanocomposites render a range of outstanding materials from natural products such as silk, sea shells and bones, to synthesized nanoclay or carbon nanotube reinforced polymer systems. In contrast to the fast expanding interest in this type of material, the fundamental mechanisms of their mixing, phase behavior and reinforcement, especially for higher nanoparticle content as relevant for bio-inorganic composites, are still not fully understood. Although polymer nanocomposites exhibit diverse morphologies, qualitatively their mechanical properties are believed to be governed by a few parameters, namely their internal polymer network topology, nanoparticle volume fraction, particle surface properties and so on. Relating material mechanics to such elementary parameters is the purpose of this work. By taking a coarse-grained molecular modeling approach, we study a range of different polymer nanocomposites. We vary polymer-nanoparticle connectivity, surface geometry and volume fraction to systematically study rheological/mechanical properties. Our models cover different materials, and reproduce key characteristics of real nanocomposites, such as phase separation and mechanical reinforcement. The results shed light on establishing elementary structure-property-function relationships of polymer nanocomposites.

  20. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  1. Caffeine challenge test and panic disorder: a systematic literature review.

    Science.gov (United States)

    Vilarim, Marina Machado; Rocha Araujo, Daniele Marano; Nardi, Antonio Egidio

    2011-08-01

    This systematic review aimed to examine the results of studies that have investigated the induction of panic attacks and/or the anxiogenic effect of the caffeine challenge test in patients with panic disorder. The literature search was performed in PubMed, Biblioteca Virtual em Saúde and the ISI Web of Knowledge. The words used for the search were caffeine, caffeine challenge test, panic disorder, panic attacks and anxiety disorder. In total, we selected eight randomized, double-blind studies where caffeine was administered orally, and none of them controlled for confounding factors in the analysis. The percentage of loss during follow-up ranged between 14.3% and 73.1%. The eight studies all showed a positive association between caffeine and anxiogenic effects and/or panic disorder.

  2. Recommendations for reporting of systematic reviews and meta-analyses of diagnostic test accuracy: a systematic review

    NARCIS (Netherlands)

    McGrath, Trevor A.; Alabousi, Mostafa; Skidmore, Becky; Korevaar, Daniël A.; Bossuyt, Patrick M. M.; Moher, David; Thombs, Brett; McInnes, Matthew D. F.

    2017-01-01

    This study aims to perform a systematic review of existing guidance on quality of reporting and methodology for systematic reviews of diagnostic test accuracy (DTA) in order to compile a list of potential items that might be included in a reporting guideline for such reviews: Preferred Reporting Items

  3. Absorbing systematic effects to obtain a better background model in a search for new physics

    International Nuclear Information System (INIS)

    Caron, S; Horner, S; Sundermann, J E; Cowan, G; Gross, E

    2009-01-01

    This paper presents a novel approach to estimate the Standard Model backgrounds based on modifying Monte Carlo predictions within their systematic uncertainties. The improved background model is obtained by altering the original predictions with successively more complex correction functions in signal-free control selections. Statistical tests indicate when sufficient compatibility with data is reached. In this way, systematic effects are absorbed into the new background model. The same correction is then applied on the Monte Carlo prediction in the signal region. Comparing this method to other background estimation techniques shows improvements with respect to statistical and systematic uncertainties. The proposed method can also be applied in other fields beyond high energy physics.
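
The iterative procedure described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the polynomial correction family, the reduced chi-square stopping criterion, and the function name `absorb_systematics` are all invented for the example.

```python
import numpy as np

def absorb_systematics(x, data, mc, max_order=4, chi2_ndf_cut=1.5):
    """Fit successively more complex (higher-order polynomial) corrections
    to the MC background prediction in a signal-free control region,
    stopping once the corrected prediction is compatible with the data."""
    for order in range(max_order + 1):
        # Least-squares fit of the data/MC ratio with a polynomial
        coeffs = np.polyfit(x, data / mc, order)
        corrected = mc * np.polyval(coeffs, x)
        # Pearson chi-square per degree of freedom as a compatibility test
        chi2 = np.sum((data - corrected) ** 2 / corrected)
        ndf = len(x) - (order + 1)
        if ndf > 0 and chi2 / ndf < chi2_ndf_cut:
            return coeffs  # simplest correction compatible with data
    raise RuntimeError("no satisfactory correction found")
```

The returned correction would then be applied to the MC prediction in the signal region, as the abstract describes, so that the systematic mismodelling absorbed in the control selection is removed there as well.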

  4. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security...

  5. Comparison of Three Methods of Reducing Test Anxiety: Systematic Desensitization, Implosive Therapy, and Study Counseling

    Science.gov (United States)

    Cornish, Richard D.; Dilley, Josiah S.

    1973-01-01

    Systematic desensitization, implosive therapy, and study counseling have all been effective in reducing test anxiety. In addition, systematic desensitization has been compared to study counseling for effectiveness. This study compares all three methods and suggests that systematic desensitization is more effective than the others, and that implosive…

  6. Systematic identification of crystallization kinetics within a generic modelling framework

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli Bin; Meisler, Kresten Troelstrup; Gernaey, Krist

    2012-01-01

    A systematic development of constitutive models within a generic modelling framework has been developed for use in design, analysis and simulation of crystallization operations. The framework contains a tool for model identification connected with a generic crystallizer modelling tool-box, a tool...

  7. Modelling the pile load test

    Directory of Open Access Journals (Sweden)

    Prekop Ľubomír

    2017-01-01

    Full Text Available This paper deals with the modelling of the load test of the horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is pressed by the steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were determined from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from the experiment.

  8. Modelling the pile load test

    OpenAIRE

    Prekop Ľubomír

    2017-01-01

    This paper deals with the modelling of the load test of the horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is pressed by the steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were determined from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from exper...

  9. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example, there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing the dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.
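
One of the morphometric tests of punctual form mentioned in the abstract can be made concrete. The sketch below compares a model landscape to a real one via the elevation-relief ratio, a simple approximation to the hypsometric integral; the statistic chosen, the tolerance, and the function names are illustrative assumptions, not part of the proposed LEM Turing test.

```python
import numpy as np

def elevation_relief_ratio(dem):
    """(mean - min) / (max - min) of elevations: a simple punctual-form
    statistic approximating the hypsometric integral of a landscape."""
    z = np.asarray(dem, dtype=float)
    return (z.mean() - z.min()) / (z.max() - z.min())

def indistinguishable(model_dem, real_dem, tol=0.05):
    """A single-statistic 'interrogation': does the model landscape fall
    within tolerance of the real one on this morphometric measure?"""
    return abs(elevation_relief_ratio(model_dem)
               - elevation_relief_ratio(real_dem)) < tol
```

A full Turing-style evaluation would pose many such questions, spanning both punctual form and dynamic behaviour, and judge the LEM realistic only if no statistic tells the model and the real landscape apart.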

  10. Development and pilot test of a process to identify research needs from a systematic review.

    Science.gov (United States)

    Saldanha, Ian J; Wilson, Lisa M; Bennett, Wendy L; Nicholson, Wanda K; Robinson, Karen A

    2013-05-01

    To ensure appropriate allocation of research funds, we need methods for identifying high-priority research needs. We developed and pilot tested a process to identify needs for primary clinical research using a systematic review in gestational diabetes mellitus. We conducted eight steps: abstract research gaps from a systematic review using the Population, Intervention, Comparison, Outcomes, and Settings (PICOS) framework; solicit feedback from the review authors; translate gaps into researchable questions using the PICOS framework; solicit feedback from multidisciplinary stakeholders at our institution; establish consensus among multidisciplinary external stakeholders on the importance of the research questions using the Delphi method; prioritize outcomes; develop conceptual models to highlight research needs; and evaluate the process. We identified 19 research questions. During the Delphi method, external stakeholders established consensus for 16 of these 19 questions (15 with "high" and 1 with "medium" clinical benefit/importance). We pilot tested an eight-step process to identify clinically important research needs. Before wider application of this process, it should be tested using systematic reviews of other diseases. Further evaluation should include assessment of the usefulness of the research needs generated using this process for primary researchers and funders. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Conceptual Model for Systematic Construction Waste Management

    OpenAIRE

    Abd Rahim Mohd Hilmi Izwan; Kasim Narimah

    2017-01-01

    Development of the construction industry has generated construction waste which can contribute to environmental issues. Weak compliance with construction waste management, especially on construction sites, has also contributed to the large volumes of waste sent to landfills and illegal dumping areas. This indicates that construction projects need systematic construction waste management. To date, a comprehensive set of criteria for construction waste management, particularly for const...

  12. Maturity Models in Supply Chain Sustainability: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Elisabete Correia

    2017-01-01

    Full Text Available A systematic literature review of supply chain maturity models with sustainability concerns is presented. The objective is to give insights into methodological issues related to maturity models, namely the research objectives; the research methods used to develop, validate and test them; the scope; and the main characteristics associated with their design. The literature review was performed based on journal articles and conference papers from 2000 to 2015 using the SCOPUS, Emerald Insight, EBSCO and Web of Science databases. Most of the analysed papers have as their main objective the development of maturity models and their validation. The case study is the methodology most widely used by researchers to develop and validate maturity models. From the sustainability perspective, the scope of the analysed maturity models is the Triple Bottom Line (TBL) and the environmental dimension, focusing on a specific process (eco-design and new product development) and without a broad SC perspective. The dominant characteristics associated with the design of the maturity models are the maturity grids and a continuous representation. In addition, the results do not allow a trend toward a specific number of maturity levels to be identified. The comprehensive review, analysis, and synthesis of the maturity model literature represent an important contribution to the organization of this research area, making it possible to clarify some of the confusion that exists about concepts, approaches and components of maturity models in sustainability. Various aspects associated with the maturity models (i.e., research objectives, research methods, scope and characteristics of the design of models) are explored to contribute to the evolution and significance of this multidimensional area.

  13. Systematic review of model-based cervical screening evaluations.

    Science.gov (United States)

    Mendes, Diana; Bains, Iren; Vanni, Tazio; Jit, Mark

    2015-05-01

    Optimising population-based cervical screening policies is becoming more complex due to the expanding range of screening technologies available and the interplay with vaccine-induced changes in epidemiology. Mathematical models are increasingly being applied to assess the impact of cervical cancer screening strategies. We systematically reviewed the MEDLINE®, Embase, Web of Science®, EconLit, Health Economic Evaluation Database, and The Cochrane Library databases in order to identify the mathematical models of human papillomavirus (HPV) infection and cervical cancer progression used to assess the effectiveness and/or cost-effectiveness of cervical cancer screening strategies. Key model features and conclusions relevant to decision-making were extracted. We found 153 articles meeting our eligibility criteria published up to May 2013. Most studies (72/153) evaluated the introduction of a new screening technology, with particular focus on the comparison of HPV DNA testing and cytology (n = 58). Twenty-eight of forty of these analyses supported HPV DNA primary screening implementation. A few studies analysed more recent technologies - rapid HPV DNA testing (n = 3), HPV DNA self-sampling (n = 4), and genotyping (n = 1) - and were also supportive of their introduction. However, no study was found on emerging molecular markers and their potential utility in future screening programmes. Most evaluations (113/153) were based on models simulating aggregate groups of women at risk of cervical cancer over time without accounting for HPV infection transmission. Calibration to country-specific outcome data is becoming more common, but has not yet become standard practice. Models of cervical screening are increasingly used, and allow extrapolation of trial data to project the population-level health and economic impact of different screening policies. However, post-vaccination analyses have rarely incorporated transmission dynamics. Model calibration to country

  14. Methods Used in Economic Evaluations of Chronic Kidney Disease Testing — A Systematic Review

    Science.gov (United States)

    Sutton, Andrew J.; Breheny, Katie; Deeks, Jon; Khunti, Kamlesh; Sharpe, Claire; Ottridge, Ryan S.; Stevens, Paul E.; Cockwell, Paul; Kalra, Philp A.; Lamb, Edmund J.

    2015-01-01

    Background The prevalence of chronic kidney disease (CKD) is high in general populations around the world. Targeted testing and screening for CKD are often conducted to help identify individuals that may benefit from treatment to ameliorate or prevent their disease progression. Aims This systematic review examines the methods used in economic evaluations of testing and screening in CKD, with a particular focus on whether test accuracy has been considered, and how analysis has incorporated issues that may be important to the patient, such as the impact of testing on quality of life and the costs they incur. Methods Articles that described model-based economic evaluations of patient testing interventions focused on CKD were identified through the searching of electronic databases and the hand searching of the bibliographies of the included studies. Results The initial electronic searches identified 2,671 papers of which 21 were included in the final review. Eighteen studies focused on proteinuria, three evaluated glomerular filtration rate testing and one included both tests. The full impact of inaccurate test results was frequently not considered in economic evaluations in this setting as a societal perspective was rarely adopted. The impact of false positive tests on patients in terms of the costs incurred in re-attending for repeat testing, and the anxiety associated with a positive test was almost always overlooked. In one study where the impact of a false positive test on patient quality of life was examined in sensitivity analysis, it had a significant impact on the conclusions drawn from the model. Conclusion Future economic evaluations of kidney function testing should examine testing and monitoring pathways from the perspective of patients, to ensure that issues that are important to patients, such as the possibility of inaccurate test results, are properly considered in the analysis. PMID:26465773

  15. Effectiveness of Structured Psychodrama and Systematic Desensitization in Reducing Test Anxiety.

    Science.gov (United States)

    Kipper, David A.; Giladi, Daniel

    1978-01-01

    Students with examination anxiety took part in a study of the effectiveness of two kinds of treatment, structured psychodrama and systematic desensitization, in reducing test anxiety. Results showed that subjects in both treatment groups significantly reduced their test-anxiety scores. Structured psychodrama is as effective as systematic desensitization in…

  16. Bayesian Network Models in Cyber Security: A Systematic Review

    OpenAIRE

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security, especially due to their capability to overcome data limitations. This is also evidenced by the growth of BN model development in cyber security. However, a comprehensive comparison and analysis of these models is missing. In this paper, we conduct a systematic review of the scientific literature and identify 17 standard BN models in cyber security. We analyse these models based on 9 different criteri...

  17. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.

  18. Systematic experimental based modeling of a rotary piezoelectric ultrasonic motor

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit is sugg...

  19. NET model coil test possibilities

    International Nuclear Information System (INIS)

    Erb, J.; Gruenhagen, A.; Herz, W.; Jentzsch, K.; Komarek, P.; Lotz, E.; Malang, S.; Maurer, W.; Noether, G.; Ulbricht, A.; Vogt, A.; Zahn, G.; Horvath, I.; Kwasnitza, K.; Marinucci, C.; Pasztor, G.; Sborchia, C.; Weymuth, P.; Peters, A.; Roeterdink, A.

    1987-11-01

    A single full size coil for NET/INTOR represents an investment of the order of 40 MUC (Million Unit Costs). Before such an amount of money, or even more for the 16 TF coils, is invested, as many risks as possible must be eliminated by a comprehensive development programme. In the course of such a programme, a coil technology verification test should finally prove the feasibility of NET/INTOR TF coils. This study report deals almost exclusively with such a verification test by model coil testing. These coils will be built from two Nb3Sn conductors based on two concepts already under development and investigation. Two possible coil arrangements are discussed: a cluster facility, where two model coils made from the two Nb3Sn TF conductors are used, with the already tested LCT coils producing a background field; and a solenoid arrangement, where in addition to the two TF model coils another model coil made from a PF conductor for the central PF coils of NET/INTOR is used instead of the LCT background coils. Technical advantages and disadvantages are worked out in order to compare and judge both facilities. Cost estimates and time schedules broaden the basis for a decision about the realisation of such a facility. (orig.) [de

  20. Hydrocarbon Fuel Thermal Performance Modeling based on Systematic Measurement and Comprehensive Chromatographic Analysis

    Science.gov (United States)

    2016-07-31

    Distribution unlimited. Hydrocarbon Fuel Thermal Performance Modeling based on Systematic Measurement and Comprehensive Chromatographic Analysis. Matthew... vital importance for hydrocarbon-fueled propulsion systems: fuel thermal performance as indicated by physical and chemical effects of cooling passage... analysis. The selection and acquisition of a set of chemically diverse fuels is pivotal for a successful outcome since test method validation and

  1. Collaborative testing of turbulence models

    Science.gov (United States)

    Bradshaw, P.

    1992-12-01

    This project, funded by AFOSR, ARO, NASA, and ONR, was run by the writer with Profs. Brian E. Launder, University of Manchester, England, and John L. Lumley, Cornell University. Statistical data on turbulent flows, from laboratory experiments and simulations, were circulated to modelers throughout the world. This is the first large-scale project of its kind to use simulation data. The modelers returned their predictions to Stanford, for distribution to all modelers and to additional participants ('experimenters'), over 100 in all. The object was to obtain a consensus on the capabilities of present-day turbulence models and identify which types most deserve future support. This was not completely achieved, mainly because not enough modelers could produce results for enough test cases within the duration of the project. However, a clear picture of the capabilities of various modeling groups has emerged, and the interaction has been helpful to the modelers. The results support the view that Reynolds-stress transport models are the most accurate.

  2. Systematic parameter inference in stochastic mesoscopic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
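    The sparse-recovery step described above can be illustrated with a toy example. This is a hedged sketch rather than the authors' implementation: the one-dimensional Legendre basis, the ISTA solver, and all numeric choices are placeholders, and for readability the toy is overdetermined rather than in the truly compressive (fewer samples than basis terms) regime the paper targets.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "response surface": a target property that truly depends on only a
    # few Legendre polynomial-chaos terms (sparse coefficients).
    order = 6                                 # number of gPC basis terms
    n_samples = 40                            # sampled parameter sets
    x = rng.uniform(-1.0, 1.0, n_samples)     # model parameter samples

    # Design matrix of Legendre polynomials at the sample points.
    Phi = np.polynomial.legendre.legvander(x, order - 1)

    true_c = np.zeros(order)
    true_c[[0, 2, 5]] = [1.0, 0.5, -0.8]      # dominant terms (hypothetical)
    y = Phi @ true_c                          # simulated target-property values

    def ista(Phi, y, lam=1e-3, n_iter=5000):
        """Iterative soft thresholding for min_c 0.5||Phi c - y||^2 + lam||c||_1."""
        L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
        c = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            z = c - Phi.T @ (Phi @ c - y) / L
            c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return c

    c_hat = ista(Phi, y)
    dominant = np.flatnonzero(np.abs(c_hat) > 0.05)
    print(dominant)            # indices of the recovered dominant gPC terms
    ```

    The L1 penalty drives the coefficients of irrelevant basis terms to zero, so the fitted expansion exposes which parameter combinations actually drive the target property.
    
    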

  3. Cognitive Modification and Systematic Desensitization with Test Anxious High School Students.

    Science.gov (United States)

    Leal, Lois L.; And Others

    1981-01-01

    Compares the relative effectiveness of cognitive modification and systematic desensitization with test anxious high school students (N=30). The systematic desensitization treatment appeared to be significantly more effective on the performance measure while cognitive modification was more effective on one of the self-report measures. (Author/JAC)

  4. HIV Testing and Counseling Among Female Sex Workers : A Systematic Literature Review

    NARCIS (Netherlands)

    Tokar, Anna; Broerse, Jacqueline E.W.; Blanchard, James; Roura, Maria

    2018-01-01

    HIV testing uptake continues to be low among Female Sex Workers (FSWs). We synthesize evidence on barriers and facilitators to HIV testing among FSWs as well as frequencies of testing, willingness to test, and return rates to collect results. We systematically searched the MEDLINE/PubMed, EMBASE,

  5. Background model systematics for the Fermi GeV excess

    Energy Technology Data Exchange (ETDEWEB)

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power-law with break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b̄b final states a dark matter mass of m_χ = 49 +6.4/−5.4 GeV.
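    The broken power-law spectrum quoted above can be written out explicitly. A minimal sketch in Python: only the break energy comes from the abstract; the spectral indices g1 and g2 and the normalization are hypothetical placeholders.

    ```python
    import numpy as np

    E_BREAK = 2.1  # GeV, break energy quoted in the abstract

    def broken_power_law(E, g1=1.4, g2=2.6, E_b=E_BREAK, N0=1.0):
        """dN/dE for a power law that is continuous at E_b.

        g1 and g2 (indices below/above the break) are hypothetical; the
        abstract only constrains E_b = 2.1 ± 0.2 GeV.
        """
        E = np.asarray(E, dtype=float)
        return np.where(E < E_b,
                        N0 * (E / E_b) ** (-g1),
                        N0 * (E / E_b) ** (-g2))

    # Evaluate over the energy range analysed in the paper (0.3-500 GeV).
    E = np.logspace(np.log10(0.3), np.log10(500.0), 200)
    flux = broken_power_law(E)
    ```

    Normalizing both branches to the same value at E_b keeps the spectrum continuous at the break while letting the slope change there.
    
    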

  6. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases.

  7. Causal judgment from contingency information: a systematic test of the pCI rule.

    Science.gov (United States)

    White, Peter A

    2004-04-01

    Contingency information is information about the occurrence or nonoccurrence of an effect when a possible cause is present or absent. Under the evidential evaluation model, instances of contingency information are transformed into evidence and causal judgment is based on the proportion of relevant instances evaluated as confirmatory for the candidate cause. In this article, two experiments are reported that were designed to test systematic manipulations of the proportion of confirming instances in relation to other variables: the proportion of instances on which the candidate cause is present, the proportion of instances in which the effect occurs when the cause is present, and the objective contingency. Results showed that both unweighted and weighted versions of the proportion-of-confirmatory-instances rule successfully predicted the main features of the results, with the weighted version proving more successful. Other models, including the power PC theory, failed to predict the results.
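    The unweighted proportion-of-confirmatory-instances rule tested above can be sketched from a standard 2x2 contingency table. A minimal illustration, assuming (per the evidential evaluation model) that cause-present/effect-present and cause-absent/effect-absent instances count as confirmatory; the cell counts are invented for the example:

    ```python
    def pci(a, b, c, d):
        """Unweighted proportion of confirmatory instances.

        a: cause present, effect present  (confirmatory)
        b: cause present, effect absent   (disconfirmatory)
        c: cause absent,  effect present  (disconfirmatory)
        d: cause absent,  effect absent   (confirmatory)
        """
        return (a + d) / (a + b + c + d)

    def delta_p(a, b, c, d):
        """Objective contingency: P(effect | cause) - P(effect | no cause)."""
        return a / (a + b) - c / (c + d)

    # Example: 16 of 20 instances are confirmatory.
    print(pci(10, 2, 2, 6))      # 0.8
    print(delta_p(10, 2, 2, 6))  # 10/12 - 2/8
    ```

    The weighted version of the rule would multiply each cell count by a cell-specific weight before taking the proportion.
    
    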

  8. Systematic Review of Health Economic Evaluations of Diagnostic Tests in Brazil: How accurate are the results?

    Science.gov (United States)

    Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho

    2017-08-01

    The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.

  9. Test model of WWER core

    International Nuclear Information System (INIS)

    Tikhomirov, A. V.; Gorokhov, A. K.

    2007-01-01

    The objective of this paper is the creation of a precision test model for WWER RP neutron-physics calculations. The model is considered a tool for the verification of deterministic computer codes, which makes it possible to reduce the conservatism of design calculations and enhance WWER RP competitiveness. Precision calculations were performed using the code MCNP5 /1/ (Monte Carlo method). The engineering computer package Sapfir_95&RC_VVER /2/, certified for design calculations of WWER RP neutron-physics characteristics, is used in a comparative analysis of the results. The object of simulation is the first fuel loading of the Volgodon NPP RP. Peculiarities of the transition from 2D to 3D geometry in the MCNP5 calculation are shown on the full-scale model. All core components, as well as the radial and face reflectors and the automatic regulation control rods of the control and protection system, are represented in detailed description according to the design. The first stage of application of the model is the assessment of the accuracy of the calculation of the core power. At the second stage, the control and protection system control rod worth was assessed. Full-scale RP representation in a MCNP5 calculation is time-consuming, which calls for parallelization of the computational problem on a multiprocessor computer (Authors)

  10. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems, where the Original UNIFAC model is used. Using the proposed method to estimate the interaction parameters from VLE data only, a better phase equilibria prediction for both VLE and SLE was obtained. The results were validated and compared with the original model performance...

  11. Hospitality and Tourism Online Review Research: A Systematic Analysis and Heuristic-Systematic Model

    Directory of Open Access Journals (Sweden)

    Sunyoung Hlee

    2018-04-01

    Full Text Available With the tremendous growth and potential of online consumer reviews, online reviews of hospitality and tourism are now playing a significant role in consumer attitudes and buying behavior. This study reviewed and analyzed hospitality and tourism related articles published in academic journals. A systematic approach was used to analyze 55 research articles between January 2008 and December 2017. This study presents a brief synthesis of research by investigating content-related characteristics of hospitality and tourism online reviews (HTORs) in different market segments. Two research questions were addressed. Building upon our literature analysis, we used the heuristic-systematic model (HSM) to summarize and classify the characteristics affecting consumer perception in previous HTOR studies. We believe that the framework helps researchers to identify the research topic in the extended HTOR literature and to point out possible directions for future studies.

  12. Systematic approach in protection and ergonomics testing personal protective equipment

    NARCIS (Netherlands)

    Hartog. E.A. den

    2009-01-01

    In the area of personal protection against chemical and biological (CB) agents there is a strong focus on testing the materials against the relevant threats. The testing programs in this area are elaborate and are aimed to guarantee that the material protects according to specifications. This

  13. Economic Evaluations of Pharmacogenetic and Pharmacogenomic Screening Tests: A Systematic Review. Second Update of the Literature.

    Directory of Open Access Journals (Sweden)

    Elizabeth J J Berm

    Full Text Available Due to the extended application of pharmacogenetic and pharmacogenomic screening (PGx) tests it is important to assess whether they provide good value for money. This review provides an update of the literature. A literature search was performed in PubMed and papers published between August 2010 and September 2014, investigating the cost-effectiveness of PGx screening tests, were included. Papers from 2000 until July 2010 were included via two previous systematic reviews. The studies' overall quality was assessed with the Quality of Health Economic Studies (QHES) instrument. We found 38 studies, which combined with the previous 42 studies resulted in a total of 80 included studies. An average QHES score of 76 was found. Since 2010, more studies were funded by pharmaceutical companies. Most recent studies performed cost-utility analysis, univariate and probabilistic sensitivity analyses, and discussed limitations of their economic evaluations. Most studies indicated favorable cost-effectiveness. The majority of evaluations did not provide information regarding the intrinsic value of the PGx test. There were considerable differences in the costs for PGx testing. Reporting of the direction and magnitude of bias on the cost-effectiveness estimates as well as motivation for the chosen economic model and perspective were frequently missing. Application of PGx tests was mostly found to be a cost-effective or cost-saving strategy. We found that only a minority of recent pharmacoeconomic evaluations assessed the intrinsic value of the PGx tests. There was an increase in the number of studies and in the reporting of quality-associated characteristics. To improve future evaluations, scenario analyses including a broad range of PGx test costs and equal costs of comparator drugs to assess the intrinsic value of the PGx tests are recommended. In addition, robust clinical evidence regarding the efficacy of PGx tests remains of utmost importance.

  14. The air forces on a systematic series of biplane and triplane cellule models

    Science.gov (United States)

    Munk, Max M

    1927-01-01

    The air forces on a systematic series of biplane and triplane cellule models are the subject of this report. The tests consist in the determination of the lift, drag, and moment of each individual airfoil in each cellule, mostly with the same wing section. The magnitude of the gap and of the stagger is systematically varied; not, however, the decalage, which is zero throughout the tests. Certain check tests with a second wing section make the tests more complete and the conclusions more convincing. The results give evidence that the present army and navy specifications for the relative lifts of biplanes are good. They furnish material for improving such specifications for the relative lifts of triplanes. A larger number of factors can now be prescribed to take care of different cases.

  15. Model test of boson mappings

    International Nuclear Information System (INIS)

    Navratil, P.; Dobes, J.

    1992-01-01

    Methods of boson mapping are tested in calculations for a simple model system of four protons and four neutrons in single-j distinguishable orbits. Two-body terms in the boson images of the fermion operators are considered. Effects of the seniority v=4 states are thus included. The treatment of unphysical states and the influence of boson space truncation are particularly studied. Both the Dyson boson mapping and the seniority boson mapping as dictated by the similarity transformed Dyson mapping do not seem to be simply amenable to truncation. This situation improves when the one-body form of the seniority image of the quadrupole operator is employed. Truncation of the boson space is addressed by using the effective operator theory with a notable improvement of results

  16. Business model framework applications in health care: A systematic review.

    Science.gov (United States)

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-11-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  17. Simulation models in population breast cancer screening: A systematic review.

    Science.gov (United States)

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    2015-08-01

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed which incorporated model type; input parameters; modeling approach, transparency of input data sources/assumptions, sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized control trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model), with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR (11-24%) due to screening as compared to the 10% (95% CI: −2 to 21%) MR estimated from meta-analyses of RCTs. Only recently have potential harms due to regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Acceptability of HIV self-testing: a systematic literature review.

    Science.gov (United States)

    Krause, Janne; Subklew-Sehume, Friederike; Kenyon, Chris; Colebunders, Robert

    2013-08-08

    The uptake of HIV testing and counselling services remains low in risk groups around the world. Fear of stigmatisation, discrimination and breach of confidentiality results in low service usage among risk groups. HIV self-testing (HST) is a confidential HIV testing option that enables people to find out their status in the privacy of their homes. We evaluated the acceptability of HST and the benefits and challenges linked to the introduction of HST. A literature review was conducted on the acceptability of HST in projects in which HST was offered to study participants. Besides acceptability rates of HST, accuracy rates of self-testing, referral rates of HIV-positive individuals into medical care, disclosure rates and rates of first-time testers were assessed. In addition, the utilisation rate of a telephone hotline for counselling issues and clients' attitudes towards HST were extracted. Eleven studies met the inclusion criteria (HST had been offered effectively to study participants and had been administered by participants themselves) and demonstrated universally high acceptability of HST among study populations. Studies included populations from resource-poor settings (Kenya and Malawi) and from high-income countries (USA, Spain and Singapore). The majority of study participants were able to perform HST accurately with no or little support from trained staff. Participants appreciated the confidentiality and privacy but felt that the provision of counselling services was inadequate. The review demonstrates that HST is an acceptable testing alternative for risk groups and can be performed accurately by the majority of self-testers. Clients especially value the privacy and confidentiality of HST. Linkage to counselling as well as to treatment and care services remain major challenges.

  19. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As a standard today, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by a small base length, such an image orientation does not reach the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording rate of only 4 Hz, which may not be sufficient. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with small base length; the small base length magnifies small systematic errors when projected to object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, preventing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS
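The tilt (leveling) correction described above can be sketched numerically: a plane is fitted by least squares to the height discrepancies at the GCPs and then subtracted from the DHM. A minimal, illustrative Python sketch; the function names and the plane-only correction model are assumptions for illustration, not the paper's implementation:

```python
def fit_plane(points):
    """Least-squares fit of dh = a + b*x + c*y to GCP height discrepancies.

    points: list of (x, y, dh) tuples, where dh = h_DHM - h_GCP.
    Returns the coefficients (a, b, c), solving the 3x3 normal
    equations with Cramer's rule.
    """
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sd = sum(p[2] for p in points)
    sxd = sum(p[0] * p[2] for p in points); syd = sum(p[1] * p[2] for p in points)

    # Normal matrix A and right-hand side for A @ (a, b, c) = rhs
    A = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    rhs = [sd, sxd, syd]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)

    def replace_col(col):
        # Cramer's rule: substitute rhs into column `col` of A
        return [[rhs[i] if j == col else A[i][j] for j in range(3)]
                for i in range(3)]

    return tuple(det3(replace_col(c)) / d for c in range(3))


def level_dhm(height, x, y, coeffs):
    """Remove the fitted tilt from a DHM height at position (x, y)."""
    a, b, c = coeffs
    return height - (a + b * x + c * y)
```

With well-distributed GCPs this removes the tilt exactly; with clustered or inaccurate GCPs the fitted plane itself is biased, which is why the abstract recommends support from reference height models such as SRTM.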

  20. Systematic model development for partial nitrification of landfill leachate in a SBR

    DEFF Research Database (Denmark)

    Ganigue, R.; Volcke, E.I.P.; Puig, S.

    2010-01-01

    Following a systematic procedure, the model was successfully constructed, calibrated and validated using data from short-term (one cycle) operation of the PN-SBR. The evaluation of the model revealed a good fit to the main physical-chemical measurements (ammonium, nitrite, nitrate and inorganic carbon, among others), confirmed by statistical tests. Good model fits were also obtained for pH, despite a slight bias in pH prediction, probably caused by the high salinity of the leachate. Future work will address the model-based evaluation of the interaction of different factors (aeration, stripping, pH, inhibitions, among others) and their impact on the process performance.

  1. In-vitro orthodontic bond strength testing : A systematic review and meta-analysis

    NARCIS (Netherlands)

    Finnema, K.J.; Ozcan, M.; Post, W.J.; Ren, Y.J.; Dijkstra, P.U.

    INTRODUCTION: The aims of this study were to systematically review the available literature regarding in-vitro orthodontic shear bond strength testing and to analyze the influence of test conditions on bond strength. METHODS: Our data sources were Embase and Medline. Relevant studies were selected

  2. Systematic Testing should not be a Topic in the Computer Science Curriculum!

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

    of high quality. We point out that we, as teachers, are partly to blame that many software products are of low quality. We describe a set of teaching guidelines that conveys our main pedagogical point to the students: that systematic testing is important, rewarding, and fun, and that testing should...

  3. Group Systematic Desensitization Versus Covert Positive Reinforcement in the Reduction of Test Anxiety

    Science.gov (United States)

    Kostka, Marion P.; Galassi, John P.

    1974-01-01

    The study compared modified versions of systematic desensitization and covert positive reinforcement to a no-treatment control condition in the reduction of test anxiety. On an anagrams performance test, the covert reinforcement and control groups were superior to the desensitization group. (Author)

  4. Treatment of Test Anxiety by Cue-Controlled Relaxation and Systematic Desensitization

    Science.gov (United States)

    Russell, Richard K.; And Others

    1976-01-01

    Test-anxious subjects (N=19) participated in an outcome study comparing systematic desensitization, cue-controlled relaxation, and no treatment. The treatment groups demonstrated significant improvement on the self-report measures of test and state anxiety but not on the behavioral indices. The potential advantages of this technique over…

  5. The Six-Minute Walk Test in Chronic Pediatric Conditions: A Systematic Review of Measurement Properties

    NARCIS (Netherlands)

    Bart Bartels; Janke de Groot; Caroline Terwee

    2013-01-01

    Background The Six-Minute Walk Test (6MWT) is increasingly being used as a functional outcome measure for chronic pediatric conditions. Knowledge about its measurement properties is needed to determine whether it is an appropriate test to use. Purpose The purpose of this study was to systematically

  6. Accuracy of clinical tests in the diagnosis of anterior cruciate ligament injury: A systematic review

    NARCIS (Netherlands)

    M.S. Swain (Michael S.); N. Henschke (Nicholas); S.J. Kamper (Steven); A.S. Downie (Aron S.); B.W. Koes (Bart); C. Maher (Chris)

    2014-01-01

    textabstractBackground: Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury.Methods: Study Design: Systematic

  7. Testing the Structure of Hydrological Models using Genetic Programming

    Science.gov (United States)

    Selle, B.; Muttil, N.

    2009-04-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.
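The structure search described above can be illustrated with a toy symbolic-regression sketch. For brevity, plain random search over expression trees stands in for the genetic operators (crossover, mutation) of real Genetic Programming; all names, parameters, and the toy data are illustrative assumptions, not the study's setup:

```python
import random

# Binary operators available to the evolved model structures
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}


def evaluate(tree, x):
    """Recursively evaluate an expression tree at input dict x."""
    if isinstance(tree, tuple):           # (operator, left, right)
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))
    if isinstance(tree, str):             # variable name
        return x[tree]
    return tree                           # numeric constant


def random_tree(variables, depth=3, rng=random):
    """Grow a random expression tree over the given input variables."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(variables) if rng.random() < 0.7 else rng.uniform(-1, 1)
    op = rng.choice(list(OPS))
    return (op, random_tree(variables, depth - 1, rng),
            random_tree(variables, depth - 1, rng))


def mse(tree, data):
    """Mean squared error of a candidate structure on (inputs, response) pairs."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)


def search(data, variables, iters=2000, seed=1):
    """Toy structure search: keep the best of many random candidate trees."""
    rng = random.Random(seed)
    best = random_tree(variables, rng=rng)
    best_err = mse(best, data)
    for _ in range(iters):
        cand = random_tree(variables, rng=rng)
        err = mse(cand, data)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

The appeal for model-structure testing is that the winning tree is an explicit, interpretable formula, so a recurrently evolved simple structure (as in the lysimeter study) directly points at the dominant process.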

  8. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
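The ambient temperature and pressure corrections evaluated in the study amount to normalising each measured gas volume to standard conditions with the ideal gas law, after removing the water vapour contribution. A minimal sketch; the function name, argument names and defaults are illustrative, not an interface prescribed by the paper:

```python
def to_stp(v_measured, temp_c, pressure_hpa, vapour_hpa=0.0,
           t0_k=273.15, p0_hpa=1013.25):
    """Convert a measured (wet) gas volume to dry volume at standard
    temperature and pressure via the ideal gas law.

    v_measured   : gas volume at ambient conditions (e.g. mL)
    temp_c       : ambient temperature in degrees Celsius
    pressure_hpa : ambient pressure in hPa
    vapour_hpa   : water vapour partial pressure to subtract (hPa)
    t0_k, p0_hpa : reference conditions (0 deg C, 1013.25 hPa)
    """
    t_k = temp_c + 273.15
    dry_pressure = pressure_hpa - vapour_hpa   # remove vapour contribution
    return v_measured * (dry_pressure / p0_hpa) * (t0_k / t_k)
```

This makes the paper's altitude finding tangible: at a high-altitude site with, say, 800 hPa ambient pressure, an uncorrected volumetric reading overstates the normalised methane volume by roughly the pressure ratio, so laboratories that skip the correction report systematically different methane potentials for the same substrate.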

  9. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    Energy Technology Data Exchange (ETDEWEB)

    Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com [Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden); Liu, Jing, E-mail: jing.liu@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden)

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.

  10. Modelling the transuranic contamination in soils by using a generic model and systematic sampling

    International Nuclear Information System (INIS)

    Breitenecker, Katharina; Brandl, Alexander; Bock, Helmut; Villa, Mario

    2008-01-01

    Full text: In the course of decommissioning the former ASTRA Research Reactor, the Seibersdorf site is to be surveyed for possible contamination by radioactive materials, including transuranium elements. To limit costs due to systematic sampling and time-consuming laboratory analyses, a mathematical model that describes the migration of transuranium elements and that includes the local topography of the area where deposition has occurred was established. The project basis is to find a mathematical function that determines the contamination by modelling the pathways of transuranium elements. The model approach chosen is cellular automata (CA). For this purpose, a hypothetical activity of transuranium elements is released on the ground in the centre of a simulated area. Under the assumption that migration of these elements only takes place by diffusion, transport and sorption, their equations are modelled in the CA model by a simple discretization of the existing problem. To include local topography, most of the simulated area consists of a green corridor, where migration proceeds quite slowly; streets, where the migrational behaviour is different, and migration velocities in ditches are also modelled. The migration of three plutonium isotopes (²³⁸Pu, ²³⁹⁺²⁴⁰Pu, ²⁴¹Pu), the migration of one americium isotope (²⁴¹Am), the radioactive decay of ²⁴¹Pu via ²⁴¹Am to ²³⁷Np and the radioactive decay of ²³⁸Pu to ²³⁴U were considered in this model. Due to the special modelling approach of CA, the physical necessity of conservation of the amount of substance is always fulfilled. The entire system was implemented in MATLAB. Systematic sampling on a featured test site, followed by detailed laboratory analyses, was done to compare the underlying CA model to real data. On this account a nuclide vector with ²⁴¹Am as the reference nuclide was established. As long as the initial parameters (e.g. meteorological data) are well known, the model describes the
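The conservation property mentioned above can be illustrated with a minimal cellular-automaton diffusion step in which each cell redistributes a fixed fraction of its content to its neighbours, so mass is conserved by construction. A toy Python sketch (the original model was implemented in MATLAB and included transport, sorption and topography; the parameters here are hypothetical):

```python
def diffuse(grid, d=0.1):
    """One explicit CA diffusion step on a 2D grid with no-flux borders.

    Each cell sends fraction d of its content to each of its existing
    von Neumann neighbours; the total amount of substance is conserved
    exactly, as required physically.
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neighbours = [(i + di, j + dj)
                          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= i + di < rows and 0 <= j + dj < cols]
            # Content that stays in the cell after outflow to neighbours
            new[i][j] += grid[i][j] * (1 - d * len(neighbours))
            for ni, nj in neighbours:
                new[ni][nj] += grid[i][j] * d
    return new


def decay(grid, decay_fraction):
    """Radioactive decay: remove a fixed fraction per time step
    (e.g. the Pu-241 activity feeding the Am-241 ingrowth)."""
    return [[c * (1 - decay_fraction) for c in row] for row in grid]
```

Iterating `diffuse` and `decay` per time step reproduces the two mechanisms of the record in their simplest form; site-dependent migration velocities (green corridor, streets, ditches) would correspond to a spatially varying `d`.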

  11. Systematic reviews of diagnostic tests in endocrinology: an audit of methods, reporting, and performance.

    Science.gov (United States)

    Spencer-Bonilla, Gabriela; Singh Ospina, Naykky; Rodriguez-Gutierrez, Rene; Brito, Juan P; Iñiguez-Ariza, Nicole; Tamhane, Shrikant; Erwin, Patricia J; Murad, M Hassan; Montori, Victor M

    2017-07-01

    Systematic reviews provide clinicians and policymakers estimates of diagnostic test accuracy and their usefulness in clinical practice. We identified all available systematic reviews of diagnosis in endocrinology, summarized the diagnostic accuracy of the tests included, and assessed the credibility and clinical usefulness of the methods and reporting. We searched Ovid MEDLINE, EMBASE, and Cochrane CENTRAL from inception to December 2015 for systematic reviews and meta-analyses reporting accuracy measures of diagnostic tests in endocrinology. Experienced reviewers independently screened for eligible studies and collected data. We summarized the results, methods, and reporting of the reviews. We performed subgroup analyses to categorize diagnostic tests as most useful based on their accuracy. We identified 84 systematic reviews; half of the tests included were classified as helpful when positive, one-fourth as helpful when negative. Most authors adequately reported how studies were identified and selected and how their trustworthiness (risk of bias) was judged. Only one in three reviews, however, reported an overall judgment about trustworthiness, and one in five reported using adequate meta-analytic methods. One in four reported contacting authors for further information, and about half included only patients with diagnostic uncertainty. Up to half of the diagnostic endocrine tests for which a likelihood ratio was calculated or provided are likely to be helpful in practice when positive, as are one-quarter when negative. Most diagnostic systematic reviews in endocrinology lack methodological rigor and protection against bias, and offer limited credibility. Substantial efforts, therefore, seem necessary to improve the quality of diagnostic systematic reviews in endocrinology.
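Classifying a test as "helpful when positive" or "helpful when negative" is conventionally done with likelihood ratios derived from sensitivity and specificity. A minimal sketch using the common rule-of-thumb cut-offs LR+ ≥ 10 and LR− ≤ 0.1; the review's exact criteria may differ, and the names here are illustrative:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test.

    LR+ = sensitivity / (1 - specificity): how much a positive result
    raises the odds of disease.
    LR- = (1 - sensitivity) / specificity: how much a negative result
    lowers them.
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg


def helpfulness(sensitivity, specificity, pos_cut=10.0, neg_cut=0.1):
    """Label a test 'helpful when positive/negative' by LR thresholds.

    The cut-offs are conventional rules of thumb, not necessarily the
    review's exact classification criteria.
    """
    lr_pos, lr_neg = likelihood_ratios(sensitivity, specificity)
    return {'helpful_when_positive': lr_pos >= pos_cut,
            'helpful_when_negative': lr_neg <= neg_cut}
```

For example, a test with 90% sensitivity and 95% specificity has LR+ = 18 (helpful when positive) but LR− ≈ 0.105 (just short of helpful when negative), illustrating why many tests fell into only one of the review's two categories.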

  12. A Model for Quantifying Sources of Variation in Test-day Milk Yield ...

    African Journals Online (AJOL)

    A cow's test-day milk yield is influenced by several systematic environmental effects, which have to be removed when estimating the genetic potential of an animal. The present study quantified the variation due to test date and month of test in test-day lactation yield records using full and reduced models. The data consisted ...

  13. Testing the structure of a hydrological model using Genetic Programming

    Science.gov (United States)

    Selle, Benny; Muttil, Nitin

    2011-01-01

    SummaryGenetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, watertable depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.

  14. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  15. 46 CFR 154.431 - Model test.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Model test. 154.431 Section 154.431 Shipping COAST GUARD... Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints...(c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank...

  16. 46 CFR 154.449 - Model test.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Model test. 154.449 Section 154.449 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR SELF... § 154.449 Model test. The following analyzed data of a model test of structural elements for independent...

  17. Systematic testing of flood adaptation options in urban areas through simulations

    Science.gov (United States)

    Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten

    2016-04-01

    While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate-dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision-making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures such as modifications of the sewer network, through local retention of rainwater and the modification of surface flow paths, to policy measures such as restrictions on urban development in flood prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase of rain intensities and moderate urban growth, but their performance varies strongly, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels of the drivers of flood risk at which the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures. Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into
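An adaptation tipping point, as described above, can be found by scanning increasing driver levels until the residual risk under a given measure exceeds the acceptable level. A toy sketch; the linear risk functions are purely hypothetical stand-ins, not outputs of the MIKE FLOOD setup:

```python
def tipping_point(risk_fn, levels, threshold):
    """Return the first driver level at which residual flood risk
    exceeds the acceptable threshold, or None if it is never exceeded.

    risk_fn   : residual risk (e.g. expected annual damage) at a given
                driver level, with the mitigation measure in place
    levels    : driver levels to scan, in increasing order
                (e.g. percentage increase of design rain intensity)
    threshold : highest acceptable residual risk
    """
    for level in levels:
        if risk_fn(level) > threshold:
            return level
    return None


# Hypothetical example: risk grows linearly with the rain-intensity
# increase x (in %), and local retention halves it.
baseline = lambda x: 1.0 + 0.05 * x        # no mitigation measures
with_retention = lambda x: 0.5 * baseline(x)
```

In this toy setting retention shifts the tipping point to a much larger rain-intensity increase; in the paper the same scan is run with full hydraulic simulations per driver level and scenario ensemble.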

  18. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.
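The mock-signal methodology described above can be illustrated in miniature: inject a signal containing a weak sub-dominant harmonic into Gaussian noise, then recover its frequency with a template that either includes or neglects that harmonic. This is a toy sketch only; the waveform, noise level, grid-search recovery, and the `waveform`/`recover` helpers are invented for illustration and bear no relation to the actual analysis pipeline.

```python
import numpy as np

# Toy mock-signal injection study (all numbers illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)
f_true = 10.0

def waveform(f, eps):
    # dominant mode plus a weak second harmonic; eps=0 neglects the harmonic,
    # mimicking a model that drops sub-dominant modes
    return np.sin(2*np.pi*f*t) + eps*np.sin(4*np.pi*f*t)

data = waveform(f_true, eps=0.05) + 0.1*rng.standard_normal(t.size)

freqs = np.linspace(9.5, 10.5, 2001)
def recover(eps):
    # maximum-likelihood frequency under Gaussian noise = least-squares fit
    resid = [np.sum((data - waveform(f, eps))**2) for f in freqs]
    return freqs[int(np.argmin(resid))]

f_full = recover(0.05)   # template matches the injected signal
f_trunc = recover(0.0)   # template neglects the harmonic
```

In this toy setup the harmonic is nearly orthogonal to the dominant mode, so the truncated template still recovers the frequency with negligible bias, loosely echoing the paper's finding that modeling approximations bias parameters only in particular corners of parameter space.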

  19. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension

  20. Modelling the transmission of healthcare associated infections: a systematic review

    Science.gov (United States)

    2013-01-01

    Background Dynamic transmission models are increasingly being used to improve our understanding of the epidemiology of healthcare-associated infections (HCAI). However, there has been no recent comprehensive review of this emerging field. This paper summarises how mathematical models have informed the field of HCAI and how methods have developed over time. Methods MEDLINE, EMBASE, Scopus, CINAHL plus and Global Health databases were systematically searched for dynamic mathematical models of HCAI transmission and/or the dynamics of antimicrobial resistance in healthcare settings. Results In total, 96 papers met the eligibility criteria. The main research themes considered were evaluation of infection control effectiveness (64%), variability in transmission routes (7%), the impact of movement patterns between healthcare institutes (5%), the development of antimicrobial resistance (3%), and strain competitiveness or co-colonisation with different strains (3%). Methicillin-resistant Staphylococcus aureus was the most commonly modelled HCAI (34%), followed by vancomycin-resistant enterococci (16%). Other common HCAIs, e.g. Clostridium difficile, were rarely investigated (3%). Very few models have been published on HCAI from low- or middle-income countries. The earliest HCAI models examined antimicrobial resistance in hospital settings using compartmental deterministic approaches. Stochastic models (which include the role of chance in the transmission process) are becoming increasingly common. Model calibration (inference of unknown parameters by fitting models to data) and sensitivity analysis are comparatively uncommon, occurring in 35% and 36% of studies respectively, but their application is increasing. Only 5% of models compared their predictions to external data. Conclusions Transmission models have been used to understand complex systems and to predict the impact of control policies. 
Methods have generally improved, with an increased use of stochastic models, and
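As a concrete illustration of the compartmental deterministic approach this review describes, here is a minimal single-ward colonisation model integrated with forward Euler. The ward size, rates, and the `simulate` helper are all hypothetical, not taken from any reviewed study.

```python
def simulate(beta, n_beds=20, mu=0.1, gamma=0.05, days=365, dt=0.1):
    """Forward-Euler integration of a deterministic SIS-type ward model:
    dC/dt = beta*C*(N-C)/N - (mu+gamma)*C, where C is the number of
    colonized patients, mu the discharge rate (colonized patients are
    replaced by susceptible admissions) and gamma the decolonisation rate.
    Returns the colonisation prevalence C/N at the end of the run."""
    c = 1.0  # start with one colonized patient
    for _ in range(int(days / dt)):
        transmission = beta * c * (n_beds - c) / n_beds
        c += dt * (transmission - (mu + gamma) * c)
    return c / n_beds

base = simulate(beta=0.4)      # baseline transmission
hygiene = simulate(beta=0.2)   # e.g. improved hand hygiene halving beta
```

With these illustrative rates the basic reproduction number is beta/(mu+gamma), so the equilibrium prevalence 1 - 1/R0 drops from about 0.62 to 0.25 when transmission is halved, the kind of infection-control comparison such models are used for.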

  1. Systematic integration of experimental data and models in systems biology.

    Science.gov (United States)

    Li, Peter; Dada, Joseph O; Jameson, Daniel; Spasic, Irena; Swainston, Neil; Carroll, Kathleen; Dunn, Warwick; Khan, Farid; Malys, Naglis; Messiha, Hanan L; Simeonidis, Evangelos; Weichart, Dieter; Winder, Catherine; Wishart, Jill; Broomhead, David S; Goble, Carole A; Gaskell, Simon J; Kell, Douglas B; Westerhoff, Hans V; Mendes, Pedro; Paton, Norman W

    2010-11-29

    The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components, their biochemical reactions, and corresponding parameters. Automating the assembly and use of systems biology models depends upon data integration processes involving the interoperation of data and analytical resources. Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.

  2. Clinical tests to diagnose lumbar spondylolysis and spondylolisthesis: A systematic review.

    Science.gov (United States)

    Alqarni, Abdullah M; Schneiders, Anthony G; Cook, Chad E; Hendrick, Paul A

    2015-08-01

    The aim of this paper was to systematically review the diagnostic ability of clinical tests to detect lumbar spondylolysis and spondylolisthesis. A systematic literature search of six databases, with no language restrictions, from 1950 to 2014 was concluded on February 1, 2014. Clinical tests were required to be compared against imaging reference standards and report, or allow computation of, common diagnostic values. The systematic search yielded a total of 5164 articles with 57 retained for full-text examination, from which 4 met the full inclusion criteria for the review. Study heterogeneity precluded a meta-analysis of included studies. Fifteen different clinical tests were evaluated for their ability to diagnose lumbar spondylolisthesis and one test for its ability to diagnose lumbar spondylolysis. The one-legged hyperextension test demonstrated low to moderate sensitivity (50%-73%) and low specificity (17%-32%) to diagnose lumbar spondylolysis, while the lumbar spinous process palpation test was the optimal diagnostic test for lumbar spondylolisthesis, returning high specificity (87%-100%) and moderate to high sensitivity (60%-88%) values. Lumbar spondylolysis and spondylolisthesis are identifiable causes of low back pain in athletes. There appears to be utility to lumbar spinous process palpation for the diagnosis of lumbar spondylolisthesis; however, the one-legged hyperextension test has virtually no value in diagnosing patients with spondylolysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
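The "common diagnostic values" such reviews extract follow directly from a 2x2 table of test result versus imaging reference standard. A minimal sketch; the counts and the `diagnostic_accuracy` helper are hypothetical, not data from the review:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard diagnostic values from a 2x2 table
    (rows: test positive/negative; columns: reference positive/negative)."""
    sens = tp / (tp + fn)          # proportion of diseased detected
    spec = tn / (tn + fp)          # proportion of healthy correctly cleared
    return {
        "sensitivity": sens,
        "specificity": spec,
        "lr_pos": sens / (1 - spec),   # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,   # negative likelihood ratio
    }

# hypothetical counts for illustration only
vals = diagnostic_accuracy(tp=44, fp=6, fn=6, tn=44)
```

A sensitivity and specificity of 0.88 each give a positive likelihood ratio of about 7.3, which is why tests with high values on both axes, like spinous process palpation here, are the ones rated clinically useful.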

  3. Agent-based modeling of noncommunicable diseases: a systematic review.

    Science.gov (United States)

    Nianogo, Roch A; Arah, Onyebuchi A

    2015-03-01

    We reviewed the use of agent-based modeling (ABM), a systems science method, in understanding noncommunicable diseases (NCDs) and their public health risk factors. We systematically reviewed studies in PubMed, ScienceDirect, and Web of Sciences published from January 2003 to July 2014. We retrieved 22 relevant articles; each had an observational or interventional design. Physical activity and diet were the most-studied outcomes. Often, single agent types were modeled, and the environment was usually irrelevant to the studied outcome. Predictive validation and sensitivity analyses were most used to validate models. Although increasingly used to study NCDs, ABM remains underutilized and, where used, is suboptimally reported in public health studies. Its use in studying NCDs will benefit from clarified best practices and improved rigor to establish its usefulness and facilitate replication, interpretation, and application.

  4. Engineering model cryocooler test results

    International Nuclear Information System (INIS)

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1992-01-01

    This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads.

  5. HIV Testing among Men Who Have Sex with Men (MSM): Systematic Review of Qualitative Evidence

    Science.gov (United States)

    Lorenc, Theo; Marrero-Guillamon, Isaac; Llewellyn, Alexis; Aggleton, Peter; Cooper, Chris; Lehmann, Angela; Lindsay, Catriona

    2011-01-01

    We conducted a systematic review of qualitative evidence relating to the views and attitudes of men who have sex with men (MSM) concerning testing for HIV. Studies conducted in high-income countries (Organisation for Economic Co-operation and Development members) since 1996 were included. Seventeen studies were identified, most of gay or bisexual…

  6. Accuracy of monofilament testing to diagnose peripheral neuropathy: a systematic review

    NARCIS (Netherlands)

    Dros, Jacquelien; Wewerinke, Astrid; Bindels, Patrick J.; van Weert, Henk C.

    2009-01-01

    We wanted to summarize evidence about the diagnostic accuracy of the 5.07/10-g monofilament test in peripheral neuropathy. We conducted a systematic review of studies in which the accuracy of the 5.07/10-g monofilament was evaluated to detect peripheral neuropathy of any cause using nerve conduction

  7. Accuracy of Monofilament Testing to Diagnose Peripheral Neuropathy: A Systematic Review

    NARCIS (Netherlands)

    Dros, J.; Wewerinke, A.; Bindels, P.J.; van Weert, H.C.

    2009-01-01

    PURPOSE We wanted to summarize evidence about the diagnostic accuracy of the 5.07/10-g monofilament test in peripheral neuropathy. METHODS We conducted a systematic review of studies in which the accuracy of the 5.07/10-g monofilament was evaluated to detect peripheral neuropathy of any cause using

  8. Cue-Controlled Relaxation and Systematic Desensitization versus Nonspecific Factors in Treating Test Anxiety.

    Science.gov (United States)

    Russell, Richard K.; Lent, Robert W.

    1982-01-01

    Compared the efficacy of two behavioral anxiety reduction techniques against "subconscious reconditioning," an empirically derived placebo method. Examination of within-group changes showed systematic desensitization produced significant reductions in test and trait anxiety, and remaining treatments and the placebo demonstrated…

  9. Diagnostic accuracy of scapular physical examination tests for shoulder disorders: a systematic review.

    Science.gov (United States)

    Wright, Alexis A; Wassinger, Craig A; Frank, Mason; Michener, Lori A; Hegedus, Eric J

    2013-09-01

    To systematically review and critique the evidence regarding the diagnostic accuracy of physical examination tests for the scapula in patients with shoulder disorders. A systematic, computerised literature search of PubMED, EMBASE, CINAHL and the Cochrane Library databases (from database inception through January 2012) using keywords related to diagnostic accuracy of physical examination tests of the scapula. The Quality Assessment of Diagnostic Accuracy Studies tool was used to critique the quality of each paper. Eight articles met the inclusion criteria; three were considered to be of high quality. Of the three high-quality studies, two were in reference to a 'diagnosis' of shoulder pain. Only one high-quality article referenced specific shoulder pathology of acromioclavicular dislocation with reported sensitivity of 71% and 41% for the scapular dyskinesis and SICK scapula test, respectively. Overall, no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.

  10. Testing homogeneity in Weibull-regression models.

    Science.gov (United States)

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

    In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and, in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.
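As background for the Weibull setting used above, here is a minimal sketch of uncensored Weibull maximum likelihood (no covariates, no random effect): at each fixed shape the scale has a closed-form profile MLE, so the shape can be found by a one-dimensional search. The simulation numbers and the `weibull_loglik`/`profile_scale` helpers are illustrative, not from the paper.

```python
import numpy as np

def weibull_loglik(x, shape, scale):
    """Uncensored Weibull log-likelihood with shape k and scale lam:
    sum of log(k/lam) + (k-1)*log(x/lam) - (x/lam)**k."""
    z = x / scale
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape)

def profile_scale(x, shape):
    # setting d(loglik)/d(scale)=0 gives the closed form lam = mean(x**k)**(1/k)
    return np.mean(x**shape) ** (1 / shape)

rng = np.random.default_rng(1)
x = rng.weibull(2.0, size=2000) * 3.0   # true shape 2, true scale 3

shapes = np.linspace(0.5, 4.0, 351)
ll = [weibull_loglik(x, k, profile_scale(x, k)) for k in shapes]
k_hat = shapes[int(np.argmax(ll))]
lam_hat = profile_scale(x, k_hat)
```

The score tests in the paper build on exactly this kind of fitted null model (the "conventional regression model without the random effect"), which is what makes them cheap to apply.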

  11. HIV Testing and Counseling Among Female Sex Workers: A Systematic Literature Review.

    Science.gov (United States)

    Tokar, Anna; Broerse, Jacqueline E W; Blanchard, James; Roura, Maria

    2018-02-20

    HIV testing uptake continues to be low among Female Sex Workers (FSWs). We synthesize evidence on barriers and facilitators to HIV testing among FSWs, as well as frequencies of testing, willingness to test, and return rates to collect results. We systematically searched the MEDLINE/PubMed, EMBASE, and SCOPUS databases for articles published in English between January 2000 and November 2017. Out of 5036 references screened, we retained 36 papers. The two barriers to HIV testing most commonly reported were financial and time costs (including low income, transportation costs, time constraints, and formal/informal payments), as well as the stigma and discrimination ascribed to HIV-positive people and sex workers. Social support facilitated testing, with consistently higher uptake amongst married FSWs and women who were encouraged to test by peers and managers. The consistent finding that social support facilitated HIV testing calls for its inclusion in current HIV testing strategies addressed at FSWs.

  12. Model Checking and Model-based Testing in the Railway Domain

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    This chapter describes some approaches and emerging trends for verification and model-based testing of railway control systems. We describe state-of-the-art methods and associated tools for verifying interlocking systems and their configuration data, using bounded model checking and k...... with good test strength are explained. Interlocking systems represent just one class of many others, where concrete system instances are created from generic representations, using configuration data for determining the behaviour of the instances. We explain how the systematic transition from generic...... to concrete instances in the development path is complemented by associated transitions in the verification and testing paths....

  13. Systematic review of prognostic models in traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Roberts Ian

    2006-11-01

    Full Text Available Abstract Background Traumatic brain injury (TBI) is a leading cause of death and disability world-wide. The ability to accurately predict patient outcome after TBI has an important role in clinical practice and research. Prognostic models are statistical models that combine two or more items of patient data to predict clinical outcome. They may improve predictions in TBI patients. Multiple prognostic models for TBI have accumulated for decades but none of them is widely used in clinical practice. The objective of this systematic review is to critically assess existing prognostic models for TBI. Methods Studies that combine at least two variables to predict any outcome in patients with TBI were searched in PUBMED and EMBASE. Two reviewers independently examined titles and abstracts and assessed whether each met the pre-defined inclusion criteria. Results A total of 53 reports including 102 models were identified. Almost half (47%) were derived from adult patients. Three quarters of the models included fewer than 500 patients. Most of the models (93%) were from high-income country populations. Logistic regression was the most common analytical strategy used to derive models (47%). In relation to the quality of the derivation models (n = 66), only 15% reported less than 10% loss to follow-up, 68% did not justify the rationale for including the predictors, 11% conducted an external validation, and only 19% of the logistic models presented the results in a clinically user-friendly way. Conclusion Prognostic models are frequently published but they are developed from small samples of patients, their methodological quality is poor and they are rarely validated on external populations. Furthermore, they are not clinically practical as they are not presented to physicians in a user-friendly way. Finally, because only a few are developed using populations from low- and middle-income countries, where most trauma occurs, the generalizability to these settings is limited.

  14. The Model Identification Test: A Limited Verbal Science Test

    Science.gov (United States)

    McIntyre, P. J.

    1972-01-01

    Describes the production of a test with a low verbal load for use with elementary school science students. Animated films were used to present appropriate and inappropriate models of the behavior of particles of matter. (AL)

  15. Theoretical Models, Assessment Frameworks and Test Construction.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  16. Geochemical Testing And Model Development - Residual Tank Waste Test Plan

    International Nuclear Information System (INIS)

    Cantrell, K.J.; Connelly, M.P.

    2010-01-01

    This Test Plan describes the testing and chemical analyses for release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  17. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...

  18. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  19. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
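The item-selection idea in these two records can be sketched in a greatly simplified form: pick the unused 2PL item with the highest Fisher information at the current ability estimate, subject to a per-topic quota. This greedy step is an illustration only, not the authors' model, which first assembles a full test meeting all constraints before selecting an item. All item parameters and helper names here are hypothetical.

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    a^2 * p * (1 - p), with p the probability of a correct response."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a * a * p * (1 - p)

def pick_item(theta, items, used, max_per_topic, topic_count):
    """Greedy step: most informative unused item whose topic quota is open.
    items is a list of (a, b, topic); returns the chosen item index."""
    best, best_info = None, -1.0
    for i, (a, b, topic) in enumerate(items):
        if i in used or topic_count.get(topic, 0) >= max_per_topic:
            continue
        fi = info_2pl(theta, a, b)
        if fi > best_info:
            best, best_info = i, fi
    return best

items = [(1.0, 0.0, "algebra"), (1.5, 0.2, "algebra"),
         (0.8, -1.0, "geometry"), (1.2, 0.1, "geometry")]
choice = pick_item(theta=0.0, items=items, used=set(),
                   max_per_topic=1, topic_count={})
```

At theta = 0 the highly discriminating algebra item (a = 1.5, b = 0.2) carries the most information, so the greedy step selects it; a real constrained design would also guarantee that the remaining quota can still be filled.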

  20. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  1. A systematic review of tests for lymph node status in primary endometrial cancer.

    Science.gov (United States)

    Selman, Tara J; Mann, Christopher H; Zamora, Javier; Khan, Khalid S

    2008-05-05

    The lymph node status of a patient is a key determinant in staging, prognosis and adjuvant treatment of endometrial cancer. Despite this, the potential additional morbidity associated with lymphadenectomy makes its role controversial. This study systematically reviews the accuracy literature on sentinel node biopsy, ultrasound scanning, magnetic resonance imaging (MRI) and computed tomography (CT) for determining lymph node status in endometrial cancer. Relevant articles were identified from MEDLINE (1966-2006), EMBASE (1980-2006), MEDION, the Cochrane library, hand searching of reference lists from primary articles and reviews, conference abstracts and contact with experts in the field. The review included 18 relevant primary studies (693 women). Data were extracted for study characteristics and quality. Bivariate random-effects model meta-analysis was used to estimate the diagnostic accuracy of the various index tests. MRI (pooled positive LR 26.7, 95% CI 10.6-67.6 and negative LR 0.29, 95% CI 0.17-0.49) and successful sentinel node biopsy (pooled positive LR 18.9, 95% CI 6.7-53.2 and negative LR 0.22, 95% CI 0.1-0.48) were the most accurate tests. CT was less accurate (pooled positive LR 3.8, 95% CI 2.0-7.3 and negative LR 0.62, 95% CI 0.45-0.86). Only one study reported the use of ultrasound scanning. MRI and sentinel node biopsy showed similar diagnostic accuracy to each other, and higher accuracy than CT scanning, in confirming lymph node status among women with primary endometrial cancer, although the comparisons made are indirect and hence subject to bias. MRI should be used in preference, in light of the ASTEC trial, because of its non-invasive nature.

  2. A systematic review of tests for lymph node status in primary endometrial cancer

    Directory of Open Access Journals (Sweden)

    Zamora Javier

    2008-05-01

    Full Text Available Abstract Background The lymph node status of a patient is a key determinant in staging, prognosis and adjuvant treatment of endometrial cancer. Despite this, the potential additional morbidity associated with lymphadenectomy makes its role controversial. This study systematically reviews the accuracy literature on sentinel node biopsy, ultrasound scanning, magnetic resonance imaging (MRI) and computed tomography (CT) for determining lymph node status in endometrial cancer. Methods Relevant articles were identified from MEDLINE (1966–2006), EMBASE (1980–2006), MEDION, the Cochrane library, hand searching of reference lists from primary articles and reviews, conference abstracts and contact with experts in the field. The review included 18 relevant primary studies (693 women). Data were extracted for study characteristics and quality. Bivariate random-effects model meta-analysis was used to estimate the diagnostic accuracy of the various index tests. Results MRI (pooled positive LR 26.7, 95% CI 10.6–67.6 and negative LR 0.29, 95% CI 0.17–0.49) and successful sentinel node biopsy (pooled positive LR 18.9, 95% CI 6.7–53.2 and negative LR 0.22, 95% CI 0.1–0.48) were the most accurate tests. CT was less accurate (pooled positive LR 3.8, 95% CI 2.0–7.3 and negative LR 0.62, 95% CI 0.45–0.86). Only one study reported the use of ultrasound scanning. Conclusion MRI and sentinel node biopsy showed similar diagnostic accuracy to each other, and higher accuracy than CT scanning, in confirming lymph node status among women with primary endometrial cancer, although the comparisons made are indirect and hence subject to bias. MRI should be used in preference, in light of the ASTEC trial, because of its non-invasive nature.

  3. A Systematic Literature Review of Agile Maturity Model Research

    Directory of Open Access Journals (Sweden)

    Vaughan Henriques

    2017-02-01

    Full Text Available Background/Aim/Purpose: A commonly implemented software process improvement framework is the capability maturity model integrated (CMMI). Existing literature indicates higher levels of CMMI maturity could result in a loss of agility due to its organizational focus. To maintain agility, research has focussed attention on agile maturity models. The objective of this paper is to find the common research themes and conclusions in agile maturity model research. Methodology: This research adopts a systematic approach to agile maturity model research, using Google Scholar, Science Direct, and IEEE Xplore as sources. In total 531 articles were initially found matching the search criteria, which was filtered to 39 articles by applying specific exclusion criteria. Contribution: The article highlights the trends in agile maturity model research, specifically bringing to light the lack of research providing validation of such models. Findings: Two major themes emerge, being the coexistence of agile and CMMI and the development of agile principle-based maturity models. The research trend indicates an increase in agile maturity model articles, particularly in the latter half of the last decade, with concentrations of research coinciding with version updates of CMMI. While there is general consensus around higher CMMI maturity levels being incompatible with true agility, there is evidence of the two coexisting when agile is introduced into already highly matured environments. Future Research: Future research direction for this topic should include how to attain higher levels of CMMI maturity using only agile methods, how governance is addressed in agile environments, and whether existing agile maturity models relate to improved project success.

  4. Anaerobic exercise testing in rehabilitation : A systematic review of available tests and protocols

    NARCIS (Netherlands)

    Krops, Leonie A.; Albada, Trijntje; van der Woude, Lucas H. V.; Hijmans, Juha M.; Dekker, Rienk

    Objective: Anaerobic capacity assessment in rehabilitation has received increasing scientific attention in recent years. However, anaerobic capacity is not tested consistently in clinical rehabilitation practice. This study reviews tests and protocols for anaerobic capacity in adults with various

  5. User testing of an adaptation of fishbone diagrams to depict results of systematic reviews.

    Science.gov (United States)

    Gartlehner, Gerald; Schultes, Marie-Therese; Titscher, Viktoria; Morgan, Laura C; Bobashev, Georgiy V; Williams, Peyton; West, Suzanne L

    2017-12-12

    Summary of findings tables in systematic reviews are highly informative but require epidemiological training to be interpreted correctly. The use of fishbone diagrams as graphical displays could offer researchers an effective approach to simplify content for readers with limited epidemiological training. In this paper we demonstrate how fishbone diagrams can be applied to systematic reviews and present the results of an initial user testing. Findings from two systematic reviews were graphically depicted in the form of the fishbone diagram. To test the utility of fishbone diagrams compared with summary of findings tables, we developed and pilot-tested an online survey using Qualtrics. Respondents were randomized to the fishbone diagram or a summary of findings table presenting the same body of evidence. They answered questions in both open-ended and closed-answer formats; all responses were anonymous. Measures of interest focused on first and second impressions, the ability to find and interpret critical information, as well as user experience with both displays. We asked respondents about the perceived utility of fishbone diagrams compared to summary of findings tables. We analyzed quantitative data by conducting t-tests and comparing descriptive statistics. Based on real-world systematic reviews, we provide two different fishbone diagrams to show how they might be used to display complex information in a clear and succinct manner. User testing on 77 students with basic epidemiological training revealed that participants preferred summary of findings tables over fishbone diagrams. Significantly more participants liked the summary of findings table than the fishbone diagram (71.8% vs. 44.8%). User testing, however, did not support the utility of such graphical displays.

  6. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
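
The fine-grained "unit" testing discussed above can be sketched for a small numerical kernel. The routine and its reference points below are hypothetical illustrations (a standard Magnus-form fit), not code from any actual climate model:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus-form approximation of saturation vapor pressure (hPa).

    A hypothetical stand-in for a small numerical kernel inside a
    climate model; the constants are a common Magnus fit.
    """
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def test_saturation_vapor_pressure():
    # Fine-grained checks against a known reference point, with an
    # explicit floating-point tolerance instead of exact equality.
    assert math.isclose(saturation_vapor_pressure(0.0), 6.112, rel_tol=1e-9)
    # Monotonicity over a coarse sweep: a property-style check that
    # catches sign and bracketing errors without a full model run.
    values = [saturation_vapor_pressure(t) for t in range(-40, 41, 5)]
    assert all(a < b for a, b in zip(values, values[1:]))

test_saturation_vapor_pressure()
print("all checks passed")
```

Such tests isolate defects to a single routine in milliseconds, rather than requiring a full simulation and output analysis to notice something is wrong.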

  7. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    2001-01-01

    Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels and a gas analysing system. The ITER model pump was ordered from the company L'Air Liquide in the form of a NET contract. (author)

  8. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    1999-01-01

    Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels and a gas analysing system. The ITER model pump was ordered from the company L'Air Liquide in the form of a NET contract. (author)

  9. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
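
A central difficulty behind the exact and optimum tests discussed above is that a variance component sits on the boundary of its parameter space under the null. A standard asymptotic consequence (a textbook result, not a method specific to this volume) is that the likelihood-ratio statistic follows a 50:50 mixture of chi-square(0) and chi-square(1) rather than a plain chi-square(1):

```python
import math

def boundary_lr_pvalue(lr_stat):
    """P-value for a likelihood-ratio test of H0: variance component = 0.

    Because the variance component lies on the boundary of the
    parameter space under H0, the LR statistic is asymptotically a
    50:50 mixture of chi-square(0) and chi-square(1); the chi-square(0)
    half is a point mass at zero, so only half the chi-square(1) tail
    contributes.
    """
    if lr_stat <= 0:
        return 1.0
    # P(chi2_1 > x) = erfc(sqrt(x/2)); halve it for the mixture.
    return 0.5 * math.erfc(math.sqrt(lr_stat / 2.0))

# A naive chi-square(1) reference would give a p-value twice as
# large, i.e. an unnecessarily conservative test.
print(round(boundary_lr_pvalue(2.706), 4))
```

The statistic 2.706 is the 90th percentile of chi-square(1), so the mixture p-value comes out near 0.05 where the naive test would report 0.10.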

  10. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  11. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package, by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...
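
The Satterthwaite approximation mentioned above reduces, in its simplest two-sample form, to the familiar Welch-Satterthwaite effective degrees of freedom. A Python sketch of that special case (lmerTest itself is an R package and generalises this idea to arbitrary fixed-effect contrasts in mixed models):

```python
def satterthwaite_df(var1, n1, var2, n2):
    """Welch-Satterthwaite effective degrees of freedom.

    The simplest instance of the approximation lmerTest applies to
    mixed-model t and F tests: match the first two moments of a
    linear combination of variance estimates to a scaled chi-square,
    then use the matched degrees of freedom for the t test.
    """
    v1, v2 = var1 / n1, var2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Equal variances and group sizes recover the classical pooled df
# (n1 + n2 - 2); an unbalanced, noisy group sharply reduces the df.
print(satterthwaite_df(1.0, 10, 1.0, 10))
print(satterthwaite_df(4.0, 5, 0.5, 50))
```

With equal variances and ten observations per group the formula returns the classical 18 degrees of freedom; the unbalanced case is pulled down toward the small, noisy group.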

  12. Diagnostic accuracy of physical examination tests of the ankle/foot complex: a systematic review.

    Science.gov (United States)

    Schwieterman, Braun; Haas, Deniele; Columber, Kirby; Knupp, Darren; Cook, Chad

    2013-08-01

    Orthopedic special tests of the ankle/foot complex are routinely used during the physical examination process in order to help diagnose ankle/lower leg pathologies. The purpose of this systematic review was to investigate the diagnostic accuracy of ankle/lower leg special tests. A search of the current literature was conducted using PubMed, CINAHL, SPORTDiscus, ProQuest Nursing and Allied Health Sources, Scopus, and Cochrane Library. Studies were eligible if they included the following: 1) a diagnostic clinical test of musculoskeletal pathology in the ankle/foot complex, 2) description of the clinical test or tests, 3) a report of the diagnostic accuracy of the clinical test (e.g. sensitivity and specificity), and 4) an acceptable reference standard for comparison. The quality of included studies was determined by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Nine diagnostic accuracy studies met the inclusion criteria for this systematic review; analyzing a total of 16 special tests of the ankle/foot complex. After assessment using the QUADAS-2, only one study had low risk of bias and low concerns regarding applicability. Most ankle/lower leg orthopedic special tests are confirmatory in nature and are best utilized at the end of the physical examination. Most of the studies included in this systematic review demonstrate notable biases, which suggest that results and recommendations in this review should be taken as a guide rather than an outright standard. There is need for future research with more stringent study design criteria so that more accurate diagnostic power of ankle/lower leg special tests can be determined. 3a.
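
The accuracy measures summarised in such reviews (sensitivity, specificity, likelihood ratios) are simple functions of a 2x2 table; the counts below are hypothetical, chosen to illustrate the "confirmatory" profile the review describes:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # how much a positive result raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative result lowers the odds
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts for a confirmatory special test: high
# specificity with modest sensitivity, so a positive result helps
# rule the condition in while a negative result is uninformative.
sens, spec, lr_pos, lr_neg = diagnostic_accuracy(tp=30, fp=5, fn=20, tn=95)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.1f}")
```

This is why such tests are "best utilized at the end of the physical examination": they confirm a suspicion already raised by other findings rather than screen for it.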

  13. Field testing of bioenergetic models

    International Nuclear Information System (INIS)

    Nagy, K.A.

    1985-01-01

    Doubly labeled water provides a direct measure of the rate of carbon dioxide production by free-living animals. With appropriate conversion factors, based on chemical composition of the diet and assimilation efficiency, field metabolic rate (FMR), in units of energy expenditure, and field feeding rate can be estimated. Validation studies indicate that doubly labeled water measurements of energy metabolism are accurate to within 7% in reptiles, birds, and mammals. This paper discusses the use of doubly labeled water to generate empirical models for FMR and food requirements for a variety of animals
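
The conversion chain described above, from measured CO2 production to field metabolic rate and then feeding rate, can be sketched as follows. The energy equivalent of CO2 and the diet values are illustrative placeholders, since the abstract stresses they depend on diet composition and assimilation efficiency:

```python
def field_metabolic_rate(co2_l_per_day, kj_per_l_co2=21.7):
    """Convert doubly-labeled-water CO2 production to energy expenditure.

    The energy equivalent of CO2 (21.7 kJ/L here is an illustrative
    value) must be chosen from the chemical composition of the diet.
    """
    return co2_l_per_day * kj_per_l_co2

def feeding_rate(fmr_kj_per_day, diet_kj_per_g, assimilation_eff):
    """Grams of food per day needed to cover the measured FMR."""
    return fmr_kj_per_day / (diet_kj_per_g * assimilation_eff)

# Hypothetical lizard measuring 4.0 L CO2/day, eating a diet of
# 22 kJ/g assimilated with 80% efficiency.
fmr = field_metabolic_rate(co2_l_per_day=4.0)
food = feeding_rate(fmr, diet_kj_per_g=22.0, assimilation_eff=0.8)
print(f"FMR = {fmr:.1f} kJ/day, feeding rate = {food:.2f} g/day")
```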

  14. Measurement properties of the craniocervical flexion test: a systematic review protocol.

    Science.gov (United States)

    Araujo, Francisco Xavier de; Ferreira, Giovanni Esteves; Scholl Schell, Maurício; Castro, Marcelo Peduzzi de; Silva, Marcelo Faria; Ribeiro, Daniel Cury

    2018-02-22

    Neck pain is the leading cause of years lived with disability worldwide and it accounts for high economic and societal burden. Altered activation of the neck muscles is a common musculoskeletal impairment presented by patients with neck pain. The craniocervical flexion test with pressure biofeedback unit has been widely used in clinical practice to assess function of deep neck flexor muscles. This systematic review will assess the measurement properties of the craniocervical flexion test for assessing deep cervical flexor muscles. This is a protocol for a systematic review that will follow the Preferred Reporting Items for Systematic Review and Meta-Analysis statement. MEDLINE (via PubMed), EMBASE, PEDro, Cochrane Central Register of Controlled Trials (CENTRAL), Scopus and Science Direct will be systematically searched from inception. Studies of any design that have investigated and reported at least one measurement property of the craniocervical flexion test for assessing the deep cervical flexor muscles will be included. All measurement properties will be considered as outcomes. Two reviewers will independently rate the risk of bias of individual studies using the updated COnsensus-based Standards for the selection of health Measurement Instruments risk of bias checklist. A structured narrative synthesis will be used for data analysis. Quantitative findings for each measurement property will be summarised. The overall rating for a measurement property will be classified as 'positive', 'indeterminate' or 'negative'. The overall rating will be accompanied with a level of evidence. Ethical approval and patient consent are not required since this is a systematic review based on published studies. Findings will be submitted to a peer-reviewed journal for publication. CRD42017062175. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
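
The linear constraint at the heart of the LLTM (each item's Rasch difficulty is a weighted sum of basic operation parameters, beta_i = sum_j q_ij * eta_j) can be shown in a few lines. The weight matrix and parameter values below are hypothetical; estimation itself is what the eRm R package does:

```python
def lltm_item_difficulties(q_matrix, eta):
    """Item difficulties under the linear logistic test model (LLTM).

    Each item's Rasch difficulty is a weighted sum of basic
    (cognitive operation) parameters: beta_i = sum_j q_ij * eta_j.
    """
    return [sum(q * e for q, e in zip(row, eta)) for row in q_matrix]

# Hypothetical example: two arithmetic operations ("carry" and
# "borrow") with basic difficulties 0.8 and 1.2; each row records
# how often an item requires each operation.
Q = [
    [1, 0],  # item 1: one carry
    [0, 1],  # item 2: one borrow
    [2, 1],  # item 3: two carries and a borrow
]
print(lltm_item_difficulties(Q, eta=[0.8, 1.2]))
```

The payoff of the decomposition is that a few operation parameters explain many item difficulties, which is what makes the linear constraints testable against the unrestricted Rasch model.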

  16. Systematic evaluation of non-animal test methods for skin sensitisation safety assessment.

    Science.gov (United States)

    Reisinger, Kerstin; Hoffmann, Sebastian; Alépée, Nathalie; Ashikaga, Takao; Barroso, Joao; Elcombe, Cliff; Gellatly, Nicola; Galbiati, Valentina; Gibbs, Susan; Groux, Hervé; Hibatallah, Jalila; Keller, Donald; Kern, Petra; Klaric, Martina; Kolle, Susanne; Kuehnl, Jochen; Lambrechts, Nathalie; Lindstedt, Malin; Millet, Marion; Martinozzi-Teissier, Silvia; Natsch, Andreas; Petersohn, Dirk; Pike, Ian; Sakaguchi, Hitoshi; Schepky, Andreas; Tailhardat, Magalie; Templier, Marie; van Vliet, Erwin; Maxwell, Gavin

    2015-02-01

    The need for non-animal data to assess skin sensitisation properties of substances, especially cosmetics ingredients, has spawned the development of many in vitro methods. As it is widely believed that no single method can provide a solution, the Cosmetics Europe Skin Tolerance Task Force has defined a three-phase framework for the development of a non-animal testing strategy for skin sensitisation potency prediction. The results of the first phase – systematic evaluation of 16 test methods – are presented here. This evaluation involved generation of data on a common set of ten substances in all methods and systematic collation of information including the level of standardisation, existing test data, potential for throughput, transferability and accessibility in cooperation with the test method developers. A workshop was held with the test method developers to review the outcome of this evaluation and to discuss the results. The evaluation informed the prioritisation of test methods for the next phase of the non-animal testing strategy development framework. Ultimately, the testing strategy – combined with bioavailability and skin metabolism data and exposure consideration – is envisaged to allow establishment of a data integration approach for skin sensitisation safety assessment of cosmetic ingredients.

  17. TESTING GARCH-X TYPE MODELS

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2017-01-01

    We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit...... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and as demonstrated the structure of the inverse information implies that the proposed test neither depends on whether...... the nuisance parameters lie on the boundary of the parameter space, nor on lack of identification. Our general results on GARCH-X type models are applied to Gaussian based GARCH-X models, GARCH-X models with Student's t-distributed innovations as well as the integer-valued GARCH-X (PAR-X) models....

  18. Model- and calibration-independent test of cosmic acceleration

    International Nuclear Information System (INIS)

    Seikel, Marina; Schwarz, Dominik J.

    2009-01-01

    We present a calibration-independent test of the accelerated expansion of the universe using supernova type Ia data. The test is also model-independent in the sense that no assumptions about the content of the universe or about the parameterization of the deceleration parameter are made and that it does not assume any dynamical equations of motion. Yet, the test assumes the universe and the distribution of supernovae to be statistically homogeneous and isotropic. A significant reduction of systematic effects, as compared to our previous, calibration-dependent test, is achieved. Accelerated expansion is detected at a significant level (4.3σ in the 2007 Gold sample, 7.2σ in the 2008 Union sample) if the universe is spatially flat. This result depends, however, crucially on supernovae with a redshift smaller than 0.1, for which the assumption of statistical isotropy and homogeneity is less well established.

  19. The Couplex test cases: models and lessons

    International Nuclear Information System (INIS)

    Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.

    2003-01-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  20. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
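
The core idea of stochastic parameter estimation, augmenting the state with an unknown parameter so the filter learns it online, can be sketched in a linear-Gaussian cartoon. This is not the SPEKF algorithm itself, whose exact mean and covariance formulas apply to a richer model with multiplicative bias; all numbers here are illustrative:

```python
import random

def kalman_augmented(observations, a, q, r):
    """Kalman filter with the state augmented by an additive bias.

    The model is x[k+1] = a*x[k] + b + w with b unknown.  Tracking
    z = (x, b) jointly lets the filter correct the additive model
    error online, a linear-Gaussian cartoon of stochastic parameter
    estimation.
    """
    m = [0.0, 0.0]                       # estimate of (x, b)
    P = [[1.0, 0.0], [0.0, 1.0]]         # its covariance
    F = [[a, 1.0], [0.0, 1.0]]           # b evolves as a constant
    for y in observations:
        # Predict: m <- F m, P <- F P F' + Q (noise only on x).
        m = [F[0][0] * m[0] + F[0][1] * m[1], m[1]]
        FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
        P = [[sum(FP[i][k] * F[j][k] for k in range(2)) for j in range(2)]
             for i in range(2)]
        P[0][0] += q
        # Update with observation y = x + v.
        s = P[0][0] + r
        K = [P[0][0] / s, P[1][0] / s]
        innov = y - m[0]
        m = [m[0] + K[0] * innov, m[1] + K[1] * innov]
        P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]
    return m

# Synthetic "truth" with an unknown additive forcing b = 0.5.
random.seed(0)
a, b_true, x = 0.9, 0.5, 0.0
ys = []
for _ in range(500):
    x = a * x + b_true + random.gauss(0.0, 0.1)
    ys.append(x + random.gauss(0.0, 0.1))
m = kalman_augmented(ys, a=0.9, q=0.01, r=0.01)
print(f"estimated bias = {m[1]:.2f} (true 0.5)")
```

The augmented filter recovers the additive bias from noisy observations alone, which is the mechanism by which parameter estimation increases filtering skill when the forcing is mis-specified.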

  1. Physical examination tests for the diagnosis of femoroacetabular impingement. A systematic review.

    Science.gov (United States)

    Pacheco-Carrillo, Aitana; Medina-Porqueres, Ivan

    2016-09-01

    Numerous clinical tests have been proposed to diagnose FAI, but little is known about their diagnostic accuracy. To summarize and evaluate research on the accuracy of physical examination tests for diagnosis of FAI. A search of the PubMed, SPORTDiscus and CINAHL databases was performed. Studies were considered eligible if they compared the results of physical examination tests to those of a reference standard. Methodological quality and internal validity assessment was performed by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The systematic search strategy revealed 298 potential articles, five of which met the inclusion criteria. After assessment using the QUADAS score, four of the five articles were of high quality. Clinical tests included were Impingement sign, IROP test (Internal Rotation Over Pressure), FABER test (Flexion-Abduction-External Rotation), Stinchfield/RSRL (Resisted Straight Leg Raise) test, Scour test, Maximal squat test, and the Anterior Impingement test. IROP test, impingement sign, and FABER test showed the most sensitive values to identify FAI. The diagnostic accuracy of physical examination tests to assess FAI is limited due to their heterogeneity. There is a strong need for sound research of high methodological quality in this area. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A systematic hub loads model of a horizontal wind turbine

    International Nuclear Information System (INIS)

    Kazacoks, Romans; Jamieson, Peter

    2014-01-01

    The wind turbine industry has focused offshore on increasing the capacity of a single unit by up-scaling its machines. There is, however, a lack of systematic studies of how loads vary with the properties and scaling of a wind turbine. The purpose of this paper is to study how applied blade modifications of similar mass, stiffness and dimensions influence blade root moments and lifetime damage equivalent loads (DELs) of the rotor blades, in order to produce fatigue load trends for the blade root moment as a function of the applied modifications. A linear trend of lifetime DELs with the applied blade modifications was found as long as the natural frequency of the blades of the original (reference) model was preserved; because the control system was tuned for the specific frequency of the reference model, larger modifications of the wind turbine would require retuning the controller.
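
The lifetime DEL metric used above collapses a rainflow-counted load spectrum into a single constant-amplitude load via Miner's rule. A sketch with hypothetical spectrum values follows:

```python
def damage_equivalent_load(ranges, counts, m, n_eq):
    """Lifetime damage equivalent load (DEL).

    Collapses a rainflow-counted load spectrum (load ranges S_i with
    cycle counts n_i) into the constant-amplitude range that would
    cause the same Miner's-rule fatigue damage over n_eq reference
    cycles, for a material with Woehler (S-N) exponent m:
        DEL = (sum_i n_i * S_i**m / n_eq) ** (1/m)
    """
    damage_sum = sum(n * s ** m for s, n in zip(ranges, counts))
    return (damage_sum / n_eq) ** (1.0 / m)

# Hypothetical blade-root bending-moment spectrum (kNm) with an
# exponent m = 10, a value typical of composite blades.
del_value = damage_equivalent_load(
    ranges=[1000.0, 2000.0, 4000.0],
    counts=[1e7, 1e5, 1e3],
    m=10,
    n_eq=1e7,
)
print(f"DEL = {del_value:.0f} kNm")
```

Because the exponent is large, the rare big cycles dominate the damage sum, which is why DEL trends are so sensitive to blade modifications that change the extreme load response.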

  3. Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Niko Speybroeck

    2013-11-01

    Full Text Available Background: The emergence and evolution of socioeconomic inequalities in health involves multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models were used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of using simulation models for studying health inequalities, and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding of, and debate about, existing and new frameworks for socioeconomic inequalities in health.
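
In the spirit of the authors' illustration (whose exact rules are not reproduced here), a minimal agent-based sketch with entirely hypothetical parameters shows how a small individual-level baseline difference plus peer influence can grow into a larger group-level inequality:

```python
import random

def run_abm(n_agents=1000, steps=50, seed=1):
    """Minimal agent-based sketch of SES differences in alcohol abuse.

    All rules and parameter values are hypothetical illustrations,
    not the authors' model.  A slightly higher baseline risk for
    low-SES agents (a stand-in for stress/access mechanisms) is
    amplified by peer influence within each SES group, so a small
    individual-level difference emerges as a larger group-level one.
    """
    random.seed(seed)
    low_ses = [random.random() < 0.5 for _ in range(n_agents)]
    abuse = [False] * n_agents
    for _ in range(steps):
        # Current prevalence within each SES group: the "peer" signal.
        prev = {}
        for g in (True, False):
            members = [a for a, s in zip(abuse, low_ses) if s == g]
            prev[g] = sum(members) / max(1, len(members))
        for i in range(n_agents):
            if not abuse[i]:
                base = 0.02 if low_ses[i] else 0.005
                abuse[i] = random.random() < base + 0.05 * prev[low_ses[i]]
            elif random.random() < 0.1:   # chance of recovery each step
                abuse[i] = False
    n_low = sum(low_ses)
    low = sum(a for a, s in zip(abuse, low_ses) if s) / n_low
    high = sum(a for a, s in zip(abuse, low_ses) if not s) / (n_agents - n_low)
    return low, high

low, high = run_abm()
print(f"abuse prevalence: low-SES {low:.2f} vs high-SES {high:.2f}")
```

Even this toy version exhibits the multilevel and reciprocal effects the review highlights: group prevalence feeds back into individual risk, which in turn reshapes group prevalence.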

  4. Tree-Based Global Model Tests for Polytomous Rasch Models

    Science.gov (United States)

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
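
A global LR test of measurement invariance compares the log-likelihood of a model with item parameters constrained equal across groups to one with group-specific parameters. The sketch below uses hypothetical log-likelihoods and a closed-form chi-square tail that holds for even degrees of freedom (keeping the example dependency-free):

```python
import math

def lr_test(loglik_restricted, loglik_full, df):
    """Likelihood-ratio test statistic and p-value (even df only).

    Compares a model with item parameters fixed across groups
    (restricted) to one with group-specific parameters (full); the
    statistic 2*(llf - llr) is asymptotically chi-square with df
    equal to the number of freed parameters.  For even df the
    chi-square survival function has the closed form
    exp(-x/2) * sum_{j < df/2} (x/2)**j / j!.
    """
    lr = 2.0 * (loglik_full - loglik_restricted)
    half = lr / 2.0
    p = math.exp(-half) * sum(half ** j / math.factorial(j)
                              for j in range(df // 2))
    return lr, p

# Hypothetical fit: freeing 2 item parameters across groups improves
# the log-likelihood by 4.0, giving LR = 8.0 on 2 df.
lr, p = lr_test(loglik_restricted=-1204.0, loglik_full=-1200.0, df=2)
print(f"LR = {lr:.1f}, p = {p:.4f}")
```

A small p-value signals that measurement invariance is violated somewhere; the tree-based methods in the article then localize which covariates define the offending groups.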

  5. User testing of an adaptation of fishbone diagrams to depict results of systematic reviews

    Directory of Open Access Journals (Sweden)

    Gerald Gartlehner

    2017-12-01

    Full Text Available Abstract Background Summary of findings tables in systematic reviews are highly informative but require epidemiological training to be interpreted correctly. The usage of fishbone diagrams as graphical displays could offer researchers an effective approach to simplify content for readers with limited epidemiological training. In this paper we demonstrate how fishbone diagrams can be applied to systematic reviews and present the results of an initial user testing. Methods Findings from two systematic reviews were graphically depicted in the form of the fishbone diagram. To test the utility of fishbone diagrams compared with summary of findings tables, we developed and pilot-tested an online survey using Qualtrics. Respondents were randomized to the fishbone diagram or a summary of findings table presenting the same body of evidence. They answered questions in both open-ended and closed-answer formats; all responses were anonymous. Measures of interest focused on first and second impressions, the ability to find and interpret critical information, as well as user experience with both displays. We asked respondents about the perceived utility of fishbone diagrams compared to summary of findings tables. We analyzed quantitative data by conducting t-tests and comparing descriptive statistics. Results Based on real world systematic reviews, we provide two different fishbone diagrams to show how they might be used to display complex information in a clear and succinct manner. User testing on 77 students with basic epidemiological training revealed that participants preferred summary of findings tables over fishbone diagrams. Significantly more participants liked the summary of findings table than the fishbone diagram (71.8% vs. 44.8%; p < .01); significantly more participants found the fishbone diagram confusing (63.2% vs. 35.9%, p < .05) or indicated that it was difficult to find information (65.8% vs. 45%; p < .01). However, more than half
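
The group comparison reported above (71.8% vs. 44.8%) can be checked with a two-proportion z-test. The group sizes below are guesses consistent with the 77 randomized respondents (the paper's exact split is not given here), so the resulting p-value is only indicative:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal survival function.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical counts reproducing the reported percentages
# approximately: 28/39 = 71.8% liked the table, 17/38 = 44.7%
# liked the fishbone diagram.
z, p = two_proportion_ztest(x1=28, n1=39, x2=17, n2=38)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these assumed counts the difference is significant at the 5% level; the paper's own analysis (whose exact counts and test may differ) reports p < .01.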

  6. Reliability of physical functioning tests in patients with low back pain: a systematic review.

    Science.gov (United States)

    Denteneer, Lenie; Van Daele, Ulrike; Truijen, Steven; De Hertogh, Willem; Meirte, Jill; Stassijns, Gaetane

    2018-01-01

    The aim of this study was to provide a comprehensive overview of physical functioning tests in patients with low back pain (LBP) and to investigate their reliability. A systematic computerized search was finalized in four different databases on June 24, 2017: PubMed, Web of Science, Embase, and MEDLINE. Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines were followed during all stages of this review. Clinical studies that investigate the reliability of physical functioning tests in patients with LBP were eligible. The methodological quality of the included studies was assessed with the use of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. To come to final conclusions on the reliability of the identified clinical tests, the current review assessed three factors, namely, outcome assessment, methodological quality, and consistency of description. A total of 20 studies were found eligible and 38 clinical tests were identified. Good overall test-retest reliability was concluded for the extensor endurance test (intraclass correlation coefficient [ICC]=0.93-0.97), the flexor endurance test (ICC=0.90-0.97), the 5-minute walking test (ICC=0.89-0.99), the 50-ft walking test (ICC=0.76-0.96), the shuttle walk test (ICC=0.92-0.99), the sit-to-stand test (ICC=0.91-0.99), and the loaded forward reach test (ICC=0.74-0.98). For inter-rater reliability, only one test, namely, the Biering-Sörensen test (ICC=0.88-0.99), could be concluded to have an overall good inter-rater reliability. None of the identified clinical tests could be concluded to have a good intrarater reliability. Further investigation should focus on a better overall study methodology and the use of identical protocols for the description of clinical tests. The assessment of reliability is only a first step in the recommendation process for the use of clinical tests. In future research, the identified clinical tests in the
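
The ICC values quoted above come from a two-way ANOVA decomposition of the ratings. A sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-measure form, using hypothetical test-retest data (reliability studies may instead report ICC(3,1) or average-measure forms):

```python
def icc_2_1(scores):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    `scores` is a list of subjects, each a list of ratings from the
    same k sessions or raters.  One of several ICC forms; which form
    a study reports should always be checked.
    """
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    sess_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_sess = n * sum((m - grand) ** 2 for m in sess_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_tot - ss_subj - ss_sess
    ms_subj = ss_subj / (n - 1)
    ms_sess = ss_sess / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (
        ms_subj + (k - 1) * ms_err + k * (ms_sess - ms_err) / n
    )

# Hypothetical sit-to-stand times (s) for five subjects on two days:
# large between-subject spread, small within-subject change.
data = [[12.1, 12.3], [15.0, 14.6], [9.8, 10.1], [20.2, 19.8], [11.5, 11.9]]
print(round(icc_2_1(data), 2))
```

High ICCs of the kind reported in the review arise exactly from this pattern: between-subject variance dominating session-to-session measurement error.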

  7. Disentangling dark energy and cosmic tests of gravity from weak lensing systematics

    Science.gov (United States)

    Laszlo, Istvan; Bean, Rachel; Kirk, Donnacha; Bridle, Sarah

    2012-06-01

    We consider the impact of key astrophysical and measurement systematics on constraints on dark energy and modifications to gravity on cosmic scales. We focus on upcoming photometric ‘stage III’ and ‘stage IV’ large-scale structure surveys such as the Dark Energy Survey (DES), the Subaru Measurement of Images and Redshifts survey, the Euclid survey, the Large Synoptic Survey Telescope (LSST) and Wide Field Infra-Red Space Telescope (WFIRST). We illustrate the different redshift dependencies of gravity modifications compared to intrinsic alignments, the main astrophysical systematic. The way in which systematic uncertainties, such as galaxy bias and intrinsic alignments, are modelled can change dark energy equation-of-state parameter and modified gravity figures of merit by a factor of 4. The inclusion of cross-correlations of cosmic shear and galaxy position measurements helps reduce the loss of constraining power from the lensing shear surveys. When forecasts for Planck cosmic microwave background and stage IV surveys are combined, constraints on the dark energy equation-of-state parameter and modified gravity model are recovered, relative to those from shear data with no systematic uncertainties, provided fewer than 36 free parameters in total are used to describe the galaxy bias and intrinsic alignment models as a function of scale and redshift. While some uncertainty in the intrinsic alignment (IA) model can be tolerated, it is going to be important to be able to parametrize IAs well in order to realize the full potential of upcoming surveys. To facilitate future investigations, we also provide a fitting function for the matter power spectrum arising from the phenomenological modified gravity model we consider.

  8. Barriers to workplace HIV testing in South Africa: a systematic review of the literature.

    Science.gov (United States)

    Weihs, Martin; Meyer-Weitz, Anna

    2016-01-01

    Low workplace HIV testing uptake makes effective management of HIV and AIDS difficult for South African organisations. Identifying barriers to workplace HIV testing is therefore crucial to inform urgently needed interventions aimed at increasing workplace HIV testing. This study reviewed literature on workplace HIV testing barriers in South Africa. PubMed, ScienceDirect, PsycINFO and SA Publications were systematically searched. Studies needed to include measures to assess perceived or real barriers to participate in HIV Counselling and Testing (HCT) at the workplace or discuss perceived or real barriers of HIV testing at the workplace based on collected data, provide qualitative or quantitative evidence related to the research topic and needed to refer to workplaces in South Africa. Barriers were defined as any factor on economic, social, personal, environmental or organisational level preventing employees from participating in workplace HIV testing. Four peer-reviewed studies were included, two with quantitative and two with qualitative study designs. The overarching barriers across the studies were fear of compromised confidentiality, being stigmatised or discriminated against in the event of testing HIV positive or being observed participating in HIV testing, and a low personal risk perception. Furthermore, it appeared that an awareness of an HIV-positive status hindered HIV testing at the workplace. Further research evidence on South African workplace barriers to HIV testing will enhance related interventions. This systematic review found only very little and contextualised evidence about workplace HCT barriers in South Africa, making it difficult to generalise and insufficient to inform new interventions aimed at increasing workplace HCT uptake.

  9. Physical examination tests for screening and diagnosis of cervicogenic headache: A systematic review.

    Science.gov (United States)

    Rubio-Ochoa, J; Benítez-Martínez, J; Lluch, E; Santacruz-Zaragozá, S; Gómez-Contreras, P; Cook, C E

    2016-02-01

    It has been suggested that differential diagnosis of headaches should consist of a robust subjective examination and a detailed physical examination of the cervical spine. Cervicogenic headache (CGH) is a form of headache that involves referred pain from the neck. To our knowledge, no studies have summarized the reliability and diagnostic accuracy of physical examination tests for CGH. The aim of this study was to summarize the reliability and diagnostic accuracy of physical examination tests used to diagnose CGH. A systematic review following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines was performed in four electronic databases (MEDLINE, Web of Science, Embase and Scopus). Full text reports concerning physical tests for the diagnosis of CGH which reported the clinometric properties for assessment of CGH, were included and screened for methodological quality. Quality Appraisal for Reliability Studies (QAREL) and Quality Assessment of Studies of Diagnostic Accuracy (QUADAS-2) scores were completed to assess article quality. Eight articles were retrieved for quality assessment and data extraction. Studies investigating diagnostic reliability of physical examination tests for CGH scored poorer on methodological quality (higher risk of bias) than those of diagnostic accuracy. There is sufficient evidence showing high levels of reliability and diagnostic accuracy of the selected physical examination tests for the diagnosis of CGH. The cervical flexion-rotation test (CFRT) exhibited both the highest reliability and the strongest diagnostic accuracy for the diagnosis of CGH. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Diagnostic tests and algorithms used in the investigation of haematuria: systematic reviews and economic evaluation.

    Science.gov (United States)

    Rodgers, M; Nixon, J; Hempel, S; Aho, T; Kelly, J; Neal, D; Duffy, S; Ritchie, G; Kleijnen, J; Westwood, M

    2006-06-01

    To determine the most effective diagnostic strategy for the investigation of microscopic and macroscopic haematuria in adults. Electronic databases from inception to October 2003, updated in August 2004. A systematic review was undertaken according to published guidelines. Decision analytic modelling was undertaken, based on the findings of the review, expert opinion and additional information from the literature, to assess the relative cost-effectiveness of plausible alternative tests that are part of diagnostic algorithms for haematuria. A total of 118 studies met the inclusion criteria. No studies were identified that evaluated the effectiveness of diagnostic algorithms for haematuria, or of screening for haematuria or investigating its underlying cause. Eighteen of 19 identified studies evaluated dipstick tests; their data suggested that dipsticks are moderately useful in establishing the presence of haematuria but cannot be used to rule it out. Six studies using haematuria as a test for the presence of a disease indicated that the detection of microhaematuria cannot alone be considered a useful test either to rule in or rule out the presence of a significant underlying pathology (urinary calculi or bladder cancer). Forty-eight of 80 studies addressed methods to localise the source of bleeding (renal or lower urinary tract). The methods and thresholds described in these studies varied greatly, precluding any estimate of a 'best performance' threshold that could be applied across patient groups. However, studies of red blood cell morphology that used a cut-off value of 80% dysmorphic cells for glomerular disease reported consistently high specificities (potentially useful in ruling in a renal cause for haematuria). The reported sensitivities were generally low. Twenty-eight studies included data on the accuracy of laboratory tests (tumour markers, cytology) for the diagnosis of bladder cancer. The majority of tumour marker studies

  11. Model tests for prestressed concrete pressure vessels

    International Nuclear Information System (INIS)

    Stoever, R.

    1975-01-01

    Investigations with models of reactor pressure vessels are used to check results of three dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described and experimental results are presented. (orig.) [de]

  12. Physical examination tests for the diagnosis of posterior cruciate ligament rupture: a systematic review.

    Science.gov (United States)

    Kopkow, Christian; Freiberg, Alice; Kirschner, Stephan; Seidler, Andreas; Schmitt, Jochen

    2013-11-01

    Systematic literature review. To summarize and evaluate research on the accuracy of physical examination tests for diagnosis of posterior cruciate ligament (PCL) tear. Rupture of the PCL is a severe knee injury that can lead to delayed rehabilitation, instability, or chronic knee pathologies. To our knowledge, there is currently no systematic review of studies on the diagnostic accuracy of clinical examination tests to evaluate the integrity of the PCL. A comprehensive systematic literature search was conducted in MEDLINE from 1946, Embase from 1974, and the Allied and Complementary Medicine Database from 1985 until April 30, 2012. Studies were considered eligible if they compared the results of physical examination tests performed in the context of a PCL physical examination to those of a reference standard (arthroscopy, arthrotomy, magnetic resonance imaging). Methodological quality assessment was performed by 2 independent reviewers using the revised version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The search strategy revealed 1307 articles, of which 11 met the inclusion criteria for this review. In these studies, 11 different physical examination tests were identified. Due to differences in study types, different patient populations, and methodological quality, meta-analysis was not indicated. Presently, most physical examination tests have not been evaluated sufficiently to permit confidence in their ability to either confirm or rule out a PCL tear. The diagnostic accuracy of physical examination tests to assess the integrity of the PCL is largely unknown. There is a strong need for further research in this area. Level of evidence: Diagnosis, level 3a.

  13. A systematic review of the diagnostic performance of orthopedic physical examination tests of the hip.

    Science.gov (United States)

    Rahman, Labib Ataur; Adie, Sam; Naylor, Justine Maree; Mittal, Rajat; So, Sarah; Harris, Ian Andrew

    2013-08-30

    Previous reviews of the diagnostic performance of physical tests of the hip in orthopedics have drawn limited conclusions because of the low to moderate quality of the primary studies published in the literature. This systematic review aims to build on these reviews by assessing a broad range of hip pathologies and employing a more selective approach to the inclusion of studies, in order to accurately gauge diagnostic performance for the purposes of making recommendations for clinical practice and future research. It specifically identifies tests which demonstrate strong and moderate diagnostic performance. A systematic search of Medline, Embase, Embase Classic and CINAHL was conducted to identify studies of hip tests. Our selection criteria included an analysis of internal and external validity. We reported diagnostic performance in terms of sensitivity, specificity, predictive values and likelihood ratios. Likelihood ratios were used to identify tests with strong and moderate diagnostic utility. Only a small proportion of tests reported in the literature have been assessed in methodologically valid primary studies. Sixteen studies were included in our review, producing 56 independent test-pathology combinations. Two tests demonstrated strong clinical utility: the patellar-pubic percussion test for excluding radiologically occult hip fractures (negative LR 0.05, 95% Confidence Interval [CI] 0.03-0.08) and the hip abduction sign for diagnosing sarcoglycanopathies in patients with known muscular dystrophies (positive LR 34.29, 95% CI 10.97-122.30). Fifteen tests demonstrated moderate diagnostic utility for diagnosing and/or excluding hip fractures, symptomatic osteoarthritis and loosening of components post-total hip arthroplasty. We have identified a number of tests demonstrating strong and moderate diagnostic performance. These findings must be viewed with caution as there are concerns over the methodological quality of the primary studies from which we have extracted our
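    The likelihood ratios this review uses to grade diagnostic utility follow directly from sensitivity and specificity: LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity. A minimal sketch (the input numbers below are illustrative, not values from the primary studies):

```python
# Likelihood ratios from sensitivity and specificity -- the metrics used
# to grade diagnostic utility (conventionally LR+ > 10 or LR- < 0.1 is
# considered strong evidence for / against the diagnosis).

def likelihood_ratios(sensitivity, specificity):
    """Return (positive LR, negative LR) for a binary diagnostic test."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Illustrative test with 95% sensitivity and 90% specificity
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.95, specificity=0.90)
print(round(lr_pos, 2), round(lr_neg, 3))  # 9.5 0.056
```

A negative LR of 0.05, as reported for the patellar-pubic percussion test, means a negative result cuts the odds of an occult fracture twenty-fold, which is why it qualifies as strong utility for exclusion.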

  14. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    Science.gov (United States)

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability, including time interval, sample size, and statistical methods, used in patient-reported outcome measures for older people, and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of the number of items in each measure to the sample size for test-retest reliability ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 presented models for ICC calculations and 30 reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items. In particular, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in
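    The ICC models the review asks authors to report can be computed from an n-subjects × k-occasions score matrix. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement, following the Shrout and Fleiss convention), with made-up scores; for a test-retest design k = 2:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement.  `scores` is an (n subjects x k occasions) array."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between occasions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

test = [4.0, 7.0, 5.0, 9.0, 6.0]
retest_same = np.column_stack([test, test])                 # perfect agreement
retest_shift = np.column_stack([test, np.add(test, 1.0)])   # constant +1 bias
print(round(icc_2_1(retest_same), 3))   # 1.0
print(round(icc_2_1(retest_shift), 3))  # 0.881 -- shift penalized
```

The second case shows why the model choice matters: an absolute-agreement ICC penalizes a systematic shift between test and retest, whereas a consistency-type ICC would not.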

  15. A test of systematic coarse-graining of molecular dynamics simulations: Thermodynamic properties

    Science.gov (United States)

    Fu, Chia-Chun; Kulkarni, Pandurang M.; Scott Shell, M.; Gary Leal, L.

    2012-10-01

    Coarse-graining (CG) techniques have recently attracted great interest for providing descriptions at a mesoscopic level of resolution that preserve fluid thermodynamic and transport behaviors with a reduced number of degrees of freedom and hence less computational effort. One fundamental question arises: how well and to what extent can a "bottom-up" developed mesoscale model recover the physical properties of a molecular scale system? To answer this question, we explore systematically the properties of a CG model that is developed to represent an intermediate mesoscale model between the atomistic and continuum scales. This CG model aims to reduce the computational cost relative to a full atomistic simulation, and we assess to what extent it is possible to preserve both the thermodynamic and transport properties of an underlying reference all-atom Lennard-Jones (LJ) system. In this paper, only the thermodynamic properties are considered in detail. The transport properties will be examined in subsequent work. To coarse-grain, we first use the iterative Boltzmann inversion (IBI) to determine a CG potential for a (1-ϕ)N mesoscale particle system, where ϕ is the degree of coarse-graining, so as to reproduce the radial distribution function (RDF) of an N atomic particle system. Even though the uniqueness theorem guarantees a one to one relationship between the RDF and an effective pairwise potential, we find that RDFs are insensitive to the long-range part of the IBI-determined potentials, which provides some significant flexibility in further matching other properties. We then propose a reformulation of IBI as a robust minimization procedure that enables simultaneous matching of the RDF and the fluid pressure. We find that this new method mainly changes the attractive tail region of the CG potentials, and it improves the isothermal compressibility relative to pure IBI. We also find that there are optimal interaction cutoff lengths for the CG system, as a function of
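    The iterative Boltzmann inversion described above updates the pair potential by the Boltzmann-weighted mismatch between the current and target RDFs: V_{i+1}(r) = V_i(r) + kB·T·ln[g_i(r)/g_target(r)], starting from the potential of mean force. A toy sketch of the loop, in which a stand-in function replaces the CG simulation that would normally supply g_i(r) (with this idealized response the potential of mean force is already exact, so the loop converges immediately; real systems need many iterations):

```python
import numpy as np

# Sketch of the iterative Boltzmann inversion (IBI) update rule:
#   V_{i+1}(r) = V_i(r) + kB*T * ln( g_i(r) / g_target(r) )
# In a real workflow g_i(r) is measured from a CG simulation run with
# V_i; the function below is a stand-in, purely to show the loop.

kBT = 1.0                        # energy in units of kB*T
r = np.linspace(0.9, 3.0, 200)   # pair separation grid
g_target = 1.0 + 0.5 * np.exp(-(r - 1.2) ** 2 / 0.05)  # made-up target RDF

def fake_cg_simulation(V):
    """Stand-in for 'run a CG simulation with potential V and measure
    the RDF': an idealized Boltzmann response exp(-V/kBT)."""
    return np.exp(-V / kBT)

V = -kBT * np.log(g_target)      # iteration 0: potential of mean force
for _ in range(20):
    g = fake_cg_simulation(V)
    V = V + kBT * np.log(g / g_target)   # IBI correction term

print(np.max(np.abs(fake_cg_simulation(V) - g_target)) < 1e-6)  # True
```

The paper's reformulated method adds a pressure-matching term to this loop; the plain update above only targets the RDF, which is the insensitivity the authors exploit.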

  16. Social Media Interventions to Promote HIV Testing, Linkage, Adherence, and Retention: Systematic Review and Meta-Analysis

    Science.gov (United States)

    Gupta, Somya; Wang, Jiangtao; Hightow-Weidman, Lisa B; Muessig, Kathryn E; Tang, Weiming; Pan, Stephen; Pendse, Razia; Tucker, Joseph D

    2017-01-01

    Background Social media is increasingly used to deliver HIV interventions for key populations worldwide. However, little is known about the specific uses and effects of social media on human immunodeficiency virus (HIV) interventions. Objective This systematic review examines the effectiveness of social media interventions to promote HIV testing, linkage, adherence, and retention among key populations. Methods We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and Cochrane guidelines for this review and registered it on the International Prospective Register of Systematic Reviews, PROSPERO. We systematically searched six databases and three conference websites using search terms related to HIV, social media, and key populations. We included studies where (1) the intervention was created or implemented on social media platforms, (2) study population included men who have sex with men (MSM), transgender individuals, people who inject drugs (PWID), and/or sex workers, and (3) outcomes included promoting HIV testing, linkage, adherence, and/or retention. Meta-analyses were conducted by Review Manager, version 5.3. Pooled relative risk (RR) and 95% confidence intervals were calculated by random-effects models. Results Among 981 manuscripts identified, 26 studies met the inclusion criteria. We found 18 studies from high-income countries, 8 in middle-income countries, and 0 in low-income countries. Eight were randomized controlled trials, and 18 were observational studies. All studies (n=26) included MSM; five studies also included transgender individuals. The focus of 21 studies was HIV testing, four on HIV testing and linkage to care, and one on antiretroviral therapy adherence. Social media interventions were used to do the following: build online interactive communities to encourage HIV testing/adherence (10 studies), provide HIV testing services (9 studies), disseminate HIV information (9 studies), and develop

  17. Social Media Interventions to Promote HIV Testing, Linkage, Adherence, and Retention: Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Cao, Bolin; Gupta, Somya; Wang, Jiangtao; Hightow-Weidman, Lisa B; Muessig, Kathryn E; Tang, Weiming; Pan, Stephen; Pendse, Razia; Tucker, Joseph D

    2017-11-24

    Social media is increasingly used to deliver HIV interventions for key populations worldwide. However, little is known about the specific uses and effects of social media on human immunodeficiency virus (HIV) interventions. This systematic review examines the effectiveness of social media interventions to promote HIV testing, linkage, adherence, and retention among key populations. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and Cochrane guidelines for this review and registered it on the International Prospective Register of Systematic Reviews, PROSPERO. We systematically searched six databases and three conference websites using search terms related to HIV, social media, and key populations. We included studies where (1) the intervention was created or implemented on social media platforms, (2) study population included men who have sex with men (MSM), transgender individuals, people who inject drugs (PWID), and/or sex workers, and (3) outcomes included promoting HIV testing, linkage, adherence, and/or retention. Meta-analyses were conducted by Review Manager, version 5.3. Pooled relative risk (RR) and 95% confidence intervals were calculated by random-effects models. Among 981 manuscripts identified, 26 studies met the inclusion criteria. We found 18 studies from high-income countries, 8 in middle-income countries, and 0 in low-income countries. Eight were randomized controlled trials, and 18 were observational studies. All studies (n=26) included MSM; five studies also included transgender individuals. The focus of 21 studies was HIV testing, four on HIV testing and linkage to care, and one on antiretroviral therapy adherence. Social media interventions were used to do the following: build online interactive communities to encourage HIV testing/adherence (10 studies), provide HIV testing services (9 studies), disseminate HIV information (9 studies), and develop intervention materials (1 study). Of the
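    The random-effects pooling reported here (pooled RR with 95% CI, computed in Review Manager 5.3) is conventionally done on the log-RR scale with DerSimonian-Laird weights. A sketch with invented 2×2 counts, not data from the review:

```python
import math

# DerSimonian-Laird random-effects pooling of relative risks.
# Each study is ((events, total) intervention, (events, total) control);
# the three tables below are invented for illustration only.
studies = [((30, 100), (18, 100)),
           ((45, 150), (30, 150)),
           ((12, 80),  (10, 80))]

y, w = [], []   # per-study log-RR and inverse-variance weight
for (e1, n1), (e0, n0) in studies:
    rr = (e1 / n1) / (e0 / n0)
    var = 1/e1 - 1/n1 + 1/e0 - 1/n0      # variance of log RR
    y.append(math.log(rr))
    w.append(1 / var)

# Between-study variance tau^2 (method-of-moments estimate)
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights, pooled log-RR, and 95% CI
w_re = [1 / (1 / wi + tau2) for wi in w]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print("RR = %.2f (95%% CI %.2f-%.2f)"
      % (math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)))
```

The random-effects choice matters for reviews like this one that pool heterogeneous study designs: tau² widens the CI to reflect between-study variation.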

  18. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  19. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE), trustworthy high-quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have... a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub-disciplines such as modeling of requirements, use cases... suggest that our method provides a sound foundation for rapid development of high quality system models.

  20. Testing the effectiveness of simplified search strategies for updating systematic reviews.

    Science.gov (United States)

    Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana

    2017-08-01

    The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated: sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened, resulting in final inclusion of 163 randomized controlled trials. Compared with the reference search, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80%, while precision and NNR were 5.4 times better compared with the reference search. The SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic review, particularly considering budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable when updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.
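    The three performance measures are simply related: sensitivity is the fraction of relevant trials the search retrieves, precision is the fraction of hits that are relevant, and NNR is the reciprocal of precision. A sketch with illustrative counts (chosen near the review's figures, but not its actual per-review data):

```python
# Search-strategy performance measures.  "Relevant" means an included
# trial; "retrieved" means a citation returned by the search.

def search_performance(retrieved_relevant, total_relevant, total_retrieved):
    sensitivity = retrieved_relevant / total_relevant   # recall
    precision = retrieved_relevant / total_retrieved
    nnr = 1.0 / precision    # citations screened per relevant hit found
    return sensitivity, precision, nnr

# Hypothetical SSS: 8,239 hits containing 136 of 163 included trials
sens, prec, nnr = search_performance(136, 163, 8239)
print(f"sensitivity={sens:.1%}  precision={prec:.2%}  NNR={nnr:.0f}")
```

Because NNR is just 1/precision, a strategy that is "4.5 times better" on precision screens 4.5 times fewer citations per included trial, which is the practical payoff the authors weigh against lost sensitivity.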

  1. Empirical tests of natural selection-based evolutionary accounts of ADHD: a systematic review.

    Science.gov (United States)

    Thagaard, Marthe S; Faraone, Stephen V; Sonuga-Barke, Edmund J; Østergaard, Søren D

    2016-10-01

    ADHD is a prevalent and highly heritable mental disorder associated with significant impairment, morbidity and increased rates of mortality. This combination of high prevalence and high morbidity/mortality seen in ADHD and other mental disorders presents a challenge to natural selection-based models of human evolution. Several hypotheses have been proposed in an attempt to resolve this apparent paradox. The aim of this study was to review the evidence for these hypotheses. We conducted a systematic review of the literature on empirical investigations of natural selection-based evolutionary accounts for ADHD in adherence with the PRISMA guideline. The PubMed, Embase, and PsycINFO databases were screened for relevant publications, by combining search terms covering evolution/selection with search terms covering ADHD. The search identified 790 records. Of these, 15 full-text articles were assessed for eligibility, and three were included in the review. Two of these reported on the evolution of the seven-repeat allele of the ADHD-associated dopamine receptor D4 gene, and one reported on the results of a simulation study of the effect of suggested ADHD-traits on group survival. The authors of the three studies interpreted their findings as favouring the notion that ADHD-traits may have been associated with increased fitness during human evolution. However, we argue that none of the three studies really tap into the core symptoms of ADHD, and that their conclusions therefore lack validity for the disorder. This review indicates that the natural selection-based accounts of ADHD have not been subjected to empirical test and therefore remain hypothetical.

  2. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration, where the axis of the cask was oriented at a 10-degree angle with the horizontal. Slap-down occurs for shallow-angle drops, where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, post-test deformation measurements, and the general structural response of the system.

  3. Superconducting solenoid model magnet test results

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab

    2006-08-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.

  4. Superconducting solenoid model magnet test results

    International Nuclear Information System (INIS)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab

    2006-01-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests

  5. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  6. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  7. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  8. Systematic review of studies on cost-effectiveness of cystic fibrosis carrier testing

    Directory of Open Access Journals (Sweden)

    Ernesto Andrade-Cerquera

    2016-10-01

    Full Text Available Introduction: Cystic fibrosis is considered the most common autosomal recessive disease with multisystem complications in the non-Hispanic white population. Objective: To review the available evidence on the cost-effectiveness of cystic fibrosis carrier testing compared to no intervention. Materials and methods: The MEDLINE, Embase, NHS, EBM Reviews - Cochrane Database of Systematic Reviews, LILACS, Health Technology Assessment, Genetests.org, Genetsickkids.org and Web of Science databases were searched to conduct a systematic review of the cost-effectiveness of cystic fibrosis carrier testing. Cost-effectiveness studies were included without language or publication-date restrictions. Results: Only 13 studies were relevant for full review. Prenatal, preconception and mixed screening strategies were found. The health perspective was the most frequently used; the discount rates applied were heterogeneous, ranging between 3.5% and 5%; the main analysis unit was the cost per detected carrier couple, followed by the cost per averted birth with cystic fibrosis. The most cost-effective strategy was preconception screening combined with a prenatal test. Conclusions: Marked methodological heterogeneity was found, which made results incomparable and leads to the conclusion that there are different approaches to this genetic test.
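    Cost-effectiveness comparisons of this kind reduce to an incremental cost-effectiveness ratio, ICER = ΔCost / ΔEffect, here expressed in the review's main analysis unit of cost per detected carrier couple. A sketch with entirely hypothetical figures:

```python
# Incremental cost-effectiveness ratio (ICER) for a screening strategy
# versus no intervention.  All numbers below are invented placeholders;
# only the formula and the unit (cost per detected carrier couple)
# mirror the review.

def icer(cost_new, effect_new, cost_old, effect_old):
    """(C1 - C0) / (E1 - E0); effect = carrier couples detected."""
    return (cost_new - cost_old) / (effect_new - effect_old)

no_screening = (0.0, 0)          # (cost, carrier couples detected)
screening = (1_200_000.0, 16)    # assumed programme cost and yield
print(icer(*screening, *no_screening))  # 75000.0 per detected couple
```

A study's perspective and discount rate enter through the cost and effect streams before this ratio is taken, which is why the heterogeneity the review notes makes the resulting ICERs incomparable.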

  9. Sample Size Determination for Rasch Model Tests

    Science.gov (United States)

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of an error of the first kind (Type I probability), the probability of an error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
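    Controlling the Type II probability alongside the Type I level amounts to a sample-size (power) calculation. The sketch below is the generic normal-approximation version for detecting a shift of delta standard deviations with a two-sided z-test, not the Rasch-specific derivation of the paper:

```python
import math
from statistics import NormalDist

# Choose n so that a two-sided z-test at Type I level alpha has Type II
# probability beta against a true shift of delta (in units of the
# observation SD).  Textbook formula: n = ((z_{1-a/2} + z_{1-b}) / delta)^2.

def sample_size(alpha, beta, delta, sigma=1.0):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# Detect a 0.5-SD shift with 5% Type I and 20% Type II probability
print(sample_size(alpha=0.05, beta=0.20, delta=0.5))  # 32
```

The paper's contribution is the analogue of this calculation for Rasch model tests, where the effect size and test statistic are model-specific; the logic of trading alpha, beta, and n is the same.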

  10. Is the standard model really tested?

    International Nuclear Information System (INIS)

    Takasugi, E.

    1989-01-01

    It is discussed how the standard model is really tested. Among various tests, I concentrate on CP violation phenomena in the K and B meson systems. In particular, the recent hope of overcoming the theoretical uncertainty in the evaluation of CP violation in the K meson system is discussed. (author)

  11. A Systematic Review of Point of Care Testing for Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis

    Directory of Open Access Journals (Sweden)

    Sasha Herbst de Cortina

    2016-01-01

    Full Text Available Objectives. Systematic review of point of care (POC) diagnostic tests for sexually transmitted infections: Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Trichomonas vaginalis (TV). Methods. Literature search on PubMed for articles from January 2010 to August 2015, including original research in English on POC diagnostics for sexually transmitted CT, NG, and/or TV. Results. We identified 33 publications with original research on POC diagnostics for CT, NG, and/or TV. Thirteen articles evaluated test performance, yielding at least one test for each infection with sensitivity and specificity ≥90%. Each infection also had currently available tests with sensitivities <60%. Three articles analyzed cost-effectiveness, and five publications discussed acceptability and feasibility. POC testing was acceptable to both providers and patients and was also demonstrated to be cost-effective. Fourteen proof-of-concept articles introduced new tests. Conclusions. Highly sensitive and specific POC tests are available for CT, NG, and TV, but improvement is possible. Future research should focus on the acceptability, feasibility, and cost of POC testing. While pregnant women specifically have not been studied, the results available in nonpregnant populations are encouraging for the ability to test and treat women in antenatal care to prevent adverse pregnancy and neonatal outcomes.
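    How a POC test with given sensitivity and specificity performs in practice depends on the prevalence of infection in the tested population; predictive values make this concrete. A sketch using the review's 90% performance threshold and an assumed 5% prevalence (the prevalence figure is illustrative only):

```python
# Positive and negative predictive values from sensitivity, specificity,
# and prevalence -- a reminder that the same POC test yields different
# predictive values in different populations.

def predictive_values(sens, spec, prev):
    tp = sens * prev                # true positives (per person tested)
    fp = (1 - spec) * (1 - prev)    # false positives
    fn = (1 - sens) * prev          # false negatives
    tn = spec * (1 - prev)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

ppv, npv = predictive_values(sens=0.90, spec=0.90, prev=0.05)
print(f"PPV={ppv:.1%}  NPV={npv:.1%}")  # PPV=32.1%  NPV=99.4%
```

Even a test meeting the ≥90% threshold yields a modest PPV at low prevalence, which is one reason test-and-treat decisions in antenatal care need population-specific evaluation.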

  12. An interoceptive model of bulimia nervosa: A neurobiological systematic review.

    Science.gov (United States)

    Klabunde, Megan; Collado, Danielle; Bohon, Cara

    2017-11-01

    The objective of our study was to examine the neurobiological support for an interoceptive sensory processing model of bulimia nervosa (BN). To do so, we conducted a systematic review of interoceptive sensory processing in BN, using the PRISMA guidelines. We searched PsycINFO, PubMed, and Web of Knowledge databases to identify biological and behavioral studies that examine interoceptive detection in BN. After screening 390 articles for inclusion and conducting a quality assessment of articles that met inclusion criteria, we reviewed 41 articles. We found that global interoceptive sensory processing deficits may be present in BN. Specifically, there is evidence of abnormal brain function, structure and connectivity in the interoceptive neural network, in addition to gastric and pain processing disturbances. These results suggest that there may be a neurobiological basis for global interoceptive sensory processing deficits in BN that remain after recovery. Data from taste and heartbeat detection studies were inconclusive; some studies suggest interoceptive disturbances in these sensory domains. Discrepancies in findings appear to be due to methodological differences. In conclusion, interoceptive sensory processing deficits may directly contribute to and explain a variety of symptoms present in those with BN. Further examination of interoceptive sensory processing deficits could inform the development of treatments for those with BN. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. FROM ATOMISTIC TO SYSTEMATIC COARSE-GRAINED MODELS FOR MOLECULAR SYSTEMS

    KAUST Repository

    Harmandaris, Vagelis; Kalligiannaki, Evangelia; Katsoulakis, Markos; Plechac, Petr

    2017-01-01

    The development of systematic (rigorous) coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of methods for obtaining optimal parametrized coarse-grained models, starting from

  14. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses and more refined models have to be based on experimental testing...

  15. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    Full Text Available The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced accuracy and precision of population trends by simulating migration count data with known constant rate of population change and by allowing daily probability of survival (an index of stopover duration to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. 
Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling to reduce the probability that individuals will
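
    The compounding effect described above can be reproduced in a toy simulation (this is not the authors' code; all numbers are hypothetical). A population declines at a known rate while expected counts scale with both population size and mean stopover duration; a systematic rise in stopover duration biases the estimated trend upward:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(20)
true_trend = -0.02                        # true 2% annual decline
pop = 1000 * np.exp(true_trend * years)   # population being monitored

# Expected daily counts scale with population size times mean stopover
# duration: longer stopovers mean more chances to count (and recount) birds.
scenarios = {
    "constant":   np.full(20, 3.0),
    "increasing": 3.0 + 0.1 * years,      # systematic rise in stopover duration
}

slopes = {}
for label, stopover in scenarios.items():
    counts = rng.poisson(pop * stopover)
    slopes[label] = np.polyfit(years, np.log(counts), 1)[0]
    print(f"{label:10s}: estimated log-linear trend = {slopes[label]:+.4f}")
```

    With constant stopover duration the fitted trend recovers the true decline; with increasing stopover duration the estimate is biased positive, and here even changes sign.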

  16. A framework for testing and comparing binaural models.

    Science.gov (United States)

    Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M

    2018-03-01

    Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. This can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It runs models through the same paradigms that are used in experiments. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A systematic review and meta-analysis of tests to predict wound healing in diabetic foot.

    Science.gov (United States)

    Wang, Zhen; Hasan, Rim; Firwana, Belal; Elraiyah, Tarig; Tsapas, Apostolos; Prokop, Larry; Mills, Joseph L; Murad, Mohammad Hassan

    2016-02-01

    This systematic review summarized the evidence on noninvasive screening tests for the prediction of wound healing and the risk of amputation in diabetic foot ulcers. We searched MEDLINE In-Process & Other Non-Indexed Citations, MEDLINE, Embase, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, and Scopus from database inception to October 2011. We pooled sensitivity, specificity, and diagnostic odds ratio (DOR) and compared test performance. Thirty-seven studies met the inclusion criteria. Eight tests were used to predict wound healing in this setting, including ankle-brachial index (ABI), ankle peak systolic velocity, transcutaneous oxygen measurement (TcPo2), toe-brachial index, toe systolic blood pressure, microvascular oxygen saturation, skin perfusion pressure, and hyperspectral imaging. For the TcPo2 test, the pooled DOR was 15.81 (95% confidence interval [CI], 3.36-74.45) for wound healing and 4.14 (95% CI, 2.98-5.76) for the risk of amputation. ABI was also predictive but to a lesser degree of the risk of amputations (DOR, 2.89; 95% CI, 1.65-5.05) but not of wound healing (DOR, 1.02; 95% CI, 0.40-2.64). It was not feasible to perform meta-analysis comparing the remaining tests. The overall quality of evidence was limited by the risk of bias and imprecision (wide CIs due to small sample size). Several tests may predict wound healing in the setting of diabetic foot ulcer; however, most of the available evidence evaluates only TcPo2 and ABI. The overall quality of the evidence is low, and further research is needed to provide higher quality comparative effectiveness evidence. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
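
    The pooled DOR figures above are summary estimates across studies; for a single study, the diagnostic odds ratio and its 95% confidence interval follow from the 2x2 table. A minimal sketch with hypothetical counts (not data from the review):

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP*TN)/(FP*FN), with a 95% CI from the SE of log(DOR)."""
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, lo, hi

# Hypothetical single-study 2x2 table: 40 healed wounds with a positive test,
# 8 healed with a negative test; 10 and 42 among non-healed wounds.
dor, lo, hi = diagnostic_odds_ratio(tp=40, fp=10, fn=8, tn=42)
print(f"DOR = {dor:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # DOR = 21.0
```

    A DOR of 1 means the test carries no predictive information; wide intervals, as flagged in the review, reflect small cell counts.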

  18. Raising the standards of the calf-raise test: a systematic review.

    Science.gov (United States)

    Hébert-Losier, Kim; Newsham-West, Richard J; Schneiders, Anthony G; Sullivan, S John

    2009-11-01

    The calf-raise test is used by clinicians and researchers in sports medicine to assess properties of the calf muscle-tendon unit. The test generally involves repetitive concentric-eccentric muscle action of the plantar-flexors in unipedal stance and is quantified by the number of raises performed. Although the calf-raise test appears to have acceptable reliability and face validity, and is commonly used for medical assessment and rehabilitation of injuries, no universally acceptable test parameters have been published to date. A systematic review of the existing literature was conducted to investigate the consistency as well as the universal acceptance of the evaluation purposes, test parameters, outcome measurements and psychometric properties of the calf-raise test. Nine electronic databases were searched between May 30 and September 21, 2008. Forty-nine articles met the inclusion criteria and were quality assessed. Information on study characteristics and calf-raise test parameters, as well as quantitative data, were extracted, tabulated, and statistically analysed. The average quality score of the reviewed articles was 70.4+/-12.2% (range 44-90%). Articles provided various test parameters; however, a consensus was not ascertained. Key testing parameters varied, were often unstated, and few studies reported reliability or validity values, including sensitivity and specificity. No definitive normative values could be established and the utility of the test in subjects with pathologies remained unclear. Although adapted for use in several disciplines and traditionally recommended for clinical assessment, there is no uniform description of the calf-raise test in the literature. Further investigation is recommended to ensure consistent use and interpretation of the test by researchers and clinicians.

  19. Impact of Enterovirus Testing on Resource Use in Febrile Young Infants: A Systematic Review.

    Science.gov (United States)

    Wallace, Sowdhamini S; Lopez, Michelle A; Caviness, A Chantal

    2017-02-01

    Enterovirus infection commonly causes fever in infants aged 0 to 90 days and, without testing, is difficult to differentiate from serious bacterial infection. To determine the cost savings of routine enterovirus testing and identify subgroups of infants with greater potential impact from testing among infants 0 to 90 days old with fever. Studies were identified systematically from published and unpublished literature by using Embase, Medline, the Cochrane database, and conference proceedings. Inclusion criteria were original studies, in any language, of enterovirus infection including the outcomes of interest in infants aged 0 to 90 days. Standardized instruments were used to appraise each study. The evidence quality was evaluated using Grading of Recommendations Assessment, Development, and Evaluation (GRADE) criteria. Two investigators independently searched the literature, screened and critically appraised the studies, extracted the data, and applied the GRADE criteria. Of the 257 unique studies identified and screened, 32 were completely reviewed and 8 were included. Routine enterovirus testing was associated with reduced hospital length of stay and cost savings during peak enterovirus season. Cerebrospinal fluid pleocytosis was a poor predictor of enterovirus meningitis. The studies were all observational and the evidence was of low quality. Enterovirus polymerase chain reaction testing, independent of cerebrospinal fluid pleocytosis, can reduce length of stay and achieve cost savings, especially during times of high enterovirus prevalence. Additional study is needed to identify subgroups that may achieve greater cost savings from testing, to further enhance its efficiency. Copyright © 2017 by the American Academy of Pediatrics.

  20. Measurement properties of maximal cardiopulmonary exercise tests protocols in persons after stroke: A systematic review.

    Science.gov (United States)

    Wittink, Harriet; Verschuren, Olaf; Terwee, Caroline; de Groot, Janke; Kwakkel, Gert; van de Port, Ingrid

    2017-11-21

    To systematically review and critically appraise the literature on measurement properties of cardiopulmonary exercise test protocols for measuring aerobic capacity, VO2max, in persons after stroke. PubMed, Embase and CINAHL were searched from inception up to 15 June 2016. A total of 9 studies were identified, reporting on 9 different cardiopulmonary exercise test protocols. VO2max measured with a cardiopulmonary exercise test and open spirometry was the construct of interest. The target population was adult persons after stroke. We included all studies that evaluated reliability, measurement error, criterion validity, content validity, hypothesis testing and/or responsiveness of cardiopulmonary exercise test protocols. Two researchers independently screened the literature, assessed methodological quality using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist and extracted data on measurement properties of cardiopulmonary exercise test protocols. Most studies reported on only one measurement property. Best-evidence synthesis was derived taking into account the methodological quality of the studies, the results and the consistency of the results. No judgement could be made on which protocol is "best" for measuring VO2max in persons after stroke due to the lack of high-quality studies on the measurement properties of the cardiopulmonary exercise tests.

  1. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S. (NASA/Fermilab Astrophysics Center, Batavia, IL)

    1987-01-01

    Theoretical prejudice and inflationary models of the very early universe strongly favor the flat, Einstein-de Sitter model of the universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the universe which possess a smooth component of energy density. The kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings is studied in detail. The observational tests which can be used to discriminate between these models are also discussed. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations. 58 references

  2. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations.

  4. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  5. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet, and testing is still state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of remaining weaknesses of these tools. We believe that a principled engineering approach to designing and implementing abstractions will improve the applicability of model checking in practice.

  6. Systematic simulations of modified gravity: symmetron and dilaton models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2012-01-01

    We study the linear and nonlinear structure formation in the dilaton and symmetron models of modified gravity using a generic parameterisation which describes a large class of scenarios using only a few parameters, such as the coupling between the scalar field and the matter, and the range of the scalar force on very large scales. For this we have modified the N-body simulation code ECOSMOG, which is a variant of RAMSES working in modified gravity scenarios, to perform a set of 110 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a large portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM template cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹. Our results show the full effect of screening on nonlinear structure formation and the associated deviation from ΛCDM. We also investigate how differences in the force mediated by the scalar field in modified gravity models lead to qualitatively different features for the nonlinear power spectrum and the halo mass function, and how varying the individual model parameters changes these observables. The differences are particularly large in the nonlinear power spectra, whose shapes for f(R), dilaton and symmetron models vary greatly: the characteristic bump around 1 h Mpc⁻¹ of f(R) models is preserved for symmetrons, whereas an increase on much smaller scales is particular to symmetrons. No bump is present for dilatons, where a flattening of the power spectrum takes place on small scales. These deviations from ΛCDM and the differences between modified gravity models, such as dilatons and symmetrons, could be tested with future surveys.

  7. A systematic review investigating measurement properties of physiological tests in rugby.

    Science.gov (United States)

    Chiwaridzo, Matthew; Oorschot, Sander; Dambi, Jermaine M; Ferguson, Gillian D; Bonney, Emmanuel; Mudawarima, Tapfuma; Tadyanemhandu, Cathrine; Smits-Engelsman, Bouwien C M

    2017-01-01

    This systematic review was conducted with the first objective aimed at providing an overview of the physiological characteristics commonly evaluated in rugby and the corresponding tests used to measure each construct. Secondly, the measurement properties of all identified tests per physiological construct were evaluated with the ultimate purpose of identifying the tests with the strongest level of evidence per construct. The review was conducted in two stages. In all stages, the electronic databases of EBSCOhost, Medline and Scopus were searched for full-text articles. Stage 1 included studies examining physiological characteristics in rugby. Stage 2 included studies evaluating measurement properties of all tests identified in Stage 1, either in rugby or in related sports such as Australian Rules football and soccer. Two independent reviewers screened relevant articles from titles and abstracts for both stages. Seventy studies met the inclusion criteria for Stage 1. The studies described 63 tests assessing speed (8), agility/change of direction speed (7), upper-body muscular endurance (8), upper-body muscular power (6), upper-body muscular strength (5), anaerobic endurance (4), maximal aerobic power (4), lower-body muscular power (3), prolonged high-intensity intermittent running ability/endurance (5), lower-body muscular strength (5), repeated high-intensity exercise performance (3), repeated-sprint ability (2), repeated-effort ability (1), maximal aerobic speed (1) and abdominal endurance (1). Stage 2 identified 20 studies describing measurement properties of 21 different tests. Only moderate evidence was found for the reliability of the 30-15 Intermittent Fitness Test. There was limited evidence for the reliability and/or validity of the 5 m, 10 m and 20 m speed tests, the 505 test, the modified 505 test, the L run test, the Sergeant Jump test and bench press repetitions-to-fatigue tests. There was no information from high-quality studies on the measurement properties of all the other tests.

  8. Pile Model Tests Using Strain Gauge Technology

    Science.gov (United States)

    Krasiński, Adam; Kusio, Tomasz

    2015-09-01

    Ordinary pile bearing capacity tests are usually carried out to determine the relationship between load and displacement of the pile head. The measurement system required in such tests consists of a force transducer and three or four displacement gauges. The whole system is installed at the pile head above the ground level. This approach, however, does not give us complete information about the pile-soil interaction. We can only determine the total bearing capacity of the pile, without the knowledge of its distribution into the shaft and base resistances. Much more information can be obtained by carrying out a test of an instrumented pile equipped with a system for measuring the distribution of axial force along its core. In the case of pile model tests the use of such measurement is difficult due to the small scale of the model. To find a suitable solution for axial force measurement, which could be applied to small scale model piles, we had to take into account the following requirements:
    - a linear and stable relationship between measured and physical values,
    - a force measurement accuracy of about 0.1 kN,
    - a range of measured forces up to 30 kN,
    - resistance of measuring gauges against aggressive counteraction of concrete mortar and against moisture,
    - insensitivity to pile bending,
    - an economical solution.
    These requirements can be fulfilled by strain gauge sensors if an appropriate methodology is used for test preparation (Hoffmann [1]). In this paper, we focus on some aspects of the application of strain gauge sensors for model pile tests. The efficiency of the method is proved on the examples of static load tests carried out on SDP model piles acting as single piles and in a group.
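
    To put the accuracy and range requirements in context: the axial force in an instrumented pile follows from the measured strain via F = E·A·ε. A minimal sketch, with a hypothetical elastic modulus and cross-section (not values from the paper):

```python
def axial_force_kN(microstrain, modulus_GPa, area_cm2):
    """Axial force F = E*A*eps, returned in kN (the target accuracy above
    is about 0.1 kN over a 0-30 kN range)."""
    eps = microstrain * 1e-6          # strain (dimensionless)
    E = modulus_GPa * 1e9             # elastic modulus, Pa
    area = area_cm2 * 1e-4            # cross-section, m^2
    return E * area * eps / 1e3       # N -> kN

# Hypothetical concrete model pile: E = 30 GPa, cross-section 100 cm^2.
# 100 microstrain then corresponds to the full 30 kN measuring range.
print(axial_force_kN(100.0, 30.0, 100.0))
```

    Read the other way, a 0.1 kN accuracy target for this hypothetical pile implies resolving strains of well under 1 microstrain, which is why stable, moisture-resistant gauge installation matters.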

  9. Accuracy tests of the tessellated SLBM model

    International Nuclear Information System (INIS)

    Ramirez, A L; Myers, S C

    2007-01-01

    We have compared the Seismic Location Base Model (SLBM) tessellated model (version 2.0 Beta, posted July 3, 2007) with the GNEMRE Unified Model. The comparison is done layer by layer, both for depths and for velocities. The SLBM earth model is defined on a tessellation that spans the globe at a constant resolution of about 1 degree (Ballard, 2007). For the tests, we used the earth model in the file "unified( ) iasp.grid". This model contains the top 8 layers of the Unified Model (UM) embedded in a global IASP91 grid. Our test queried the same set of nodes included in the UM model file. To query the model stored in memory, we used some of the functionality built into the SLBMInterface object. We used the method getInterpolatedPoint() to return the desired values for each layer at user-specified points. The values returned include: depth to the top of each layer, layer velocity, layer thickness and (for the upper-mantle layer) velocity gradient. The SLBM earth model has an extra middle crust layer whose values are used when Pg/Lg phases are being calculated. This extra layer was not accessed by our tests. Figures 1 to 8 compare the layer depths, P velocities and P gradients in the UM and SLBM models. The figures show results for the three sediment layers, three crustal layers and the upper mantle layer defined in the UM model. Each layer in the models (sediment1, sediment2, sediment3, upper crust, middle crust, lower crust and upper mantle) is shown on a separate figure. The upper mantle P velocity and gradient distributions are shown on Figures 7 and 8. The left and center images in the top row of each figure are the renderings of depth to the top of the specified layer for the UM and SLBM models. When a layer has zero thickness, its depth is the same as that of the layer above. The right image in the top row is the difference in layer depth between the UM and SLBM renderings.
The left and center images in the bottom row of the figures are
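
    The difference maps described above reduce, at each shared node, to element-wise differencing of the two models' layer values. A minimal sketch with hypothetical depths (not the actual UM/SLBM grids):

```python
import numpy as np

def layer_depth_difference(depths_a, depths_b):
    """Element-wise depth differences between two models queried at the
    same nodes; returns (differences, max absolute difference, RMS)."""
    diff = np.asarray(depths_a) - np.asarray(depths_b)
    return diff, float(np.abs(diff).max()), float(np.sqrt((diff ** 2).mean()))

# Hypothetical depths (km) to the top of one layer at five shared nodes.
um_depths   = [2.0, 3.5, 5.0, 5.0, 6.2]
slbm_depths = [2.0, 3.4, 5.1, 5.0, 6.0]
diff, max_abs, rms = layer_depth_difference(um_depths, slbm_depths)
print(f"max |difference| = {max_abs:.2f} km, RMS = {rms:.3f} km")
```

    The same differencing applies per layer to velocities and gradients; zero-thickness layers simply inherit the depth of the layer above, so their differences are zero by construction.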

  10. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
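
    A model validation test differs from a conventional unit test in that it checks model output against experimentally observed ranges rather than exact software behaviour. A minimal sketch of the idea; the model function and the numeric range are hypothetical, not taken from OpenWorm:

```python
import unittest

def model_resting_potential_mV():
    """Stand-in for querying a neuron simulation; hypothetical output."""
    return -68.0

class RestingPotentialValidation(unittest.TestCase):
    """Validation test: is the model's prediction biologically plausible?"""
    def test_within_experimental_range(self):
        # Illustrative experimental bounds, not published C. elegans data.
        self.assertGreaterEqual(model_resting_potential_mV(), -75.0)
        self.assertLessEqual(model_resting_potential_mV(), -60.0)

# Run the validation suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RestingPotentialValidation)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("validation passed:", result.wasSuccessful())
```

    Such tests can run in continuous integration alongside ordinary unit tests, so a code change that silently breaks the science, not just the software, is caught early.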

  11. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01-01

    Problems related to metal fatigue modelling and testing are here treated in four different papers. In the first paper, different views of the subject are summarised in a literature survey. In the second paper, a new model for fatigue life is investigated. Experimental results are established which are promising for further development of the model. In the third paper, a method is presented that generates a stochastic process suitable for fatigue testing. The process is designed in order to resemble certain fatigue related features in service life processes. In the fourth paper, fatigue problems in transport vibrations are treated.

  12. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is put on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for data converter design, modeling and testing is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  13. Hall Thruster Thermal Modeling and Test Data Correlation

    Science.gov (United States)

    Myers, James; Kamhawi, Hani; Yim, John; Clayman, Lauren

    2016-01-01

    The life of Hall Effect thrusters is primarily limited by plasma erosion and thermal related failures. NASA Glenn Research Center (GRC), in cooperation with the Jet Propulsion Laboratory (JPL), has recently completed development of a Hall thruster with specific emphasis on mitigating these limitations. Extending the operational life of Hall thrusters makes them more suitable for some of NASA's longer duration interplanetary missions. This paper documents the thermal model development, refinement and correlation of results with thruster test data. Correlation was achieved by minimizing uncertainties in model input and recognizing the relevant parameters for effective model tuning. Throughout the thruster design phase the model was used to evaluate design options and systematically reduce component temperatures. Hall thrusters are inherently complex assemblies of high temperature components relying on internal conduction and external radiation for heat dispersion and rejection. System solutions are necessary in most cases to fully assess the benefits and/or consequences of any potential design change. Thermal model correlation is critical since thruster operational parameters can push some components/materials beyond their temperature limits. This thruster incorporates a state-of-the-art magnetic shielding system to reduce plasma erosion and, to a lesser extent, power/heat deposition. Additionally, a comprehensive thermal design strategy was employed to reduce temperatures of critical thruster components (primarily the magnet coils and the discharge channel). Long term wear testing is currently underway to assess the effectiveness of these systems and consequently thruster longevity.

  14. The diagnostic accuracy of serological tests for Lyme borreliosis in Europe: a systematic review and meta-analysis.

    Science.gov (United States)

    Leeflang, M M G; Ang, C W; Berkhout, J; Bijlmer, H A; Van Bortel, W; Brandenburg, A H; Van Burgel, N D; Van Dam, A P; Dessau, R B; Fingerle, V; Hovius, J W R; Jaulhac, B; Meijer, B; Van Pelt, W; Schellekens, J F P; Spijker, R; Stelma, F F; Stanek, G; Verduyn-Lunel, F; Zeller, H; Sprong, H

    2016-03-25

    Interpretation of serological assays in Lyme borreliosis requires an understanding of the clinical indications and the limitations of the currently available tests. We therefore systematically reviewed the accuracy of serological tests for the diagnosis of Lyme borreliosis in Europe. We searched EMBASE and MEDLINE and contacted experts. Studies evaluating the diagnostic accuracy of serological assays for Lyme borreliosis in Europe were eligible. Study selection and data extraction were done by two authors independently. We assessed study quality using the QUADAS-2 checklist. We used a hierarchical summary ROC meta-regression method for the meta-analyses. Potential sources of heterogeneity were test type, commercial or in-house status, Ig type, antigen type, and study quality. These were added as covariates to the model to assess their effect on test accuracy. Seventy-eight studies evaluating an enzyme-linked immunosorbent assay (ELISA) or an immunoblot assay against a reference standard of clinical criteria were included. None of the studies had low risk of bias for all QUADAS-2 domains. Sensitivity was highly heterogeneous, with summary estimates: erythema migrans 50% (95% CI 40% to 61%); neuroborreliosis 77% (95% CI 67% to 85%); acrodermatitis chronica atrophicans 97% (95% CI 94% to 99%); unspecified Lyme borreliosis 73% (95% CI 53% to 87%). Specificity was around 95% in studies with healthy controls, but around 80% in cross-sectional studies. Two-tiered algorithms or antibody indices did not outperform single-test approaches. The observed heterogeneity and risk of bias complicate the extrapolation of our results to clinical practice. The usefulness of serological tests for Lyme disease depends on the pre-test probability and subsequent predictive values in the setting where the tests are being used. Future diagnostic accuracy studies should be prospectively planned cross-sectional studies, done in settings where the test will be used in practice.

  15. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
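
    The orthogonal multi-sine idea can be sketched as follows. This is a simplified illustration, not Morelli's actual design code: the frequencies, record length, and Schroeder-phase initialization below are assumptions, and the real maneuvers use numerically optimized phases to further reduce peak factor.

```python
import numpy as np

def multisine(freqs_hz, t):
    """Sum of sines with Schroeder phases (a common low-peak-factor
    starting point; the numerical phase optimization is omitted)."""
    n = len(freqs_hz)
    k = np.arange(1, n + 1)
    phases = -np.pi * k * (k - 1) / n      # Schroeder phase schedule
    u = np.zeros_like(t)
    for f, p in zip(freqs_hz, phases):
        u += np.sin(2.0 * np.pi * f * t + p)
    return u / np.max(np.abs(u))           # normalize to unit amplitude

# Two control surfaces receive mutually exclusive harmonic frequency
# lines over the same record length T, making their inputs orthogonal.
T, dt = 10.0, 0.02
t = np.arange(0.0, T, dt)
lines = np.arange(1, 9) / T                # harmonics of 1/T
u_elevator = multisine(lines[0::2], t)     # lines 1, 3, 5, 7
u_aileron = multisine(lines[1::2], t)      # lines 2, 4, 6, 8
print(abs(np.dot(u_elevator, u_aileron)) * dt)  # ~0 (orthogonal)
```

    Because distinct harmonics over an integer number of periods are orthogonal, all surfaces can be excited simultaneously and their contributions separated in the frequency domain.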

  16. Should trained lay providers perform HIV testing? A systematic review to inform World Health Organization guidelines.

    Science.gov (United States)

    Kennedy, C E; Yeh, P T; Johnson, C; Baggaley, R

    2017-12-01

    New strategies for HIV testing services (HTS) are needed to achieve UN 90-90-90 targets, including diagnosis of 90% of people living with HIV. Task-sharing HTS to trained lay providers may alleviate health worker shortages and better reach target groups. We conducted a systematic review of studies evaluating HTS by lay providers using rapid diagnostic tests (RDTs). Peer-reviewed articles were included if they compared HTS using RDTs performed by trained lay providers to HTS by health professionals, or to no intervention. We also reviewed data on end-users' values and preferences around lay providers performing HTS. Searching was conducted through 10 online databases, reviewing reference lists, and contacting experts. Screening and data abstraction were conducted in duplicate using systematic methods. Of 6113 unique citations identified, 5 studies were included in the effectiveness review and 6 in the values and preferences review. One US-based randomized trial found patients' uptake of HTS doubled with lay providers (57% vs. 27%; percent difference: 30, 95% confidence interval: 27-32) compared with HTS by health professionals. Studies from Cambodia, Malawi, and South Africa comparing testing quality between lay providers and laboratory staff found little discordance and high sensitivity and specificity (≥98%). Values and preferences studies generally found support for lay providers conducting HTS, particularly in non-hypothetical scenarios. Based on evidence supporting the use of trained lay providers, a WHO expert panel recommended lay providers be allowed to conduct HTS using HIV RDTs. Uptake of this recommendation could expand HIV testing to more people globally.

  17. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking of legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  18. Reliability of physical examination tests for the diagnosis of knee disorders: Evidence from a systematic review.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Desmeules, François

    2016-12-01

    Clinicians often rely on physical examination tests to guide them in the diagnostic process of knee disorders. However, reliability of these tests is often overlooked and may influence the consistency of results and overall diagnostic validity. Therefore, the objective of this study was to systematically review evidence on the reliability of physical examination tests for the diagnosis of knee disorders. A structured literature search was conducted in databases up to January 2016. Included studies needed to report reliability measures of at least one physical test for any knee disorder. Methodological quality was evaluated using the QAREL checklist. A qualitative synthesis of the evidence was performed. Thirty-three studies were included with a mean QAREL score of 5.5 ± 0.5. Based on low to moderate quality evidence, the Thessaly test for meniscal injuries reached moderate inter-rater reliability (k = 0.54). Based on moderate to excellent quality evidence, the Lachman for anterior cruciate ligament injuries reached moderate to excellent inter-rater reliability (k = 0.42 to 0.81). Based on low to moderate quality evidence, the Tibiofemoral Crepitus, Joint Line and Patellofemoral Pain/Tenderness, Bony Enlargement and Joint Pain on Movement tests for knee osteoarthritis reached fair to excellent inter-rater reliability (k = 0.29 to 0.93). Based on low to moderate quality evidence, the Lateral Glide, Lateral Tilt, Lateral Pull and Quality of Movement tests for patellofemoral pain reached moderate to good inter-rater reliability (k = 0.49 to 0.73). Many physical tests appear to reach good inter-rater reliability, but this is based on low-quality and conflicting evidence. High-quality research is required to evaluate the reliability of knee physical examination tests. Copyright © 2016 Elsevier Ltd. All rights reserved.
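
    The k values reported are Cohen's kappa, agreement between two raters corrected for chance. A minimal sketch of its computation (the positive/negative test calls below are invented for illustration, not data from the review):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters with categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical positive (1) / negative (0) Lachman calls by two raters:
rater_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 2))  # -> 0.58
```

    Here 80% raw agreement corrects down to k = 0.58 ("moderate"), which is why kappa, not percent agreement, is the reliability measure quoted above.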

  19. Health-care providers' experiences with opt-out HIV testing: a systematic review.

    Science.gov (United States)

    Leidel, Stacy; Wilson, Sally; McConigley, Ruth; Boldy, Duncan; Girdler, Sonya

    2015-01-01

    HIV is now a manageable chronic disease with a good prognosis, but early detection and referral for treatment are vital. In opt-out HIV testing, patients are informed that they will be tested unless they decline. This qualitative systematic review explored the experiences, attitudes, barriers, and facilitators of opt-out HIV testing from a health-care provider (HCP) perspective. Four articles were included in the synthesis and reported on findings from approximately 70 participants, representing diverse geographical regions and a range of human development status and HIV prevalence. Two synthesized findings emerged: HCP attitudes and systems. The first synthesized finding encompassed HCP decision-making attitudes about who and when to test for HIV. It also included the assumptions the HCPs made about patient consequences. The second synthesized finding related to systems. System-related barriers to opt-out HIV testing included lack of time, resources, and adequate training. System-related facilitators included integration into standard practice, support of the medical setting, and electronic reminders. A common attitude among HCPs was the outdated notion that HIV is a terrible disease that equates to certain death. Some HCPs stated that offering the HIV test implied that the patient had engaged in immoral behaviour, which could lead to stigma or disengagement with health services. This paternalism diminished patient autonomy, because patients who were excluded from opt-out HIV testing could have benefited from it. One study highlighted the positive aspects of opt-out HIV testing, in which participants underscored the professional satisfaction that arose from making an HIV diagnosis, particularly when marginalized patients could be connected to treatment and social services. Recommendations for opt-out HIV testing should be disseminated to HCPs in a broad range of settings. Implementation of system-related factors such as electronic reminders and care coordination

  20. Internet-Based Direct-to-Consumer Genetic Testing: A Systematic Review.

    Science.gov (United States)

    Covolo, Loredana; Rubinelli, Sara; Ceretti, Elisabetta; Gelatti, Umberto

    2015-12-14

    Direct-to-consumer genetic tests (DTC-GT) are easily purchased through the Internet, independent of a physician referral or approval for testing, allowing the retrieval of genetic information outside the clinical context. There is a broad debate about the testing validity, their impact on individuals, and what people know and perceive about them. The aim of this review was to collect evidence on DTC-GT from a comprehensive perspective that unravels the complexity of the phenomenon. A systematic search was carried out through PubMed, Web of Knowledge, and Embase, in addition to Google Scholar according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist with the key term "Direct-to-consumer genetic test." In the final sample, 118 articles were identified. Articles were summarized in five categories according to their focus on (1) knowledge of, attitude toward use of, and perception of DTC-GT (n=37), (2) the impact of genetic risk information on users (n=37), (3) the opinion of health professionals (n=20), (4) the content of websites selling DTC-GT (n=16), and (5) the scientific evidence and clinical utility of the tests (n=14). Most of the articles analyzed the attitude, knowledge, and perception of DTC-GT, highlighting an interest in using DTC-GT, along with the need for a health care professional to help interpret the results. The articles investigating the content analysis of the websites selling these tests are in agreement that the information provided by the companies about genetic testing is not completely comprehensive for the consumer. Given that risk information can modify consumers' health behavior, there are surprisingly few studies carried out on actual consumers and they do not confirm the overall concerns on the possible impact of DTC-GT. Data from studies that investigate the quality of the tests offered confirm that they are not informative, have little predictive power, and do not measure genetic risk

  2. Explanatory item response modelling of an abstract reasoning assessment: A case for modern test design

    OpenAIRE

    Helland, Fredrik

    2016-01-01

    Assessment is an integral part of society and education, and for this reason it is important to know what you measure. This thesis is about explanatory item response modelling of an abstract reasoning assessment, with the objective to create a modern test design framework for automatic generation of valid and precalibrated items of abstract reasoning. Modern test design aims to strengthen the connections between the different components of a test, with a stress on strong theory, systematic it...

  3. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, not without its own issues, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
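
    Agent-based testing of a criminological theory can be sketched in a few lines. The toy model below loosely follows routine activity theory (a crime event requires an offender and a target to converge in the absence of a guardian); all parameters are invented for illustration and do not come from the reviewed studies.

```python
import random

random.seed(1)
SIZE, STEPS = 10, 500  # toroidal grid and simulation length

def walk(pos):
    """One random-walk step on the wrapped grid."""
    x, y = pos
    return ((x + random.choice([-1, 0, 1])) % SIZE,
            (y + random.choice([-1, 0, 1])) % SIZE)

def simulate(n_guardians):
    """Count crime events: offender meets target with no guardian present."""
    offenders = [(0, 0)] * 5
    targets = [(5, 5)] * 5
    guardians = [(9, 9)] * n_guardians
    crimes = 0
    for _ in range(STEPS):
        offenders = [walk(p) for p in offenders]
        targets = [walk(p) for p in targets]
        guardians = [walk(p) for p in guardians]
        for o in offenders:
            if o in targets and o not in guardians:
                crimes += 1
    return crimes

# The theory predicts fewer crime events as guardianship increases:
print(simulate(0), simulate(20))
```

    Running the counterfactual (varying guardianship while holding everything else fixed) is exactly the kind of systematic theory probe that is difficult with observational data alone.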

  4. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part II. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies.
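
    To illustrate why accuracy data are pooled on the logit scale, here is a simplified fixed-effect pooling of study sensitivities. The bivariate and HSROC models recommended in the article extend this with between-study random effects and a correlation between sensitivity and specificity, which this sketch omits; the study counts below are hypothetical.

```python
import math

def pool_logit(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the
    logit scale, with a 0.5 continuity correction."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)
        logits.append(math.log(p / (1 - p)))
        # variance of a logit proportion ~ 1/e + 1/(n - e)
        weights.append(1.0 / (1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to [0, 1]

# Hypothetical studies: true positives / diseased (sensitivity data)
tp = [45, 30, 80]
diseased = [50, 40, 90]
print(round(pool_logit(tp, diseased), 3))
```

    Working on the logit scale keeps pooled estimates inside [0, 1] and makes the normal approximation more defensible; the hierarchical models then place the random effects on these logits.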

  5. Testing of a steel containment vessel model

    International Nuclear Information System (INIS)

    Luk, V.K.; Hessheimer, M.F.; Matsumoto, T.; Komine, K.; Costello, J.F.

    1997-01-01

    A mixed-scale containment vessel model, scaled 1:10 in containment geometry and 1:4 in shell thickness, was fabricated to represent an improved boiling water reactor (BWR) Mark II containment vessel. A contact structure, installed over the model and separated from it at a nominally uniform distance, provided a simplified representation of a reactor shield building in the actual plant. This paper describes the pretest preparations and the conduct of the high-pressure test of the model performed on December 11-12, 1996. 4 refs., 2 figs

  6. Precision tests of the Standard Model

    International Nuclear Information System (INIS)

    Ol'shevskij, A.G.

    1996-01-01

    The present status of precision measurements of electroweak observables is discussed, with special emphasis on recently obtained results. Altogether, these measurements provide the basis for a stringent test of the Standard Model and determination of the SM parameters. 22 refs., 23 figs., 11 tabs

  7. Binomial test models and item difficulty

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1979-01-01

    In choosing a binomial test model, it is important to know exactly what conditions are imposed on item difficulty. In this paper these conditions are examined for both a deterministic and a stochastic conception of item responses. It appears that they are more restrictive than is generally
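
    The restriction at issue can be made concrete: if the binomial test model holds for a given test taker, every item has the same success probability p for that person, so the number-correct score follows a Binomial(n, p) distribution. A minimal sketch with illustrative values (n = 20 items, p = 0.7):

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Under the binomial test model, the number-correct score X of a test
# taker with constant per-item success probability p is Binomial(n, p).
n, p = 20, 0.7
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
mean = sum(k * q for k, q in enumerate(pmf))
print(round(mean, 6))  # -> 14.0, i.e. n * p
```

    The equal-p-per-item condition is exactly the kind of restriction on item difficulty the paper examines under deterministic and stochastic conceptions of item responses.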

  8. Shallow foundation model tests in Europe

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav; Simonini, P.; Arslan, U.; Georgiodis, M.; Laue, J.; Pinto, I.

    1999-01-01

    Roč. 2, č. 4 (1999), s. 447-475 ISSN 1436-6517. [Int. Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords : shallow foundations * model tests * sandy subsoil * bearing capacity * settlement Subject RIV: JM - Building Engineering

  9. Economic evaluation of medical tests at the early phases of development: a systematic review of empirical studies.

    Science.gov (United States)

    Frempong, Samuel N; Sutton, Andrew J; Davenport, Clare; Barton, Pelham

    2018-02-01

    There is little specific guidance on the implementation of cost-effectiveness modelling at the early stage of test development. The aim of this study was to review the literature in this field to examine the methodologies and tools that have been employed to date. Areas Covered: A systematic review to identify relevant studies in established literature databases. Five studies were identified and included for narrative synthesis. These studies revealed that there is no consistent approach in this growing field. The perspective of patients and the potential for value of information (VOI) to provide information on the value of future research is often overlooked. Test accuracy is an essential consideration, with most studies having described and included all possible test results in their analysis, and conducted extensive sensitivity analyses on important parameters. Headroom analysis was considered in some instances but at the early development stage (not the concept stage). Expert commentary: The techniques available to modellers that can demonstrate the value of conducting further research and product development (i.e. VOI analysis, headroom analysis) should be better utilized. There is the need for concerted efforts to develop rigorous methodology in this growing field to maximize the value and quality of such analysis.

  10. Clinical outcomes following inpatient penicillin allergy testing: A systematic review and meta-analysis.

    Science.gov (United States)

    Sacco, K A; Bates, A; Brigham, T J; Imam, J S; Burton, M C

    2017-09-01

    A documented penicillin allergy is associated with increased morbidity, including length of hospital stay and an increased incidence of resistant infections attributed to use of broader-spectrum antibiotics. The aim of the systematic review was to identify whether inpatient penicillin allergy testing affected clinical outcomes during hospitalization. We performed an electronic search of Ovid MEDLINE/PubMed, Embase, Web of Science, Scopus, and the Cochrane Library over the past 20 years. Inpatients having a documented penicillin allergy that underwent penicillin allergy testing were included. Twenty-four studies met eligibility criteria. Study sample size was between 24 and 252 patients in exclusively inpatient cohorts. Penicillin skin testing (PST) with or without oral amoxicillin challenge was the main intervention described (18 studies). The population-weighted mean for a negative PST was 95.1% [CI 93.8-96.1]. Inpatient penicillin allergy testing led to a change in antibiotic selection that was greater in the intensive care unit (77.97% [CI 72.0-83.1] vs 54.73% [CI 51.2-58.2]). Penicillin allergy testing was associated with decreased healthcare cost in four studies. Inpatient penicillin allergy testing is safe and effective in ruling out penicillin allergy. The rate of negative tests is comparable to outpatient and perioperative data. Patients with a documented penicillin allergy who require penicillin should be tested during hospitalization, given its benefit for individual patient outcomes and antibiotic stewardship. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.

  11. Testing mechanistic models of growth in insects.

    Science.gov (United States)

    Maino, James L; Kearney, Michael R

    2015-11-22

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature whereby it is confirmed that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg(-1)) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes. © 2015 The Author(s).
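
    The contrast between a surface-area-limited growth model and the near-exponential insect trajectory can be illustrated with a toy simulation. The rate constant and rate equations below are illustrative assumptions, not the authors' parameterization: letting specific assimilation scale with mass turns decelerating growth into the exponential form reported for insects.

```python
import math

def grow(rate_fn, m0=1.0, t_end=10.0, dt=0.001):
    """Integrate dm/dt = rate_fn(m) with forward Euler."""
    m, t = m0, 0.0
    while t < t_end:
        m += rate_fn(m) * dt
        t += dt
    return m

a = 0.3
# Surface-area-limited assimilation: dm/dt = a * m**(2/3) decelerates.
m_surface = grow(lambda m: a * m ** (2 / 3))
# Specific assimilation scaling with mass: dm/dt = a * m is exponential.
m_exp = grow(lambda m: a * m)
print(round(m_exp / math.exp(a * 10.0), 2))  # -> 1.0 (matches exp(a*t))
```

    The exponential variant overtakes the surface-limited one quickly, which is the qualitative signature the compiled growth data are tested against.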

  12. Ecological validity of cost-effectiveness models of universal HPV vaccination: A systematic literature review.

    Science.gov (United States)

    Favato, Giampiero; Easton, Tania; Vecchiato, Riccardo; Noikokyris, Emmanouil

    2017-05-09

    The protective (herd) effect of the selective vaccination of pubertal girls against human papillomavirus (HPV) implies a high probability that one of the two partners involved in intercourse is immunised, hence preventing the other from acquiring this sexually transmitted infection. The dynamic transmission models used to inform immunisation policy should include consideration of sexual behaviours and population mixing in order to demonstrate ecological validity, whereby the scenarios modelled remain faithful to the real-life social and cultural context. The primary aim of this review is to test the ecological validity of the universal HPV vaccination cost-effectiveness modelling available in the published literature. The research protocol related to this systematic review has been registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42016034145). Eight published economic evaluations were reviewed. None of the studies showed due consideration of the complexities of human sexual behaviour and the impact these may have on the transmission of HPV. Our findings indicate that all the included models might be affected by differing degrees of ecological bias, which implies an inability to reflect natural demographic and behavioural trends in their outcomes and, consequently, to accurately inform public healthcare policy. In particular, ecological bias has the effect of over-estimating the preference-based outcomes of selective immunisation. A relatively small (15-20%) over-estimation of quality-adjusted life years (QALYs) gained with selective immunisation programmes could induce a significant error in the estimate of cost-effectiveness of universal immunisation, by inflating its incremental cost-effectiveness ratio (ICER) beyond the acceptability threshold. The results modelled here demonstrate the limitations of the cost-effectiveness studies for HPV vaccination, and highlight the concern that public healthcare policy might have been
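
    The QALY-bias argument can be made concrete with hypothetical numbers (all values below are invented for illustration). Since ICER = ΔC/ΔQ, over-estimating the selective programme's QALYs shrinks the denominator and inflates the ICER of universal vaccination.

```python
# All values are illustrative, not taken from the reviewed studies.
cost_selective, cost_universal = 20_000_000.0, 26_000_000.0
qaly_selective_true, qaly_universal = 1200.0, 1500.0

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

true_icer = icer(cost_universal - cost_selective,
                 qaly_universal - qaly_selective_true)
# A 15% over-estimate of the selective programme's QALYs:
qaly_selective_biased = qaly_selective_true * 1.15
biased_icer = icer(cost_universal - cost_selective,
                   qaly_universal - qaly_selective_biased)
print(round(true_icer), round(biased_icer))  # -> 20000 50000
```

    With a willingness-to-pay threshold anywhere between these two figures, the bias alone flips the cost-effectiveness conclusion for universal vaccination.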

  13. Testing proton spin models with polarized beams

    International Nuclear Information System (INIS)

    Ramsey, G.P.

    1991-01-01

    We review models for spin-weighted parton distributions in a proton. Sum rules involving the nonsinglet components of the structure function xg1^p help narrow the range of parameters in these models. The contribution of the γ5 anomaly term depends on the size of the integrated polarized gluon distribution, and experimental predictions depend on its size. We have proposed three models for the polarized gluon distribution, whose range is considerable. These model distributions give an overall range of parameters that can be tested with polarized beam experiments. These are discussed with regard to specific predictions for polarized beam experiments at energies typical of UNK

  14. Which physical examination tests provide clinicians with the most value when examining the shoulder? Update of a systematic review with meta-analysis of individual tests.

    Science.gov (United States)

    Hegedus, Eric J; Goode, Adam P; Cook, Chad E; Michener, Lori; Myer, Cortney A; Myer, Daniel M; Wright, Alexis A

    2012-11-01

    To update our previously published systematic review and meta-analysis by subjecting the literature on shoulder physical examination (ShPE) to careful analysis in order to determine each test's clinical utility. This review is an update of previous work; therefore, the terms in the Medline and CINAHL search strategies remained the same, with the exception that the search was confined to the dates November 2006 through February 2012. The previous study dates were 1966 to October 2006. Further, the original search was expanded, without date restrictions, to include two new databases: EMBASE and the Cochrane Library. The Quality Assessment of Diagnostic Accuracy Studies, version 2 (QUADAS 2) tool was used to critique the quality of each new paper. Where appropriate, data from the prior review and this review were combined to perform meta-analysis using the updated hierarchical summary receiver operating characteristic and bivariate models. Since the publication of the 2008 review, 32 additional studies were identified and critiqued. For subacromial impingement, the meta-analysis revealed that the pooled sensitivity and specificity were 72% and 60%, respectively, for the Neer test; 79% and 59%, respectively, for the Hawkins-Kennedy test; and 53% and 76%, respectively, for the painful arc. Also from the meta-analysis, regarding superior labral anterior to posterior (SLAP) tears, the test with the best sensitivity (52%) was the relocation test; the test with the best specificity (95%) was Yergason's test; and the test with the best positive likelihood ratio (2.81) was the compression-rotation test. Regarding new (to this series of reviews) ShPE tests, where meta-analysis was not possible because of a lack of sufficient studies or heterogeneity between studies, there are some individual tests that warrant further investigation. A highly specific test (specificity >80%, LR+ ≥ 5.0) from a low bias study is the passive distraction test for a SLAP lesion. This test may

  15. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  16. OTEC riser cable model and prototype testing

    Science.gov (United States)

    Kurt, J. P.; Schultz, J. A.; Roblee, L. H. S.

    1981-12-01

    Two different OTEC riser cables have been developed to span the distance between a floating OTEC power plant and the ocean floor. The major design concerns for a riser cable in the dynamic OTEC environment are fatigue, corrosion, and electrical/mechanical aging of the cable components. The basic properties of the cable materials were studied through tests on model cables and on samples of cable materials. Full-scale prototype cables were manufactured and were tested to measure their electrical and mechanical properties and performance. The full-scale testing was culminated by the electrical/mechanical fatigue test, which exposes full-scale cables to simultaneous tension, bending and electrical loads, all in a natural seawater environment.

  17. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    Science.gov (United States)

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  18. Prospective Tests on Biological Models of Acupuncture

    Directory of Open Access Journals (Sweden)

    Charles Shang

    2009-01-01

    The biological effects of acupuncture include the regulation of a variety of neurohumoral factors and growth control factors. In science, models or hypotheses with confirmed predictions are considered more convincing than models solely based on retrospective explanations. Literature review showed that two biological models of acupuncture have been prospectively tested with independently confirmed predictions: The neurophysiology model on the long-term effects of acupuncture emphasizes the trophic and anti-inflammatory effects of acupuncture. Its prediction on the peripheral effect of endorphin in acupuncture has been confirmed. The growth control model encompasses the neurophysiology model and suggests that a macroscopic growth control system originates from a network of organizers in embryogenesis. The activity of the growth control system is important in the formation, maintenance and regulation of all the physiological systems. Several phenomena of acupuncture such as the distribution of auricular acupuncture points, the long-term effects of acupuncture and the effect of multimodal non-specific stimulation at acupuncture points are consistent with the growth control model. The following predictions of the growth control model have been independently confirmed by research results in both acupuncture and conventional biomedical sciences: (i) Acupuncture has extensive growth control effects. (ii) Singular points and separatrices exist in morphogenesis. (iii) Organizers have high electric conductance, high current density and high density of gap junctions. (iv) A high density of gap junctions is distributed as separatrices or boundaries at the body surface after early embryogenesis. (v) Many acupuncture points are located at transition points or boundaries between different body domains or muscles, coinciding with the connective tissue planes. (vi) Some morphogens and organizers continue to function after embryogenesis. Current acupuncture research suggests a

  19. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency.

    Science.gov (United States)

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but there is a paucity of data on outcome definitions and on the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess the quality of reporting of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify study covariates potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, of which 698 were OBS. Variables were described in the Methods section for all OBS, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement for the reported statistical test between the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the aOR (adjusted Odds Ratio) for the third group compared to the first group was aOR Grp3 = 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and the primary endpoint in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always achieved. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.

  20. The Alcock-Paczyński test with Baryon Acoustic Oscillations: systematic effects for future surveys

    Energy Technology Data Exchange (ETDEWEB)

    Lepori, Francesca; Viel, Matteo; Baccigalupi, Carlo [SISSA—International School for Advanced Studies, Via Bonomea 265, 34136 Trieste (Italy); Dio, Enea Di [INAF—Osservatorio Astronomico di Trieste, Via G.B. Tiepolo 11, I-34143 Trieste (Italy); Durrer, Ruth, E-mail: flepori@sissa.it, E-mail: enea.didio@oats.inaf.it, E-mail: viel@oats.inaf.it, E-mail: carlo.baccigalupi@sissa.it, E-mail: Ruth.Durrer@unige.ch [Université de Genève, Département de Physique Théorique and CAP, 24 quai Ernest-Ansermet, CH-1211 Genève 4 (Switzerland)

    2017-02-01

    We investigate the Alcock-Paczyński (AP) test applied to the Baryon Acoustic Oscillation (BAO) feature in the galaxy correlation function. By using a general formalism that includes relativistic effects, we quantify the importance of the linear redshift space distortions and gravitational lensing corrections to the galaxy number density fluctuation. We show that redshift space distortions significantly affect the shape of the correlation function, both in radial and transverse directions, causing different values of galaxy bias to induce offsets up to 1% in the AP test. On the other hand, we find that the lensing correction around the BAO scale modifies the amplitude but not the shape of the correlation function and therefore does not introduce any systematic effect. Furthermore, we investigate in detail how the AP test is sensitive to redshift binning: a window function in the transverse direction suppresses correlations and shifts the peak position toward smaller angular scales. We determine the correction that should be applied in order to account for this effect when performing the test with data from three future planned galaxy redshift surveys: Euclid, the Dark Energy Spectroscopic Instrument (DESI) and the Square Kilometer Array (SKA).

  1. Prediction of pre-eclampsia: a protocol for systematic reviews of test accuracy

    Directory of Open Access Journals (Sweden)

    Khan Khalid S

    2006-10-01

    Abstract Background Pre-eclampsia, a syndrome of hypertension and proteinuria, is a major cause of maternal and perinatal morbidity and mortality. Accurate prediction of pre-eclampsia is important, since high risk women could benefit from intensive monitoring and preventive treatment. However, decision making is currently hampered by the lack of precise and up-to-date comprehensive evidence summaries on estimates of the risk of developing pre-eclampsia. Methods/Design A series of systematic reviews and meta-analyses will be undertaken to determine, among women in early pregnancy, the accuracy of various tests (history, examinations and investigations) for predicting pre-eclampsia. We will search Medline, Embase, Cochrane Library, MEDION, citation lists of review articles and eligible primary articles, and will contact experts in the field. Reviewers working independently will select studies, extract data, and assess study validity according to established criteria. Language restrictions will not be applied. Bivariate meta-analysis of sensitivity and specificity will be considered for tests whose studies allow generation of 2 × 2 tables. Discussion The results of the test accuracy reviews will be integrated with the results of effectiveness reviews of preventive interventions to assess the impact of test-intervention combinations for the prevention of pre-eclampsia.

  2. Interventions to Improve Follow-Up of Laboratory Test Results Pending at Discharge: A Systematic Review.

    Science.gov (United States)

    Whitehead, Nedra S; Williams, Laurina; Meleth, Sreelatha; Kennedy, Sara; Epner, Paul; Singh, Hardeep; Wooldridge, Kathleene; Dalal, Anuj K; Walz, Stacy E; Lorey, Tom; Graber, Mark L

    2018-02-28

    Failure to follow up test results pending at discharge (TPAD) from hospitals or emergency departments is a major patient safety concern. The purpose of this review is to systematically evaluate the effectiveness of interventions to improve follow-up of laboratory TPAD. We conducted literature searches in PubMed, CINAHL, Cochrane, and EMBASE using search terms for relevant health care settings, transition of patient care, laboratory tests, communication, and pending or missed tests. We solicited unpublished studies from the clinical laboratory community and excluded articles that did not address transitions between settings, did not include an intervention, or were not related to laboratory TPAD. We also excluded letters, editorials, commentaries, abstracts, case reports, and case series. Of the 9,592 abstracts retrieved, 8 met the inclusion criteria and reported the successful communication of TPAD. A team member abstracted predetermined data elements from each study, and a senior scientist reviewed the abstraction. Two experienced reviewers independently appraised the quality of each study using published LMBP™ A-6 scoring criteria. We assessed the body of evidence using the A-6 methodology, and the evidence suggested that electronic tools or one-on-one education increased documentation of pending tests in discharge summaries. We also found that automated notifications improved awareness of TPAD. The interventions were supported by suggestive evidence; this type of evidence is below the level of evidence required for LMBP™ recommendations. We encourage additional research into the impact of these interventions on key processes and health outcomes. © 2018 Society of Hospital Medicine.

  3. Physical modelling and testing in environmental geotechnics

    International Nuclear Information System (INIS)

    Garnier, J.; Thorel, L.; Haza, E.

    2000-01-01

    The preservation of the natural environment has become a major concern, which nowadays affects a wide range of professionals, from local community administrators to natural resource managers (water, wildlife, flora, etc.) and, in the end, to the consumers that we all are. Although totally ignored some fifty years ago, environmental geotechnics has become an emergent area of study and research which borders on the traditional domains with which geo-technicians are confronted (soil and rock mechanics, engineering geology, natural and anthropogenic risk management). Dedicated to experimental approaches (in-situ investigations and tests, laboratory tests, small-scale model testing), the Symposium fits in with the geotechnical domains of environment and transport of soil pollutants. These proceedings report progress in the development of measurement techniques and in studies of the transport of pollutants in saturated and unsaturated soils, in order to improve our understanding of such phenomena within multiphase environments. Experimental investigations of decontamination and isolation methods for polluted soils are discussed. The intention is to assess the impact of in-situ and laboratory tests, as well as small-scale model testing, on engineering practice. One paper is analysed in the INIS database for its specific interest to the nuclear industry. The other ones, concerning energy, are analysed in the ETDE database.

  4. Hypervapotron flow testing with rapid prototype models

    International Nuclear Information System (INIS)

    Driemeyer, D.; Hellwig, T.; Kubik, D.; Langenderfer, E.; Mantz, H.; McSmith, M.; Jones, B.; Butler, J.

    1995-01-01

    A flow test model of the inlet section of a three-channel hypervapotron plate, which has been proposed as a heat sink in the ITER divertor, was prepared using a rapid prototyping stereolithography process that is widely used for component development in US industry. An existing water flow loop at the University of Illinois is being used for isothermal flow tests to collect pressure drop data for comparison with proposed vapotron friction factor correlations. Differential pressure measurements are taken across the test section inlet manifold, the vapotron channel (about a seven inch length), the outlet manifold, and inlet-to-outlet. The differential pressures are currently measured with manometers. Tests were conducted at flow velocities from 1-10 m/s to cover the full range of ITER interest. A tap was also added for a small hypodermic needle to inject dye into the flow channel at several positions to examine the nature of the developing flow field at the entrance to the vapotron section. Follow-on flow tests are planned using a model with adjustable flow channel dimensions to permit more extensive pressure drop data to be collected. This information will be used to update vapotron design correlations for ITER.

  5. Physical modelling and testing in environmental geotechnics

    Energy Technology Data Exchange (ETDEWEB)

    Garnier, J.; Thorel, L.; Haza, E. [Laboratoire Central des Ponts et Chaussees a Nantes, 44 - Nantes (France)

    2000-07-01

    The preservation of the natural environment has become a major concern, which nowadays affects a wide range of professionals, from local community administrators to natural resource managers (water, wildlife, flora, etc.) and, in the end, to the consumers that we all are. Although totally ignored some fifty years ago, environmental geotechnics has become an emergent area of study and research which borders on the traditional domains with which geo-technicians are confronted (soil and rock mechanics, engineering geology, natural and anthropogenic risk management). Dedicated to experimental approaches (in-situ investigations and tests, laboratory tests, small-scale model testing), the Symposium fits in with the geotechnical domains of environment and transport of soil pollutants. These proceedings report progress in the development of measurement techniques and in studies of the transport of pollutants in saturated and unsaturated soils, in order to improve our understanding of such phenomena within multiphase environments. Experimental investigations of decontamination and isolation methods for polluted soils are discussed. The intention is to assess the impact of in-situ and laboratory tests, as well as small-scale model testing, on engineering practice. One paper has been analysed in the INIS database for its specific interest to the nuclear industry.

  6. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S, part of DONG Energy A/S, Denmark. The objective of the tests was to investigate the combined influence of the pile diameter to water depth ratio and the wave height to water depth ratio on wave run-up on piles. The measurements should be used to design access platforms on piles.

  7. Temperature Buffer Test. Final THM modelling

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel

    2012-01-01

    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  8. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  9. Immunochemical faecal occult blood test for colorectal cancer screening: a systematic review.

    Science.gov (United States)

    Syful Azlie, M F; Hassan, M R; Junainah, S; Rugayah, B

    2015-02-01

    A systematic review on the effectiveness and cost-effectiveness of the immunochemical faecal occult blood test (IFOBT) for CRC screening was carried out. A total of 450 relevant titles were identified, 41 abstracts were screened and 18 articles were included in the results. There was a fair level of retrievable evidence to suggest that the sensitivity and specificity of IFOBT vary with the cut-off point of haemoglobin, whereas diagnostic accuracy performance was influenced by high temperature and haemoglobin stability. A screening programme using IFOBT can be effective for prevention of advanced CRC and reduced mortality. There was also evidence to suggest that IFOBT is cost-effective in comparison with no screening, whereby a two-day faecal collection method was found to be cost-effective as a means of screening for CRC. Based on the review, the quantitative IFOBT method can be used in Malaysia as a screening test for CRC. The use of a fully automated IFOBT assay would be highly desirable.

  10. Observational tests of FRW world models

    International Nuclear Information System (INIS)

    Lahav, Ofer

    2002-01-01

    Observational tests for the cosmological principle are reviewed. Assuming the FRW metric we then summarize estimates of cosmological parameters from various datasets, in particular the cosmic microwave background and the 2dF galaxy redshift survey. These and other analyses suggest a best-fit Λ-cold dark matter model with Ω_m = 1 − Ω_Λ ∼ 0.3 and H_0 ∼ 70 km s⁻¹ Mpc⁻¹. It is remarkable that different measurements converge to this 'concordance model', although it remains to be seen if the two main components of this model, the dark matter and the dark energy, are real entities or just 'epicycles'. We point out some open questions related to this fashionable model.

  11. Testing the standard model of particle physics using lattice QCD

    International Nuclear Information System (INIS)

    Water, Ruth S van de

    2007-01-01

    Recent advances in both computers and algorithms now allow realistic calculations of Quantum Chromodynamics (QCD) interactions using the numerical technique of lattice QCD. The methods used in so-called '2+1 flavor' lattice calculations have been verified both by post-dictions of quantities that were already experimentally well-known and by predictions that occurred before the relevant experimental determinations were sufficiently precise. This suggests that the sources of systematic error in lattice calculations are under control, and that lattice QCD can now be reliably used to calculate those weak matrix elements that cannot be measured experimentally but are necessary to interpret the results of many high-energy physics experiments. These same calculations also allow stringent tests of the Standard Model of particle physics, and may therefore lead to the discovery of new physics in the future

  12. A Systematic Review of the Anxiolytic-Like Effects of Essential Oils in Animal Models

    Directory of Open Access Journals (Sweden)

    Damião Pergentino de Sousa

    2015-10-01

    The clinical efficacy of standardized essential oils (such as Lavandula officinalis) in treating anxiety disorders strongly suggests that these natural products are an important candidate source for new anxiolytic drugs. A systematic review of essential oils, their bioactive constituents, and anxiolytic-like activity is conducted. The essential oil with the best profile is Lavandula angustifolia, which has already been tested in controlled clinical trials with positive results. Citrus aurantium using different routes of administration also showed significant effects in several animal models, and this was corroborated by different research groups. Other promising essential oils are Citrus sinensis and bergamot oil, which showed certain clinical anxiolytic actions; along with Achillea wilhemsii, Alpinia zerumbet, Citrus aurantium, and Spiranthera odoratissima, which, like Lavandula angustifolia, appear to exert anxiolytic-like effects without GABA/benzodiazepine activity, thus differing in their mechanisms of action from the benzodiazepines. The anxiolytic activity of 25 compounds commonly found in essential oils is also discussed.

  13. Rapid antigen group A streptococcus test to diagnose pharyngitis: a systematic review and meta-analysis.

    Directory of Open Access Journals (Sweden)

    Emily H Stewart

    BACKGROUND: Pharyngitis management guidelines include estimates of the test characteristics of rapid antigen streptococcus tests (RAST) using a non-systematic approach. OBJECTIVE: To examine the sensitivity and specificity, and sources of variability, of RAST for diagnosing group A streptococcal (GAS) pharyngitis. DATA SOURCES: MEDLINE, Cochrane Reviews, Centre for Reviews and Dissemination, Scopus, SciELO, CINAHL, guidelines, 2000-2012. STUDY SELECTION: Culture as reference standard, all languages. DATA EXTRACTION AND SYNTHESIS: Study characteristics, quality. MAIN OUTCOME(S) AND MEASURE(S): Sensitivity, specificity. RESULTS: We included 59 studies encompassing 55,766 patients. Forty-three studies (18,464 patients) fulfilled the higher quality definition (at least 50 patients, prospective data collection, and no significant biases) and 16 (35,634 patients) did not. For the higher quality immunochromatographic methods in children (10,325 patients), heterogeneity was high for sensitivity (inconsistency [I²] 88%) and specificity (I² 86%). For enzyme immunoassay in children (342 patients), the pooled sensitivity was 86% (95% CI, 79-92%) and the pooled specificity was 92% (95% CI, 88-95%). For the higher quality immunochromatographic methods in the adult population (1,216 patients), the pooled sensitivity was 91% (95% CI, 87 to 94%) and the pooled specificity was 93% (95% CI, 92 to 95%); however, heterogeneity was modest for sensitivity (I² 61%) and specificity (I² 72%). For enzyme immunoassay in the adult population (333 patients), the pooled sensitivity was 86% (95% CI, 81-91%) and the pooled specificity was 97% (95% CI, 96 to 99%); however, heterogeneity was high for sensitivity and specificity (both I² 88%). CONCLUSIONS: RAST immunochromatographic methods appear to be very sensitive and highly specific for diagnosing group A streptococcal pharyngitis among adults but not in children. We could not identify sources of variability among higher quality studies. The
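    The I² values quoted in this abstract (e.g. 88%) measure the share of between-study variability not attributable to chance, derived from Cochran's Q. A short sketch with hypothetical per-study sensitivities and standard errors, not the review's actual data:

```python
# Cochran's Q and the I^2 inconsistency statistic from study-level
# estimates and their standard errors. Inputs below are hypothetical.

def i_squared(estimates, std_errors):
    weights = [1.0 / se ** 2 for se in std_errors]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# five hypothetical studies with widely divergent sensitivities
i2 = i_squared([0.70, 0.85, 0.92, 0.60, 0.95], [0.03, 0.04, 0.02, 0.05, 0.03])
print(f"I2 = {i2:.0f}%")   # high inconsistency, as in the children subgroup
```

    Values of I² above roughly 75% are conventionally read as high heterogeneity, which is why the review declines to pool the childhood immunochromatographic data.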

  14. Preliminary Test for Constitutive Models of CAP

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Ha, Sang Jun; Choi, Hoon [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The development project for a domestic design code was launched, to be used for the safety and performance analysis of pressurized light water reactors. As a part of this project, the CAP (Containment Analysis Package) code has been under development for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drops) for the assessment of containment-specific phenomena, and features assessment capabilities in both multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations. These equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; test results and future plans to improve the level of execution will be discussed in this paper.

  15. Preliminary Test for Constitutive Models of CAP

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The development project for a domestic design code was launched, to be used for the safety and performance analysis of pressurized light water reactors. As a part of this project, the CAP (Containment Analysis Package) code has been under development for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drops) for the assessment of containment-specific phenomena, and features assessment capabilities in both multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations. These equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; test results and future plans to improve the level of execution will be discussed in this paper.

  16. Business model stress testing : A practical approach to test the robustness of a business model

    NARCIS (Netherlands)

    Haaker, T.I.; Bouwman, W.A.G.A.; Janssen, W; de Reuver, G.A.

    Business models and business model innovation are increasingly gaining attention in practice as well as in academic literature. However, the robustness of business models (BM) is seldom tested vis-à-vis the fast and unpredictable changes in digital technologies, regulation and markets. The

  17. Divergence-based tests for model diagnostic

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Esteban, M. D.; Morales, D.; Marhuenda, Y.

    2008-01-01

    Roč. 78, č. 13 (2008), s. 1702-1710 ISSN 0167-7152 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MTM2006-05693 Institutional research plan: CEZ:AV0Z10750506 Keywords : goodness of fit * divergence statistics * GLM * model checking * bootstrap Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.445, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/hobza-divergence-based%20tests%20for%20model%20diagnostic.pdf

  18. A systematic review of current immunological tests for the diagnosis of cattle brucellosis.

    Science.gov (United States)

    Ducrotoy, Marie J; Muñoz, Pilar M; Conde-Álvarez, Raquel; Blasco, José M; Moriyón, Ignacio

    2018-03-01

    Brucellosis is a worldwide extended zoonosis with a heavy economic and public health impact. Cattle, sheep and goats are infected by smooth Brucella abortus and Brucella melitensis, and represent a common source of the human disease. Brucellosis diagnosis in these animals is largely based on detection of a specific immune response. We review here the immunological tests used for the diagnosis of cattle brucellosis. First, we discuss how the diagnostic sensitivity (DSe) and specificity (DSp) balance should be adjusted for brucellosis diagnosis, and the difficulties that brucellosis tests specifically present for the estimation of DSe/DSp in frequentist (gold standard) and Bayesian analyses. Then, we present a systematic review (PubMed, GoogleScholar and CABdirect) of works (154 out of 991; years 1960-August 2017) identified (by title and Abstract content) as DSe and DSp studies of smooth lipopolysaccharide, O-polysaccharide-core, native hapten and protein diagnostic tests. We summarize data from gold standard studies (n = 23) complying with strict inclusion and exclusion criteria with regard to test methodology and the definition of the animals studied (infected and S19- or RB51-vaccinated cattle, and Brucella-free cattle affected or not by false positive serological reactions). We also discuss some studies (smooth lipopolysaccharide tests, protein antibody and delayed type hypersensitivity [skin] tests) that do not meet the criteria and yet fill some of the gaps in information. We review Bayesian studies (n = 5) and report that in most cases priors and assumptions on conditional dependence/independence are not coherent with the variable serological picture of the disease in different epidemiological scenarios and the bases (antigen, isotype and immunoglobulin properties involved) of brucellosis tests, practical experience and the results of gold standard studies. We conclude that very useful lipopolysaccharide (buffered plate antigen and indirect ELISA) and

  19. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but unpredictable, aerodynamically caused model behavior can sometimes cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion of aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one, to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
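
    The fast/slow dual-detector idea can be sketched as follows. This is an assumed reconstruction of the logic, not the authors' analog circuit; the thresholds and filter coefficient are illustrative:

```python
# Assumed reconstruction of the dual-detector logic (illustrative
# thresholds and filter constant; not the authors' analog circuit).
def detect_overload(samples, fast_limit, slow_limit, alpha=0.05):
    """Return (index, reason) of the first trip, or None if no overload.
    A hard instantaneous limit catches violent overloads immediately;
    an exponential moving average (coefficient alpha) catches sustained
    loads while tolerating brief transients that decay in time."""
    filtered = 0.0
    for i, x in enumerate(samples):
        if abs(x) > fast_limit:                    # immediate hard limit
            return i, "fast"
        filtered += alpha * (abs(x) - filtered)    # slow low-pass detector
        if filtered > slow_limit:                  # sustained overload
            return i, "slow"
    return None

# A decaying startup transient passes; a sustained oscillation of the
# same amplitude trips the slow detector.
transient = [8.0 * 0.7 ** k for k in range(50)]
sustained = [8.0 if k % 2 == 0 else -8.0 for k in range(200)]
print(detect_overload(transient, fast_limit=10.0, slow_limit=5.0))  # None
print(detect_overload(sustained, fast_limit=10.0, slow_limit=5.0))
```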

  20. Aspergillus Polymerase Chain Reaction: Systematic Review of Evidence for Clinical Use in Comparison With Antigen Testing

    Science.gov (United States)

    White, P. Lewis; Wingard, John R.; Bretagne, Stéphane; Löffler, Jürgen; Patterson, Thomas F.; Slavin, Monica A.; Barnes, Rosemary A.; Pappas, Peter G.; Donnelly, J. Peter

    2015-01-01

    Background. Aspergillus polymerase chain reaction (PCR) was excluded from the European Organisation for the Research and Treatment of Cancer/Mycoses Study Group (EORTC/MSG) definitions of invasive fungal disease because of limited standardization and validation. The definitions are being revised. Methods. A systematic literature review was performed to identify analytical and clinical information available on inclusion of galactomannan enzyme immunoassay (GM-EIA) (2002) and β-d-glucan (2008), providing a minimal threshold when considering PCR. Categorical parameters and statistical performance were compared. Results. When incorporated, GM-EIA and β-d-glucan sensitivities and specificities for diagnosing invasive aspergillosis were 81.6% and 91.6%, and 76.9% and 89.4%, respectively. Aspergillus PCR has similar sensitivity and specificity (76.8%–88.0% and 75.0%–94.5%, respectively) and comparable utility. Methodological recommendations and commercial PCR assays assist standardization. Although all tests have limitations, currently, PCR is the only test with independent quality control. Conclusions. We propose that there is sufficient evidence that is at least equivalent to that used to include GM-EIA and β-d-glucan testing, and that PCR is now mature enough for inclusion in the EORTC/MSG definitions. PMID:26113653

  1. Designing healthy communities: Testing the walkability model

    OpenAIRE

    Zuniga-Teran, Adriana; Orr, Barron; Gimblett, Randy; Chalfoun, Nader; Marsh, Stuart; Guertin, David; Going, Scott

    2017-01-01

    Research from multiple domains has provided insights into how neighborhood design can be improved to have a more favorable effect on physical activity, a concept known as walkability. The relevant research findings/hypotheses have been integrated into a Walkability Framework, which organizes the design elements into nine walkability categories. The purpose of this study was to test whether this conceptual framework can be used as a model to measure the interactions between the built environme...

  2. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made such a design diagram available for the trunk of a Dolos breakwater without superstructure (Burcharth et al. 1992). To extend the design diagram to cover Dolos breakwaters with superstructure, 2-D model tests of a Dolos breakwater with a wave wall were included in the project Rubble Mound Breakwater Failure Modes, sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92. The main focus was on the Dolos breakwater with a high superstructure, where there was almost no overtopping; this case is believed to be the most dangerous one. A test of the Dolos breakwater with a low superstructure was also performed. The objective of the last part of the experiment was to investigate the influence...

  3. Acute Myocardial Infarction Readmission Risk Prediction Models: A Systematic Review of Model Performance.

    Science.gov (United States)

    Smith, Lauren N; Makam, Anil N; Darden, Douglas; Mayo, Helen; Das, Sandeep R; Halm, Ethan A; Nguyen, Oanh Kieu

    2018-01-01

    Hospitals are subject to federal financial penalties for excessive 30-day hospital readmissions for acute myocardial infarction (AMI). Prospectively identifying patients hospitalized with AMI at high risk for readmission could help prevent 30-day readmissions by enabling targeted interventions. However, the performance of AMI-specific readmission risk prediction models is unknown. We systematically searched the published literature through March 2017 for studies of risk prediction models for 30-day hospital readmission among adults with AMI. We identified 11 studies of 18 unique risk prediction models across diverse settings primarily in the United States, of which 16 models were specific to AMI. The median overall observed all-cause 30-day readmission rate across studies was 16.3% (range, 10.6%-21.0%). Six models were based on administrative data; 4 on electronic health record data; 3 on clinical hospital data; and 5 on cardiac registry data. Models included 7 to 37 predictors, of which demographics, comorbidities, and utilization metrics were the most frequently included domains. Most models, including the Centers for Medicare and Medicaid Services AMI administrative model, had modest discrimination (median C statistic, 0.65; range, 0.53-0.79). Of the 16 reported AMI-specific models, only 8 models were assessed in a validation cohort, limiting generalizability. Observed risk-stratified readmission rates ranged from 3.0% among the lowest-risk individuals to 43.0% among the highest-risk individuals, suggesting good risk stratification across all models. Current AMI-specific readmission risk prediction models have modest predictive ability and uncertain generalizability given methodological limitations. 
No existing models provide actionable information in real time to enable early identification and risk stratification of patients with AMI before hospital discharge, a functionality needed to optimize the potential effectiveness of readmission reduction interventions.
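
    The C statistic quoted above (median 0.65) is a rank-based concordance probability: the chance that a randomly chosen readmitted patient receives a higher predicted risk than a randomly chosen non-readmitted one. A minimal sketch with toy data (not data from the review):

```python
# Minimal sketch of the C statistic (concordance index), with ties
# counted as half-concordant. Toy data, not from the review.
def c_statistic(risks, outcomes):
    """risks: predicted probabilities; outcomes: 1 = readmitted, 0 = not."""
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = len(pos) * len(neg)
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / pairs

risks    = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcomes = [1,   0,   1,   0,   1,   0,   0,   0]
print(c_statistic(risks, outcomes))  # 0.8 for this toy example
```

    A value of 0.5 corresponds to no discrimination and 1.0 to perfect discrimination, which is why the reported median of 0.65 is described as modest.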

  4. Inverse hydrochemical models of aqueous extracts tests

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, L.; Samper, J.; Montenegro, L.

    2008-10-10

    The aqueous extract test is a laboratory technique commonly used to measure the amount of soluble salts in a soil sample after adding a known mass of distilled water. Measured aqueous extract data have to be re-interpreted in order to infer the porewater chemical composition of the sample, because the porewater chemistry changes significantly due to dilution and the chemical reactions which take place during extraction. Here we present an inverse hydrochemical model to estimate porewater chemical composition from measured water content, aqueous extract, and mineralogical data. The model accounts for acid-base, redox, aqueous complexation, mineral dissolution/precipitation, gas dissolution/exsolution, cation exchange and surface complexation reactions, all of which are assumed to take place at local equilibrium. It has been solved with INVERSE-CORE2D and tested with bentonite samples taken from the FEBEX (Full-scale Engineered Barrier EXperiment) in situ test. The inverse model reproduces most of the measured aqueous data except bicarbonate and provides an effective, flexible and comprehensive method to estimate the porewater chemical composition of clays. The main uncertainties are related to kinetic calcite dissolution and variations in CO2(g) pressure.
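
    For a non-reactive (conservative) solute, the dilution part of the re-interpretation reduces to a mass-balance back-calculation; the reactive corrections are what the inverse model adds on top. A hedged sketch with invented numbers (the function name and sample values are illustrative, not from the paper):

```python
# Dilution-only back-calculation for a conservative solute (hypothetical
# helper and numbers; the paper's inverse model additionally corrects
# for the chemical reactions occurring during extraction).
def porewater_conc(c_extract, water_content, solid_mass, water_added):
    """Back-calculate porewater concentration from an aqueous extract.
    c_extract: measured extract concentration (mol/L);
    water_content: kg water per kg dry solid; masses in kg."""
    m_pore = water_content * solid_mass   # porewater in the sample
    m_total = m_pore + water_added        # water present during extraction
    return c_extract * m_total / m_pore   # undo the dilution

# 20 g sample at 14% gravimetric water content, extracted with 80 g water:
c_pw = porewater_conc(c_extract=0.01, water_content=0.14,
                      solid_mass=0.020, water_added=0.080)
print(c_pw)  # the extract is ~29.6x more dilute than the porewater
```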

  5. Commercial serological antibody detection tests for the diagnosis of pulmonary tuberculosis: a systematic review.

    Directory of Open Access Journals (Sweden)

    Karen R Steingart

    2007-06-01

    BACKGROUND: The global tuberculosis epidemic results in nearly 2 million deaths and 9 million new cases of the disease a year. The vast majority of tuberculosis patients live in developing countries, where the diagnosis of tuberculosis relies on the identification of acid-fast bacilli on unprocessed sputum smears using conventional light microscopy. Microscopy has high specificity in tuberculosis-endemic countries, but modest sensitivity which varies among laboratories (range 20% to 80%). Moreover, the sensitivity is poor for paucibacillary disease (e.g., pediatric and HIV-associated tuberculosis). Thus, the development of rapid and accurate new diagnostic tools is imperative. Immune-based tests are potentially suitable for use in low-income countries as some test formats can be performed at the point of care without laboratory equipment. Currently, dozens of distinct commercial antibody detection tests are sold in developing countries. The question is "do they work?" METHODS AND FINDINGS: We conducted a systematic review to assess the accuracy of commercial antibody detection tests for the diagnosis of pulmonary tuberculosis. Studies from all countries using culture and/or microscopy smear for confirmation of pulmonary tuberculosis were eligible. Studies with fewer than 50 participants (25 patients and 25 control participants) were excluded. In a comprehensive search, we identified 68 studies. The results demonstrate that (1) overall, commercial tests vary widely in performance; (2) sensitivity is higher in smear-positive than smear-negative samples; (3) in studies of smear-positive patients, Anda-TB IgG by enzyme-linked immunosorbent assay shows limited sensitivity (range 63% to 85%) and inconsistent specificity (range 73% to 100%); (4) specificity is higher in healthy volunteers than in patients in whom tuberculosis disease is initially suspected and subsequently ruled out; and (5) there are insufficient data to determine the accuracy of most

  6. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with movement of a sand bank or with general subsidence. The scour protection in the tests is made of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because the scour hole often starts to generate immediately after the structure has been placed. It is therefore difficult to establish scour protection on the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure were used for the tests, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on a sand bed was 1.6 times the cylinder diameter; 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees; 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present tests virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  7. A systematic study of multiple minerals precipitation modelling in wastewater treatment.

    Science.gov (United States)

    Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J

    2015-11-15

    Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed for integration into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h⁻¹ with relative confidence intervals of 10-80%. Confidence regions for the kinetic rate coefficients were often asymmetric, with model-data residuals increasing more gradually with larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data are lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant). 
Implementation was therefore relatively flexible, and would be readily expandable to include other
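
    The equilibrium/kinetic split described above can be illustrated with a one-mineral toy model in which the precipitation rate is driven by the saturation ratio, r = k(Ω − 1). All names and numbers below are illustrative, not the paper's:

```python
# Toy one-mineral version of the equilibrium/kinetic split (all names
# and numbers illustrative): kinetic rate r = k * (Omega - 1) when the
# solution is supersaturated (Omega > 1), zero otherwise.
def precipitate(c0, ksp, k=2.0, dt=0.01, t_end=5.0):
    """Euler integration of kinetic precipitation of a 1:1 mineral
    A + B -> AB(s), with equal dissolved concentrations c of A and B.
    k is the kinetic rate coefficient (1/h); ksp the solubility product."""
    c, t = c0, 0.0
    ceq = ksp ** 0.5                          # equilibrium solubility
    while t < t_end:
        omega = (c * c) / ksp                 # saturation ratio
        if omega > 1.0:                       # supersaturated: precipitate
            rate = k * (omega - 1.0)
            c = max(c - rate * ceq * dt, ceq) # never overshoot equilibrium
        t += dt
    return c

c_final = precipitate(c0=2e-3, ksp=1e-6)
print(c_final)  # relaxes toward the equilibrium solubility sqrt(ksp) = 1e-3
```

    As in the paper, the rate vanishes as the solution approaches equilibrium, so the kinetic coefficient mainly controls how fast, not how far, precipitation proceeds.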

  8. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d′ and its standard error become the "usual" output of the statistical analysis. The d′ for the monadic A-NOT A method is shown to appear as a standard...
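
    For the 2-AFC protocol the Thurstonian link is the standard result pc = Φ(d′/√2); inverting it recovers d′ from the observed proportion correct, and the delta method gives its standard error. A sketch of that textbook relation (not the paper's GLM code):

```python
# Standard Thurstonian link for the 2-AFC test (a textbook result, not
# code from the paper): pc = Phi(d'/sqrt(2)), inverted to recover d',
# with a delta-method standard error.
from math import exp, pi, sqrt
from statistics import NormalDist

def d_prime_2afc(correct, n):
    """Estimate d' and its standard error from 2-AFC count data."""
    pc = correct / n
    z = NormalDist().inv_cdf(pc)                  # probit of proportion correct
    d = sqrt(2) * z
    phi = exp(-z * z / 2) / sqrt(2 * pi)          # standard normal density at z
    se = sqrt(2) * sqrt(pc * (1 - pc) / n) / phi  # delta method
    return d, se

d, se = d_prime_2afc(correct=42, n=60)            # 70% correct responses
print(f"d' = {d:.3f} +/- {se:.3f}")
```

    Each discrimination protocol has its own psychometric function linking pc to δ; the 2-AFC case shown here is the simplest of the four treated in the paper.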

  9. Information Processing and Risk Perception: An Adaptation of the Heuristic-Systematic Model.

    Science.gov (United States)

    Trumbo, Craig W.

    2002-01-01

    Describes the heuristic-systematic information-processing model and risk perception, the two major conceptual areas of the analysis. Discusses the proposed model, describing the context of the data collections (public health communication involving cancer epidemiology) and providing the results of a set of three replications using the proposed model.

  10. Blast Testing and Modelling of Composite Structures

    DEFF Research Database (Denmark)

    Giversen, Søren

    The motivation for this work is a desire to find lightweight alternatives to high strength steel as the material for armouring in military vehicles. With high strength steel, an increase in the level of armouring has a significant impact on the vehicle weight, affecting for example the manoeuvrability and top speed negatively, which ultimately affects the safety of the personnel in the vehicle. Strong and light materials, such as fibre reinforced composites, could therefore act as substitutes for the high strength steel and minimize the impact on the vehicle... In further work this set-up should be improved such that the modelled pressure can be validated. For tests performed with a 250 g charge load, comparisons with model data showed poor agreement. This was found to be due to improper design of the modelled laminate panels, where the layer interface delamination...

  11. BIOMOVS test scenario model comparison using BIOPATH

    International Nuclear Information System (INIS)

    Grogan, H.A.; Van Dorp, F.

    1986-07-01

    This report presents the results of the irrigation test scenario, presented in the BIOMOVS intercomparison study, calculated by the computer code BIOPATH. This scenario defines a constant release of Tc-99 and Np-237 into groundwater that is used for irrigation. The system of compartments used to model the biosphere is based upon an area in northern Switzerland and is essentially the same as that used in Projekt Gewaehr to assess the radiological impact of a high level waste repository. Two separate irrigation methods are considered, namely ditch and overhead irrigation. Their influence on the resultant activities calculated in the groundwater, soil and different food products, as a function of time, is evaluated. The sensitivity of the model to parameter variations is analysed, which allows a deeper understanding of the model chain. These results are assessed subjectively in a first effort to realistically quantify the uncertainty associated with each calculated activity.
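
    A compartment model of this kind is a set of coupled first-order transfer equations. The sketch below uses an invented three-compartment chain (groundwater → soil → crop) with illustrative rate constants, not the Swiss site data used by BIOPATH:

```python
# Invented three-compartment chain in the spirit of a BIOPATH-type
# biosphere model (rate constants and release are illustrative, not the
# Swiss site data): constant release to groundwater, first-order
# transfers groundwater -> soil (irrigation) -> crop, plus decay.
def step(inv, release, k, lam, dt):
    """One Euler step. inv: [gw, soil, crop] inventories (Bq);
    k: transfer coefficients (1/y); lam: decay constant (1/y)."""
    gw, soil, crop = inv
    d_gw   = release - (k["gw_soil"] + lam) * gw
    d_soil = k["gw_soil"] * gw - (k["soil_crop"] + lam) * soil
    d_crop = k["soil_crop"] * soil - lam * crop
    return [gw + d_gw * dt, soil + d_soil * dt, crop + d_crop * dt]

k = {"gw_soil": 0.2, "soil_crop": 0.05}
lam = 1e-6                       # long-lived nuclide: decay barely matters here
inv = [0.0, 0.0, 0.0]
for _ in range(20000):           # 200 years at dt = 0.01 y
    inv = step(inv, release=1.0, k=k, lam=lam, dt=0.01)
print(inv)  # groundwater approaches release / k["gw_soil"] = 5 Bq
```

    Sensitivity analysis of the kind described in the abstract amounts to perturbing entries of `k` and observing the change in the steady-state inventories.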

  12. Thermal modelling of Advanced LIGO test masses

    International Nuclear Information System (INIS)

    Wang, H; Dovale Álvarez, M; Mow-Lowry, C M; Freise, A; Blair, C; Brooks, A; Kasprzack, M F; Ramette, J; Meyers, P M; Kaufer, S; O’Reilly, B

    2017-01-01

    High-reflectivity fused silica mirrors are at the epicentre of today’s advanced gravitational wave detectors. In these detectors, the mirrors interact with high power laser beams. As a result of finite absorption in the high reflectivity coatings the mirrors suffer from a variety of thermal effects that impact on the detectors’ performance. We propose a model of the Advanced LIGO mirrors that introduces an empirical term to account for the radiative heat transfer between the mirror and its surroundings. The mechanical mode frequency is used as a probe for the overall temperature of the mirror. The thermal transient after power build-up in the optical cavities is used to refine and test the model. The model provides a coating absorption estimate of 1.5–2.0 ppm and estimates that 0.3 to 1.3 ppm of the circulating light is scattered onto the ring heater.

  13. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra, by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy is needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  14. Extensive and systematic rewiring of histone post-translational modifications in cancer model systems.

    Science.gov (United States)

    Noberini, Roberta; Osti, Daniela; Miccolo, Claudia; Richichi, Cristina; Lupia, Michela; Corleone, Giacomo; Hong, Sung-Pil; Colombo, Piergiuseppe; Pollo, Bianca; Fornasari, Lorenzo; Pruneri, Giancarlo; Magnani, Luca; Cavallaro, Ugo; Chiocca, Susanna; Minucci, Saverio; Pelicci, Giuliana; Bonaldi, Tiziana

    2018-05-04

    Histone post-translational modifications (PTMs) generate a complex combinatorial code that regulates gene expression and nuclear functions, and whose deregulation has been documented in different types of cancers. Therefore, the availability of relevant culture models that can be manipulated and that retain the epigenetic features of the tissue of origin is absolutely crucial for studying the epigenetic mechanisms underlying cancer and testing epigenetic drugs. In this study, we took advantage of quantitative mass spectrometry to comprehensively profile histone PTMs in patient tumor tissues, primary cultures and cell lines from three representative tumor models, breast cancer, glioblastoma and ovarian cancer, revealing an extensive and systematic rewiring of histone marks in cell culture conditions, which includes a decrease of H3K27me2/me3, H3K79me1/me2 and H3K9ac/K14ac, and an increase of H3K36me1/me2. While some changes occur in short-term primary cultures, most of them are instead time-dependent and appear only in long-term cultures. Remarkably, such changes mostly revert in cell line- and primary cell-derived in vivo xenograft models. Taken together, these results support the use of xenografts as the most representative models of in vivo epigenetic processes, suggesting caution when using cultured cells, in particular cell lines and long-term primary cultures, for epigenetic investigations.

  15. Correlation of the New York Heart Association classification and the cardiopulmonary exercise test: A systematic review.

    Science.gov (United States)

    Lim, Fang Yi; Yap, Jonathan; Gao, Fei; Teo, Ling Li; Lam, Carolyn S P; Yeo, Khung Keong

    2018-07-15

    The New York Heart Association (NYHA) classification is frequently used in the management of heart failure but may be limited by patient and physician subjectivity. Cardiopulmonary exercise testing (CPET) provides a potentially more objective measurement of functional status. We aim to study the correlation between NYHA classification and peak oxygen consumption (pVO2) on CPET within and across published studies. A systematic literature review of all studies reporting both NYHA class and CPET data was performed, and pVO2 from CPET was correlated to reported NYHA class within and across eligible studies. 38 studies involving 2645 patients were eligible. Heterogeneity was assessed by the Q statistic, which is a χ2 test and marker of systematic differences between studies. Within each NYHA class, significant heterogeneity in pVO2 was seen across studies: NYHA I (n = 17, Q = 486.7, p < 0.0001), II (n = 24, Q = 381.0, p < 0.0001), III (n = 32, Q = 761.3, p < 0.0001) and IV (n = 5, Q = 12.8, p = 0.012). Significant differences in mean pVO2 were observed between NYHA I and II (23.8 vs 17.6 mL/(kg·min), p < 0.0001) and II and III (17.6 vs 13.3 mL/(kg·min), p < 0.0001), but not between NYHA III and IV (13.3 vs 12.5 mL/(kg·min), p = 0.45). These differences remained significant after adjusting for age, gender, ejection fraction and region of study. There was a general inverse correlation between NYHA class and pVO2. However, significant heterogeneity in pVO2 exists across studies within each NYHA class. While the NYHA classification holds clinical value in heart failure management, direct comparison across studies may have its limitations.
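
    The Q statistic used above is Cochran's Q: a weighted sum of squared deviations of study means from the pooled mean, with inverse-variance weights, referred to a χ² distribution with (number of studies − 1) degrees of freedom. A sketch with invented pVO2 data (not the review's):

```python
# Cochran's Q heterogeneity statistic (standard meta-analytic formula;
# the study means and standard errors below are invented).
def cochran_q(means, ses):
    """Q = sum_i w_i * (x_i - x_bar)^2 with w_i = 1/SE_i^2;
    compare against chi-squared with len(means) - 1 degrees of freedom."""
    w = [1.0 / se ** 2 for se in ses]
    xbar = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    q = sum(wi * (m - xbar) ** 2 for wi, m in zip(w, means))
    return q, len(means) - 1

# Hypothetical NYHA class II pVO2 means (mL/(kg.min)) and standard errors:
means = [18.2, 16.5, 19.1, 15.0]
ses = [0.5, 0.6, 0.7, 0.4]
q, dof = cochran_q(means, ses)
print(q, dof)  # Q far above dof signals heterogeneity beyond sampling error
```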

  16. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  17. Should we systematically test patients with clinically isolated syndrome for auto-antibodies?

    Science.gov (United States)

    Negrotto, Laura; Tur, Carmen; Tintoré, Mar; Arrambide, Georgina; Sastre-Garriga, Jaume; Río, Jordi; Comabella, Manuel; Nos, Carlos; Galán, Ingrid; Vidal-Jordana, Angela; Simon, Eva; Castilló, Joaquín; Palavra, Filipe; Mitjana, Raquel; Auger, Cristina; Rovira, Àlex; Montalban, Xavier

    2015-12-01

    Several autoimmune diseases (ADs) can mimic multiple sclerosis (MS). For this reason, testing for auto-antibodies (auto-Abs) is often included in the diagnostic work-up of patients with a clinically isolated syndrome (CIS). The purpose was to study how useful it is to systematically determine antinuclear antibodies, anti-SSA and anti-SSB in a non-selected cohort of CIS patients, with regard to the identification of other ADs that could represent an alternative diagnosis. From a prospective CIS cohort, we selected 772 patients in whom auto-Ab levels were tested within the first year from CIS. Baseline characteristics of auto-Ab positive and negative patients were compared. A retrospective review of clinical records was then performed in the auto-Ab positive patients to identify those who developed ADs during follow-up. One or more auto-Abs were present in 29.4% of patients. Only 1.8% of patients developed other ADs during a mean follow-up of 6.6 years. In none of these cases was the concurrent AD considered the cause of the CIS. In all cases the diagnosis of the AD resulted from the development of signs and/or symptoms suggestive of each disease. Antinuclear antibodies, anti-SSA and anti-SSB should not be routinely determined in CIS patients but only in those presenting symptoms suggestive of other ADs.

  18. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings: as gas penetrates after the gas entry pressure is reached, it may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. 
The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  19. Adversarial life testing: A Bayesian negotiation model

    International Nuclear Information System (INIS)

    Rufo, M.J.; Martín, J.; Pérez, C.J.

    2014-01-01

    Life testing is a procedure intended to facilitate the process of making decisions in the context of industrial reliability. Negotiation, on the other hand, is a process of making joint decisions that has one of its main foundations in decision theory. A Bayesian sequential model of negotiation in the context of adversarial life testing is proposed. This model considers a general setting in which a manufacturer offers a product batch to a consumer. It is assumed that the reliability of the product is measured in terms of its lifetime. Furthermore, both the manufacturer and the consumer have to use their own information with respect to the quality of the product. Under these assumptions, two situations can be analyzed. For both of them, the main aim is to accept or reject the product batch based on the product reliability. This topic is related to a reliability demonstration problem. The procedure is applied to a class of distributions that belong to the exponential family. Thus, a unified framework addressing the main topics in the considered Bayesian model is presented. An illustrative example shows that the proposed technique can be easily applied in practice.
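
    For exponential lifetimes, a member of the exponential family mentioned above, the Bayesian update is conjugate: a Gamma(a, b) prior on the failure rate becomes Gamma(a + n, b + T) after n failures in total time-on-test T. The accept/reject core can be sketched as follows (the paper's sequential negotiation machinery is richer; the prior, counts and specification below are hypothetical):

```python
# Conjugate accept/reject core for exponential lifetimes (hypothetical
# prior, counts and spec; the paper's sequential negotiation model is
# richer than this single decision).
import random

def accept_batch(a, b, n_failures, total_time, rate_spec,
                 threshold=0.9, draws=100_000, seed=1):
    """Accept the batch if the posterior probability that the failure
    rate is below rate_spec exceeds threshold (Monte Carlo estimate
    under the Gamma(a + n, b + T) posterior)."""
    rng = random.Random(seed)
    shape, rate = a + n_failures, b + total_time
    p = sum(rng.gammavariate(shape, 1.0 / rate) < rate_spec
            for _ in range(draws)) / draws
    return p, p >= threshold

# 3 failures in 4000 h of testing; spec: failure rate below 1 per 500 h.
p, ok = accept_batch(a=1.0, b=100.0, n_failures=3, total_time=4000.0,
                     rate_spec=1 / 500)
print(p, ok)  # most posterior mass lies below the spec, so accept
```

    In the negotiation setting, manufacturer and consumer would each run this update with their own prior, which is what makes the joint decision non-trivial.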

  20. Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.

    Science.gov (United States)

    Shao, Lijing

    2014-03-21

    The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. This constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved over existing limits by factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  1. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two experimental energy scales are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e+e- colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ∼ 10^8 produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions.

  2. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN

    2015-12-01

    Full Text Available In this study, systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) have been analysed in order to determine their uncertainties. Experiments conducted within the framework of mathematical and physical rules to solve engineering problems involve uncertainty in both measurements and calculations. To judge the reliability of the obtained values, the existing uncertainties should be expressed as quantities. If the uncertainty of a measurement system is not known, its results do not carry universal value. Resistance, moreover, is one of the most important parameters to be considered in the ship design process. Ship resistance cannot be determined precisely and reliably during the design phase because of the uncertainty sources involved in determining the resistance value. This may make it harder to meet the required specifications in later design steps. The uncertainty arising from the resistance test has been estimated and compared for a displacement-type ship and for high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 procedures for uncertainty analysis. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.
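    The ITTC procedures referenced above follow the general ISO-GUM pattern of combining elemental uncertainty sources in root-sum-square fashion. A minimal sketch with hypothetical bias limits (illustrative values only, not the laboratory's actual uncertainty budget):

    ```python
    from math import sqrt

    def combined_uncertainty(bias_limits, precision_limit):
        """Root-sum-square combination of elemental bias limits with the precision
        (repeatability) limit, in the spirit of the ISO-GUM/ITTC approach."""
        bias = sqrt(sum(b * b for b in bias_limits))
        return sqrt(bias ** 2 + precision_limit ** 2)

    # Hypothetical elemental bias limits (% of resistance): calibration, alignment, temperature
    u_total = combined_uncertainty([0.4, 0.3, 0.2], precision_limit=0.5)
    ```

    The actual ITTC 2002 and 2014 procedures differ in how the elemental terms are enumerated and propagated; this sketch only shows the common combination step.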

  3. Tests on thirteen navy type model propellers

    Science.gov (United States)

    Durand, W F

    1927-01-01

    The tests on these model propellers were undertaken for the purpose of determining the performance coefficients and characteristics for certain selected series of propellers of form and type as commonly used in recent navy designs. The first series includes seven propellers with pitch ratio varying in steps of 0.10 up to 1.10, with the area, form of blade, thickness, etc., representing an arbitrary standard propeller which had shown good results. The second series covers changes in thickness of blade section, other things equal, and the third series, changes in blade area, other things equal. These models are all of 36-inch diameter. Propellers A to G form the series on pitch ratio; C, N, I, J the series on thickness of section; and K, M, C, L the series on area. (author)

  4. Impact of systematic HIV testing on case finding and retention in care at a primary care clinic in South Africa.

    Science.gov (United States)

    Clouse, Kate; Hanrahan, Colleen F; Bassett, Jean; Fox, Matthew P; Sanne, Ian; Van Rie, Annelies

    2014-12-01

    Systematic, opt-out HIV counselling and testing (HCT) may diagnose individuals at lower levels of immunodeficiency but may impact loss to follow-up (LTFU) if healthier people are less motivated to engage and remain in HIV care. We explored LTFU and patient clinical outcomes under two different HIV testing strategies. We compared patient characteristics and retention in care between adults newly diagnosed with HIV by either voluntary counselling and testing (VCT) plus targeted provider-initiated counselling and testing (PITC) or systematic HCT at a primary care clinic in Johannesburg, South Africa. One thousand one hundred and forty-four adults were newly diagnosed by VCT/PITC and 1124 by systematic HCT. Two-thirds of diagnoses were in women. Median CD4 count at HIV diagnosis (251 vs. 264 cells/μl, P = 0.19) and proportion of individuals eligible for antiretroviral therapy (ART) (67.2% vs. 66.7%, P = 0.80) did not differ by HCT strategy. Within 1 year of HIV diagnosis, half were LTFU: 50.5% under VCT/PITC and 49.6% under systematic HCT (P = 0.64). The overall hazard of LTFU was not affected by testing policy (aHR 0.98, 95%CI: 0.87-1.10). Independent of HCT strategy, males, younger adults and those ineligible for ART were at higher risk of LTFU. Implementation of systematic HCT did not increase baseline CD4 count. Overall retention in the first year after HIV diagnosis was low (37.9%), especially among those ineligible for ART, but did not differ by testing strategy. Expansion of HIV testing should coincide with effective strategies to increase retention in care, especially among those not yet eligible for ART at initial diagnosis. © 2014 John Wiley & Sons Ltd.

  5. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  6. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  7. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool, that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer......-format and COM-objects, are incorporated to allow the export and import of mathematical models; 5) a user interface that provides the work-flow and data-flow to guide the user through the different modelling tasks....

  8. Systematic and reliable multiscale modelling of lithium batteries

    Science.gov (United States)

    Atalay, Selcuk; Schmuck, Markus

    2017-11-01

    Motivated by the increasing interest in lithium batteries as energy storage devices (e.g. cars/bicycles/public transport, social robot companions, mobile phones, and tablets), we investigate three basic cells: (i) a single intercalation host; (ii) a periodic arrangement of intercalation hosts; and (iii) a rigorously upscaled formulation of (ii) as initiated in. By systematically accounting for Li transport and interfacial reactions in (i)-(iii), we compute the associated characteristic current-voltage curves and power densities. Finally, we discuss the influence of how the intercalation particles are arranged. Our findings are expected to improve the understanding of how microscopic properties affect the battery behaviour observed on the macroscale, and at the same time the upscaled formulation (iii) serves as an efficient computational tool. This work has been supported by EPSRC, UK, through the Grant No. EP/P011713/1.

  9. HCV Core Antigen Testing for Diagnosis of HCV Infection: A systematic review and meta-analysis

    Science.gov (United States)

    Freiman, J. Morgan; Tran, Trang M.; Schumacher, Samuel G; White, Laura F.; Ongarello, Stefano; Cohn, Jennifer; Easterbrook, Philippa J.; Linas, Benjamin P.; Denkinger, Claudia M.

    2017-01-01

    Background Diagnosis of chronic Hepatitis C Virus (HCV) infection requires both a positive HCV antibody screen and a confirmatory nucleic acid test (NAT). HCV core antigen (HCVcAg) is a potential alternative to NAT. Purpose This systematic review evaluated the accuracy of diagnosis of active HCV infection among adults and children for five HCVcAg tests compared to NAT. Data Sources EMBASE, PubMed, Web of Science, Scopus, and Cochrane from 1990 through March 31, 2016. Study Selection Cohort, cross-sectional, and randomized controlled trials were included without language restriction. Data Extraction Two independent reviewers extracted data and assessed quality using an adapted Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data Synthesis 44 studies evaluated 5 index tests. Studies for the ARCHITECT had the highest quality, while those for Ortho ELISA were the lowest. From bivariate analyses, the sensitivity and specificity with 95% CI were: ARCHITECT 93.4% (90.1, 96.4) and 98.8% (97.4, 99.5), Ortho ELISA 93.2% (81.6, 97.7) and 99.2% (87.9, 100), and Hunan Jynda 59.5% (46.0, 71.7) and 82.9% (58.6, 94.3). Insufficient data were available for a meta-analysis for Lumipulse and Lumispot. In three quantitative studies using ARCHITECT, HCVcAg correlated closely with HCV RNA above 3000 IU/mL. Limitations There were insufficient data on covariates such as HIV or HBV status for sub-group analyses. Few studies reported genotypes of isolates and there were scant data for genotypes 4, 5, and 6. Most studies were conducted in high resource settings within reference laboratories. Conclusions HCVcAg assays with signal amplification have high sensitivity, high specificity, and good correlation with HCV RNA above 3000 IU/mL. HCVcAg assays have the potential to replace NAT in high HCV prevalence settings. PMID:27322622

  10. Modeling of novel diagnostic strategies for active tuberculosis - a systematic review: current practices and recommendations.

    Directory of Open Access Journals (Sweden)

    Alice Zwerling

    Full Text Available The field of diagnostics for active tuberculosis (TB) is rapidly developing. TB diagnostic modeling can help to inform policy makers and support complicated decisions on diagnostic strategy, with important budgetary implications. Demand for TB diagnostic modeling is likely to increase, and an evaluation of current practice is important. We aimed to systematically review all studies employing mathematical modeling to evaluate cost-effectiveness or epidemiological impact of novel diagnostic strategies for active TB. PubMed, personal libraries and reference lists were searched to identify eligible papers. We extracted data on a wide variety of model structure, parameter choices, sensitivity analyses and study conclusions, which were discussed during a meeting of content experts. From 5619 records a total of 36 papers were included in the analysis. Sixteen papers included population impact/transmission modeling, 5 were health systems models, and 24 included estimates of cost-effectiveness. Transmission and health systems models included specific structure to explore the importance of the diagnostic pathway (n = 4), key determinants of diagnostic delay (n = 5), operational context (n = 5), and the pre-diagnostic infectious period (n = 1). The majority of models implemented sensitivity analysis, although only 18 studies described multi-way sensitivity analysis of more than 2 parameters simultaneously. Among the models used to make cost-effectiveness estimates, the most frequent diagnostic assays studied included Xpert MTB/RIF (n = 7) and alternative nucleic acid amplification tests (NAATs) (n = 4). Most (n = 16) of the cost-effectiveness models compared new assays to an existing baseline and generated an incremental cost-effectiveness ratio (ICER). Although models have addressed a small number of important issues, many decisions regarding implementation of TB diagnostics are being made without the full benefits of insight from mathematical

  11. KIDMED TEST; PREVALENCE OF LOW ADHERENCE TO THE MEDITERRANEAN DIET IN CHILDREN AND YOUNG; A SYSTEMATIC REVIEW.

    Science.gov (United States)

    García Cabrera, S; Herrera Fernández, N; Rodríguez Hernández, C; Nissensohn, M; Román-Viñas, B; Serra-Majem, L

    2015-12-01

    During the last decades, a quick and important modification of dietary habits has been observed in the Mediterranean countries, especially among young people. Several authors have evaluated the pattern of adherence to the Mediterranean Diet in this population group by using the KIDMED test. The purpose of this study was to evaluate the adherence to the Mediterranean Diet among children and adolescents by using the KIDMED test through a systematic review and meta-analysis. The PubMed database was accessed until January 2014. Only cross-sectional studies evaluating children and young people were included. A random effects model was considered. Eighteen cross-sectional studies were included. The population age ranged from 2 to 25 years. The total sample included 24 067 people. The overall percentage of high adherence to the Mediterranean Diet was 10% (95% CI 0.07-0.13), while low adherence was 21% (95% CI 0.14-0.27). In the low adherence group, further analyses were performed by defined subgroups, finding differences by the age of the population and the geographical area. The results obtained showed important differences between the high and low adherence levels, even when successive subgroup analyses were performed. There is a clear trend towards the abandonment of the Mediterranean lifestyle. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
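    The pooled percentages quoted above come from a random-effects meta-analysis of proportions. A minimal DerSimonian-Laird sketch follows (an illustrative simplification using the normal approximation on the raw proportion scale, which may differ from the paper's exact method; it assumes 0 < p < 1 in every study):

    ```python
    from math import sqrt

    def dersimonian_laird(props, ns):
        """Pool study proportions with a DerSimonian-Laird random-effects model.
        props: observed proportions per study; ns: sample sizes per study."""
        vs = [p * (1 - p) / n for p, n in zip(props, ns)]      # within-study variances
        w = [1 / v for v in vs]
        fixed = sum(wi * pi for wi, pi in zip(w, props)) / sum(w)
        q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, props))  # Cochran's Q
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(props) - 1)) / c)            # between-study variance
        w_star = [1 / (v + tau2) for v in vs]
        pooled = sum(wi * pi for wi, pi in zip(w_star, props)) / sum(w_star)
        se = sqrt(1 / sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # estimate, 95% CI
    ```

    With no between-study heterogeneity the estimate reduces to the fixed-effect (inverse-variance) pooled proportion.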

  12. Systematic study of polycrystalline flow during tension test of sheet 304 austenitic stainless steel at room temperature

    International Nuclear Information System (INIS)

    Muñoz-Andrade, Juan D.

    2013-01-01

    Through a systematic study, a mapping of the polycrystalline flow of sheet 304 austenitic stainless steel (ASS) during tension tests at constant crosshead velocity at room temperature was obtained. The main result establishes that the trajectories of crystals in the polycrystalline spatially extended system (PCSES) during the irreversible deformation process obey hyperbolic motion: the ratio between the expansion velocity of the field and the velocity of the field source is not constant, and the field lines of the crystal trajectories become curved; this accelerated motion is called hyperbolic motion. Such behaviour is assisted by dislocation dynamics and self-accommodation processes between crystals in the PCSES. Furthermore, by applying the quantum-mechanical and relativistic model proposed by Muñoz-Andrade, the activation energy for polycrystalline flow during the tension test of 304 ASS was calculated globally for each instant. In conclusion, it was established that mapping the polycrystalline flow is fundamental to describing the phenomenology and mechanics of irreversible deformation processes in an integral way

  13. Systematic study of polycrystalline flow during tension test of sheet 304 austenitic stainless steel at room temperature

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Andrade, Juan D., E-mail: jdma@correo.azc.uam.mx [Departamento de Materiales, División de Ciencias Básicas e Ingeniería, Universidad Autónoma Metropolitana Unidad Azcapotzalco, Av. San Pablo No. 180, Colonia Reynosa Tamaulipas, C.P. 02200, México Distrito Federal (Mexico)

    2013-12-16

    Through a systematic study, a mapping of the polycrystalline flow of sheet 304 austenitic stainless steel (ASS) during tension tests at constant crosshead velocity at room temperature was obtained. The main result establishes that the trajectories of crystals in the polycrystalline spatially extended system (PCSES) during the irreversible deformation process obey hyperbolic motion: the ratio between the expansion velocity of the field and the velocity of the field source is not constant, and the field lines of the crystal trajectories become curved; this accelerated motion is called hyperbolic motion. Such behaviour is assisted by dislocation dynamics and self-accommodation processes between crystals in the PCSES. Furthermore, by applying the quantum-mechanical and relativistic model proposed by Muñoz-Andrade, the activation energy for polycrystalline flow during the tension test of 304 ASS was calculated globally for each instant. In conclusion, it was established that mapping the polycrystalline flow is fundamental to describing the phenomenology and mechanics of irreversible deformation processes in an integral way.

  14. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance for a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, those associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in the solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.

  15. Physical examination tests of the shoulder: a systematic review and meta-analysis of diagnostic test performance.

    Science.gov (United States)

    Gismervik, Sigmund Ø; Drogset, Jon O; Granviken, Fredrik; Rø, Magne; Leivseth, Gunnar

    2017-01-25

    Physical examination tests of the shoulder (PETS) are clinical examination maneuvers designed to aid the assessment of shoulder complaints. Despite more than 180 PETS described in the literature, evidence of their validity and usefulness in diagnosing the shoulder is questioned. This meta-analysis aims to use diagnostic odds ratio (DOR) to evaluate how much PETS shift overall probability and to rank the test performance of single PETS in order to aid the clinician's choice of which tests to use. This study adheres to the principles outlined in the Cochrane guidelines and the PRISMA statement. A fixed effect model was used to assess the overall diagnostic validity of PETS by pooling DOR for different PETS with similar biomechanical rationale when possible. Single PETS were assessed and ranked by DOR. Clinical performance was assessed by sensitivity, specificity, accuracy and likelihood ratio. Six thousand nine-hundred abstracts and 202 full-text articles were assessed for eligibility; 20 articles were eligible and data from 11 articles could be included in the meta-analysis. All PETS for SLAP (superior labral anterior posterior) lesions pooled gave a DOR of 1.38 [1.13, 1.69]. The Supraspinatus test for any full thickness rotator cuff tear obtained the highest DOR of 9.24 (sensitivity was 0.74, specificity 0.77). Compression-Rotation test obtained the highest DOR (6.36) among single PETS for SLAP lesions (sensitivity 0.43, specificity 0.89) and Hawkins test obtained the highest DOR (2.86) for impingement syndrome (sensitivity 0.58, specificity 0.67). No single PETS showed superior clinical test performance. The clinical performance of single PETS is limited. However, when the different PETS for SLAP lesions were pooled, we found a statistical significant change in post-test probability indicating an overall statistical validity. 
We suggest that clinicians choose their PETS among those with the highest pooled DOR and to assess validity to their own specific clinical
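    The ranking metric used above can be illustrated with a short sketch: the diagnostic odds ratio is the ratio of the positive to the negative likelihood ratio. The figures below plug in the Supraspinatus test's reported sensitivity and specificity, so the result only approximates the paper's pooled DOR of 9.24, which was computed from the underlying 2x2 counts.

    ```python
    def diagnostic_odds_ratio(sensitivity, specificity):
        """DOR = LR+ / LR- = (sens / (1 - spec)) / ((1 - sens) / spec)."""
        lr_pos = sensitivity / (1 - specificity)
        lr_neg = (1 - sensitivity) / specificity
        return lr_pos / lr_neg

    # Supraspinatus test figures from the abstract: sensitivity 0.74, specificity 0.77
    dor = diagnostic_odds_ratio(0.74, 0.77)   # ~9.5 from the rounded summary values
    ```

    An uninformative test (sensitivity = specificity = 0.5) has DOR = 1, which is why a pooled DOR of 1.38 for SLAP tests indicates only a small shift in post-test probability.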

  16. Combinatorial QSAR modeling of chemical toxicants tested against Tetrahymena pyriformis.

    Science.gov (United States)

    Zhu, Hao; Tropsha, Alexander; Fourches, Denis; Varnek, Alexandre; Papa, Ester; Gramatica, Paola; Oberg, Tomas; Dao, Phuong; Cherkasov, Artem; Tetko, Igor V

    2008-04-01

    Selecting the most rigorous quantitative structure-activity relationship (QSAR) approaches is of great importance in the development of robust and predictive models of chemical toxicity. To address this issue in a systematic way, we have formed an international virtual collaboratory consisting of six independent groups with shared interests in computational chemical toxicology. We have compiled an aqueous toxicity data set containing 983 unique compounds tested in the same laboratory over a decade against Tetrahymena pyriformis. A modeling set including 644 compounds was selected randomly from the original set and distributed to all groups, which used their own QSAR tools for model development. The remaining 339 compounds in the original set (external set I) as well as 110 additional compounds (external set II) published recently by the same laboratory (after this computational study was already in progress) were used as two independent validation sets to assess the external predictive power of individual models. In total, our virtual collaboratory has developed 15 different types of QSAR models of aquatic toxicity for the training set. The internal prediction accuracy for the modeling set ranged from 0.76 to 0.93 as measured by the leave-one-out cross-validation correlation coefficient (Q^2). The prediction accuracy for the external validation sets I and II ranged from 0.71 to 0.85 (linear regression coefficient R_I^2) and from 0.38 to 0.83 (linear regression coefficient R_II^2), respectively. The use of an applicability domain threshold implemented in most models generally improved the external prediction accuracy but at the same time led to a decrease in chemical space coverage. Finally, several consensus models were developed by averaging the predicted aquatic toxicity for every compound using all 15 models, with or without taking into account their respective applicability domains. We find that consensus models afford higher prediction accuracy for the
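    The consensus step described above, averaging per-compound predictions across the individual models and optionally restricting to models whose applicability domain covers the compound, can be sketched as follows (an illustrative simplification, not the collaboratory's actual code):

    ```python
    def consensus_prediction(predictions, in_domain=None):
        """Average per-model predicted toxicities for one compound.
        predictions: one value per model; in_domain: optional per-model flags
        indicating whether the compound falls inside that model's applicability domain."""
        if in_domain is not None:
            predictions = [p for p, ok in zip(predictions, in_domain) if ok]
        if not predictions:
            return None   # compound outside every model's domain: no consensus value
        return sum(predictions) / len(predictions)
    ```

    The domain-restricted variant mirrors the trade-off reported in the abstract: predictions tend to improve, but some compounds receive no consensus value at all.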

  17. Prediction models for successful external cephalic version: a systematic review.

    Science.gov (United States)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected the different predictor variables together with their reported significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed using discrimination and calibration with internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.

  18. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  19. The reliability of physical examination tests for the diagnosis of anterior cruciate ligament rupture--A systematic review.

    Science.gov (United States)

    Lange, Toni; Freiberg, Alice; Dröge, Patrik; Lützner, Jörg; Schmitt, Jochen; Kopkow, Christian

    2015-06-01

    Systematic literature review. Despite their frequent application in routine care, a systematic review on the reliability of clinical examination tests to evaluate the integrity of the ACL is missing. To summarize and evaluate intra- and interrater reliability research on physical examination tests used for the diagnosis of ACL tears, a comprehensive systematic literature search was conducted in MEDLINE, EMBASE and AMED until May 30th 2013. Studies were included if they assessed the intra- and/or interrater reliability of physical examination tests for the integrity of the ACL. Methodological quality was evaluated with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. 110 hits were achieved, of which seven articles finally met the inclusion criteria. These studies examined the reliability of four physical examination tests. Intrarater reliability was assessed in three studies and ranged from fair to almost perfect (Cohen's k = 0.22-1.00). Interrater reliability was assessed in all included studies and ranged from slight to almost perfect (Cohen's k = 0.02-0.81). The Lachman test is the physical test with the highest intrarater reliability (Cohen's k = 1.00), and the Lachman test performed in prone position the test with the highest interrater reliability (Cohen's k = 0.81). Included studies were partly of low methodological quality. A meta-analysis could not be performed due to the heterogeneity in study populations, reliability measures and methodological quality of included studies. Systematic investigations on the reliability of physical examination tests to assess the integrity of the ACL are scarce and of varying methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
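    The Cohen's kappa values summarized above come from standard 2x2 agreement tables. As a minimal illustration (not the review's own computation), kappa corrects the observed agreement between two raters for the agreement expected by chance:

    ```python
    def cohens_kappa(a, b, c, d):
        """Cohen's kappa from a 2x2 agreement table between two raters:
        a = both positive, b = rater1+/rater2-, c = rater1-/rater2+, d = both negative."""
        n = a + b + c + d
        po = (a + d) / n                                       # observed agreement
        pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
        return (po - pe) / (1 - pe)
    ```

    Perfect agreement gives kappa = 1, agreement no better than chance gives kappa = 0, which is why values near 0.02 in the review indicate only slight interrater reliability.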

  20. Reduction in uptake of PSA tests following decision aids: systematic review of current aids and their evaluations.

    NARCIS (Netherlands)

    Evans, R.; Edwards, A.; Brett, J.; Bradburn, M.; Watson, E.; Austoker, J.; Elwyn, G.

    2005-01-01

    A man's decision to have a prostate-specific antigen (PSA) test should be an informed one. We undertook a systematic review to identify and appraise PSA decision aids and evaluations. We searched 15 electronic databases and hand-searched key journals. We also contacted key authors and organisations.

  1. A systematic review of the diagnostic accuracy of provocative tests of the neck for diagnosing cervical radiculopathy.

    NARCIS (Netherlands)

    Rubinstein, S.M.; Pool, J.J.; van Tulder, M.W.; Riphagen, II; de Vet, H.C.W.

    2007-01-01

    Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A

  2. The Technical Quality of Test Items Generated Using a Systematic Approach to Item Writing.

    Science.gov (United States)

    Siskind, Theresa G.; Anderson, Lorin W.

    The study was designed to examine the similarity of response options generated by different item writers using a systematic approach to item writing. The similarity of response options to student responses for the same item stems presented in an open-ended format was also examined. A non-systematic (subject matter expertise) approach and a…

  3. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))

    1990-11-01

    The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since it relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for ν masses may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.

  4. Experimentally testing the standard cosmological model

    International Nuclear Information System (INIS)

    Schramm, D.N.

    1990-11-01

    The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since it relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for ν masses may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.

  5. Real-time screening tests for functional alignment of the trunk and lower extremities in adolescents – a systematic review

    DEFF Research Database (Denmark)

    Junge, Tina; Wedderkopp, N; Juul-Kristensen, B

    mechanisms resulting in ACL injuries (Hewett, 2010). Prevention may therefore depend on identifying these potential injury risk factors. Screening tools must thus include patterns of typical movements in sport and leisure time activities, consisting of high-load and multi-directional tests, focusing...... on functional alignment. In large epidemiological studies these tests must only require a minimum of time and technical equipment. Objective The purpose of the study was to accomplish a systematic review of screening tests for identification of adolescents at increased risk of knee injuries, focusing...... of knee alignment, there is a further need to evaluate reliability and validity of real-time functional alignment tests, before they can be used as screening tools for prevention of knee injuries among adolescents. Still, the next step in this systematic review is to evaluate the quality and feasibility...

  6. Deterministic Modeling of the High Temperature Test Reactor

    International Nuclear Information System (INIS)

    Ortensi, J.; Cogliati, J.J.; Pope, M.A.; Ferrer, R.M.; Ougouag, A.M.

    2010-01-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Power (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control
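
    A 2-3% bias in the core multiplication factor is often quoted as a reactivity difference in pcm when comparing codes against experiment. A small sketch of that conversion, using purely hypothetical k-eff values rather than the benchmark's results:

    ```python
    def reactivity_pcm(k_eff):
        """Reactivity of a configuration in pcm (1 pcm = 1e-5 dk/k)."""
        return (k_eff - 1.0) / k_eff * 1e5

    def bias_pcm(k_calculated, k_measured):
        """Calculated-minus-measured reactivity difference in pcm."""
        return reactivity_pcm(k_calculated) - reactivity_pcm(k_measured)

    # hypothetical values: the code overpredicts a measured critical core (k = 1.0) by 2.5%
    print(round(bias_pcm(1.025, 1.000)))  # → 2439
    ```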

  7. Aerospace structural design process improvement using systematic evolutionary structural modeling

    Science.gov (United States)

    Taylor, Robert Michael

    2000-10-01

    A multidisciplinary team tasked with an aircraft design problem must understand the problem requirements and metrics to produce a successful design. This understanding entails not only knowledge of what these requirements and metrics are, but also how they interact, which are most important (to the customer as well as to aircraft performance), and who in the organization can provide pertinent knowledge for each. In recent years, product development researchers and organizations have developed and successfully applied a variety of tools such as Quality Function Deployment (QFD) to coordinate multidisciplinary team members. The effectiveness of these methods, however, depends on the quality and fidelity of the information that team members can input. In conceptual aircraft design, structural information is of lower quality compared to aerodynamics or performance because it is based on experience rather than theory. This dissertation shows how advanced structural design tools can be used in a multidisciplinary team setting to improve structural information generation and communication through a systematic evolution of structural detail. When applied to conceptual design, finite element-based structural design tools elevate structural information to the same level as other computationally supported disciplines. This improved ability to generate and communicate structural information enables a design team to better identify and meet structural design requirements, consider producibility issues earlier, and evaluate structural concepts. A design process experiment of a wing structural layout in collaboration with an industrial partner illustrates and validates the approach.

  8. Measurement properties of the upright motor control test for adults with stroke: a systematic review.

    Science.gov (United States)

    Gorgon, Edward James R; Lazaro, Rolando T

    2016-01-01

    The Upright Motor Control Test (UMCT) has been used in clinical practice and research to assess functional strength of the hemiparetic lower limb in adults with stroke. It is unclear if evidence is sufficient to warrant its use. The purpose of this systematic review was to synthesize available evidence on the measurement properties of the UMCT for stroke rehabilitation. Electronic databases that indexed biomedical literature were systematically searched from inception until October 2015 (week 4): Embase, PubMed, Web of Science, CINAHL, PEDro, Cochrane Library, Scopus, ScienceDirect, SPORTDiscus, LILACS, DOAJ, and Google Scholar. All studies that had used the UMCT in the time period covered underwent hand searching for any additional study. Observational studies involving adults with stroke that explored any measurement property of the UMCT were included. The COnsensus-based Standards for the selection of health Measurement INstruments was used to assess the methodological quality of included studies. The CanChild Outcome Measures Rating Form was used for extracting data on measurement properties and clinical utility. The search yielded three methodological studies that addressed criterion-related validity and construct validity. Two studies of fair methodological quality demonstrated moderate-level evidence that Knee Extension and Knee Flexion subtest scores were predictive of community-level and household-level ambulation. One study of fair methodological quality provided limited-level evidence for the correlation of Knee Extension subtest scores with a laboratory measure of ground reaction forces. No published studies formally assessed reliability, responsiveness, or clinical utility. Limited information on responsiveness and clinical utility dimensions could be inferred from the included studies. The UMCT is a practical assessment tool for voluntary control or functional strength of the hemiparetic lower limb in standing in adults with stroke. Although different

  9. Retention in HIV Care between Testing and Treatment in Sub-Saharan Africa: A Systematic Review

    Science.gov (United States)

    Rosen, Sydney; Fox, Matthew P.

    2011-01-01

    Background Improving the outcomes of HIV/AIDS treatment programs in resource-limited settings requires successful linkage of patients testing positive for HIV to pre–antiretroviral therapy (ART) care and retention in pre-ART care until ART initiation. We conducted a systematic review of pre-ART retention in care in Africa. Methods and Findings We searched PubMed, ISI Web of Knowledge, conference abstracts, and reference lists for reports on the proportion of adult patients retained between any two points between testing positive for HIV and initiating ART in sub-Saharan African HIV/AIDS care programs. Results were categorized as Stage 1 (from HIV testing to receipt of CD4 count results or clinical staging), Stage 2 (from staging to ART eligibility), or Stage 3 (from ART eligibility to ART initiation). Medians (ranges) were reported for the proportions of patients retained in each stage. We identified 28 eligible studies. The median proportion retained in Stage 1 was 59% (35%–88%); Stage 2, 46% (31%–95%); and Stage 3, 68% (14%–84%). Most studies reported on only one stage; none followed a cohort of patients through all three stages. Enrollment criteria, terminology, end points, follow-up, and outcomes varied widely and were often poorly defined, making aggregation of results difficult. Synthesis of findings from multiple studies suggests that fewer than one-third of patients testing positive for HIV and not yet eligible for ART when diagnosed are retained continuously in care, though this estimate should be regarded with caution because of review limitations. Conclusions Studies of retention in pre-ART care report substantial loss of patients at every step, starting with patients who do not return for their initial CD4 count results and ending with those who do not initiate ART despite eligibility. Better health information systems that allow patients to be tracked between service delivery points are needed to properly evaluate pre-ART loss to care, and
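
    Chaining the reported stage medians illustrates the review's headline estimate that fewer than one-third of patients remain in care continuously. Note that this simple multiplication assumes stage retentions are independent, a simplification the review itself cautions against, since no study followed one cohort through all three stages:

    ```python
    # median proportion retained at each stage, as reported in the review
    stage_medians = {"Stage 1": 0.59, "Stage 2": 0.46, "Stage 3": 0.68}

    cumulative = 1.0
    for stage, retained in stage_medians.items():
        cumulative *= retained
        print(f"{stage}: median {retained:.0%} retained, {cumulative:.1%} cumulative")
    # final cumulative ≈ 18.5%, i.e. fewer than one-third retained throughout
    ```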

  10. Introducing malaria rapid diagnostic tests in private medicine retail outlets: A systematic literature review.

    Directory of Open Access Journals (Sweden)

    Theodoor Visser

    Many patients with malaria-like symptoms seek treatment in private medicine retail outlets (PMRs) that distribute malaria medicines but do not traditionally provide diagnostic services, potentially leading to overtreatment with antimalarial drugs. To achieve universal access to prompt parasite-based diagnosis, many malaria-endemic countries are considering scaling up malaria rapid diagnostic tests (RDTs) in these outlets, an intervention that may require legislative changes and major investments in supporting programs and infrastructure. This review identifies studies that introduced malaria RDTs in PMRs and examines study outcomes and success factors to inform scale-up decisions. Published and unpublished studies that introduced malaria RDTs in PMRs were systematically identified and reviewed. Literature published before November 2016 was searched in six electronic databases, and unpublished studies were identified through personal contacts and stakeholder meetings. Outcomes were extracted from publications or provided by principal investigators. Six published and six unpublished studies were found. Most studies took place in sub-Saharan Africa and were small-scale pilots of RDT introduction in drug shops or pharmacies. None of the studies assessed large-scale implementation in PMRs. RDT uptake varied widely from 8%-100%. Provision of artemisinin-based combination therapy (ACT) for patients testing positive ranged from 30%-99%, and was more than 85% in five studies. Of those testing negative, provision of antimalarials varied from 2%-83% and was less than 20% in eight studies. Longer provider training, lower RDT retail prices and frequent supervision appeared to have a positive effect on RDT uptake and provider adherence to test results. Performance of RDTs by PMR vendors was generally good, but disposal of medical waste and referral of patients to public facilities were common challenges. Expanding services of PMRs to include malaria diagnostic

  11. Prediction models for successful external cephalic version: a systematic review

    NARCIS (Netherlands)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M.; Molkenboer, Jan F. M.; van der Post, Joris A. M.; Mol, Ben W.; Kok, Marjolein

    2015-01-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015.

  12. Bayesian Network Models in Cyber Security: A Systematic Review

    NARCIS (Netherlands)

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security especially due to their capability to overcome data limitations. This is also instantiated by the growth of BN models development in cyber security. However, a comprehensive comparison and analysis of these

  13. Testing an integral conceptual model of frailty.

    Science.gov (United States)

    Gobbens, Robbert J; van Assen, Marcel A; Luijkx, Katrien G; Schols, Jos M

    2012-09-01

    This paper is a report of a study conducted to test three hypotheses derived from an integral conceptual model of frailty.   The integral model of frailty describes the pathway from life-course determinants to frailty to adverse outcomes. The model assumes that life-course determinants and the three domains of frailty (physical, psychological, social) affect adverse outcomes, the effect of disease(s) on adverse outcomes is mediated by frailty, and the effect of frailty on adverse outcomes depends on the life-course determinants. In June 2008 a questionnaire was sent to a sample of community-dwelling people, aged 75 years and older (n = 213). Life-course determinants and frailty were assessed using the Tilburg frailty indicator. Adverse outcomes were measured using the Groningen activity restriction scale, the WHOQOL-BREF and questions regarding healthcare utilization. The effect of seven self-reported chronic diseases was examined. Life-course determinants, chronic disease(s), and frailty together explain a moderate to large part of the variance of the seven continuous adverse outcomes (26-57%). All these predictors together explained a significant part of each of the five dichotomous adverse outcomes. The effect of chronic disease(s) on all 12 adverse outcomes was mediated at least partly by frailty. The effect of frailty domains on adverse outcomes did not depend on life-course determinants. Our finding that the adverse outcomes are differently and uniquely affected by the three domains of frailty (physical, psychological, social), and life-course determinants and disease(s), emphasizes the importance of an integral conceptual model of frailty. © 2011 Blackwell Publishing Ltd.

  14. Do negative screening test results cause false reassurance? A systematic review.

    Science.gov (United States)

    Cooper, Grace C; Harvie, Michelle N; French, David P

    2017-11-01

    It has been suggested that receiving a negative screening test result may cause false reassurance or have a 'certificate of health effect'. False reassurance in those receiving a negative screening test result may result in them wrongly believing themselves to be at lower risk of the disease, and consequently less likely to engage in health-related behaviours that would lower their risk. The present systematic review aimed to identify the evidence regarding false reassurance effects due to negative screening test results in adults (over 18 years) screened for the presence of a disease or its precursors, where disease or precursors are linked to lifestyle behaviours. MEDLINE and PsycINFO were searched for trials that compared a group who had received negative screening results to an unscreened control group. The following outcomes were considered as markers of false reassurance: perceived risk of disease; anxiety and worry about disease; health-related behaviours or intention to change health-related behaviours (i.e., smoking, diet, physical activity, and alcohol consumption); self-rated health status. Nine unique studies were identified, reporting 55 measures in relation to the outcomes considered. Outcomes were measured at various time points from immediately following screening to up to 11 years after screening. Despite considerable variation in outcome measures used and timing of measurements, effect sizes for comparisons between participants who received negative screening test results and control participants were typically small with few statistically significant differences. There was evidence of high risk of bias, and measures of behaviours employed were often not valid. The limited evidence base provided little evidence of false reassurance following a negative screening test result on any of the four outcomes examined. False reassurance should not be considered a significant harm of screening, but further research is warranted.

  15. Pion interferometric tests of transport models

    Energy Technology Data Exchange (ETDEWEB)

    Padula, S.S.; Gyulassy, M.; Gavin, S. (Lawrence Berkeley Lab., CA (USA). Nuclear Science Div.)

    1990-01-08

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving Gaussian and inside-outside phase space distributions are considered. (orig.).

  16. Pion interferometric tests of transport models

    International Nuclear Information System (INIS)

    Padula, S.S.; Gyulassy, M.; Gavin, S.

    1990-01-01

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving Gaussian and inside-outside phase space distributions are considered. (orig.)

  17. Experimental Tests of the Algebraic Cluster Model

    Science.gov (United States)

    Gai, Moshe

    2018-02-01

    The Algebraic Cluster Model (ACM) of Bijker and Iachello, proposed already in 2000, has recently been applied to 12C and 16O with much success. We review the current status in 12C with the outstanding observation of the ground state rotational band composed of the spin-parity states 0+, 2+, 3-, 4± and 5-. The observation of the 4± parity doublet is a characteristic of a (tri-atomic) molecular configuration in which the three alpha particles are arranged in an equilateral triangular configuration of a symmetric spinning top. We discuss future measurements with electron scattering, 12C(e,e'), to test the predicted B(Eλ) values of the ACM.

  18. Models of care in nursing: a systematic review.

    Science.gov (United States)

    Fernandez, Ritin; Johnson, Maree; Tran, Duong Thuy; Miranda, Charmaine

    2012-12-01

    This review investigated the effect of the various models of nursing care delivery using the diverse levels of nurses on patient and nursing outcomes. All published studies that investigated patient and nursing outcomes were considered. Studies were included if the nursing delivery models only included nurses with varying skill levels. A literature search was performed using the following databases: Medline (1985-2011), CINAHL (1985-2011), EMBASE (1985 to current) and the Cochrane Controlled Studies Register (Issue 3, 2011 of Cochrane Library). In addition, the reference lists of relevant studies and conference proceedings were also scrutinised. Two reviewers independently assessed the eligibility of the studies for inclusion in the review, the methodological quality and extracted details of eligible studies. Data were analysed using the RevMan software (Nordic Cochrane Centre, Copenhagen, Denmark). Fourteen studies were included in this review. The results reveal that implementation of the team nursing model of care resulted in significantly decreased incidence of medication errors and adverse intravenous outcomes, as well as lower pain scores among patients; however, there was no effect of this model of care on the incidence of falls. Wards that used a hybrid model demonstrated significant improvement in quality of patient care, but no difference in incidence of pressure areas or infection rates. There were no significant differences in nursing outcomes relating to role clarity, job satisfaction and nurse absenteeism rates between any of the models of care. Based on the available evidence, a predominance of team nursing within the comparisons is suggestive of its popularity. Patient outcomes, nurse satisfaction, absenteeism and role clarity/confusion did not differ across model comparisons. Little benefit was found within primary nursing comparisons and the cost effectiveness of team nursing over other models remains debatable. Nonetheless, team nursing does
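
    The pooling behind results such as the medication-error reduction is typically an inverse-variance weighted average of study effects (the fixed-effect method implemented in RevMan). A self-contained sketch on invented log risk ratios, not the review's data:

    ```python
    import math

    def pool_fixed_effect(estimates, std_errors):
        """Inverse-variance fixed-effect pooling of study effect estimates."""
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # invented log risk ratios (and standard errors) for medication errors
    # under team nursing vs. a comparator; NOT data from the review
    log_rr = [-0.35, -0.10, -0.22]
    se = [0.15, 0.20, 0.10]
    pooled, pooled_se = pool_fixed_effect(log_rr, se)
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled RR {math.exp(pooled):.2f}, 95% CI {math.exp(low):.2f} to {math.exp(high):.2f}")
    ```

    The pooled estimate always lies between the smallest and largest study estimates, pulled toward the most precise (smallest standard error) study.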

  19. Systematic modeling for free stators of rotary piezoelectric ultrasonic motors

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    An equivalent circuit model with complex elements is presented in this paper to describe the free stator model of traveling wave piezoelectric motors. The mechanical, dielectric and piezoelectric losses associated with the vibrator are considered by introducing the imaginary part to the equivalent...... circuit elements. The determination of the complex circuit elements is performed by using a new simple iterative method. The presented method uses information about five points of the stator admittance measurements. The accuracy of the model in fitting to the experimental data is verified by using...
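
    A common concrete form of such an equivalent circuit is the Butterworth-Van Dyke network: a motional R-L-C branch in parallel with the clamped capacitance, with losses representable by complex element values. The sketch below uses hypothetical element values, not those identified in the paper:

    ```python
    import math

    def bvd_admittance(f, c0, r_m, l_m, c_m):
        """Admittance of a Butterworth-Van Dyke circuit at frequency f (Hz).
        Dielectric/mechanical losses can be modelled with complex element values."""
        jw = 1j * 2.0 * math.pi * f
        z_motional = r_m + jw * l_m + 1.0 / (jw * c_m)  # series R-L-C branch
        return jw * c0 + 1.0 / z_motional               # in parallel with C0

    # hypothetical element values for a stator mode near 40 kHz
    c0, r_m, l_m, c_m = 5e-9, 50.0, 0.1, 158e-12
    f_s = 1.0 / (2.0 * math.pi * math.sqrt(l_m * c_m))  # series resonance
    y_res = abs(bvd_admittance(f_s, c0, r_m, l_m, c_m))
    print(f"f_s = {f_s / 1e3:.1f} kHz, |Y(f_s)| = {y_res:.4f} S")
    ```

    Fitting such a model to measured admittance points, as the paper does iteratively, amounts to choosing the (possibly complex) element values that minimise the misfit between this expression and the measurements.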

  20. Excised Abdominoplasty Material as a Systematic Plastic Surgical Training Model

    Directory of Open Access Journals (Sweden)

    M. Erol Demirseren

    2012-01-01

    Achieving a level of technical skill and confidence in surgical operations is the main goal of plastic surgical training. Operating rooms have traditionally been accepted as the practical teaching venues of the apprenticeship model. However, the increased patient population, time constraints, and ethical and legal considerations have made practical work outside the operating room a necessity in plastic surgical training. There are several plastic surgical teaching models and simulators that are very useful in practical training outside the operating room and in the evaluation of plastic surgery residents. The full-thickness skin with its vascular network excised in abdominoplasty procedures is an easily obtainable real human tissue that could be used as a training model in plastic surgery.

  1. Two Bayesian tests of the GLOMOsys Model.

    Science.gov (United States)

    Field, Sarahanne M; Wagenmakers, Eric-Jan; Newell, Ben R; Zeelenberg, René; van Ravenzwaaij, Don

    2016-12-01

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
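
    For readers unfamiliar with Bayes factors: BF01 expresses how much more likely the observed data are under the null than under the alternative. One rough route to such a number is the BIC approximation (Wagenmakers, 2007), sketched here on made-up typicality ratings rather than the study's data:

    ```python
    import math

    def bic_gaussian(rss, n, n_params):
        """BIC for a Gaussian model with residual sum of squares rss."""
        return n * math.log(rss / n) + n_params * math.log(n)

    def bf01_two_groups(g1, g2):
        """Approximate BF01 (evidence for 'no group difference') via the BIC approximation."""
        mean = lambda xs: sum(xs) / len(xs)
        rss = lambda xs, m: sum((x - m) ** 2 for x in xs)
        pooled = g1 + g2
        n = len(pooled)
        rss_null = rss(pooled, mean(pooled))             # one common mean
        rss_alt = rss(g1, mean(g1)) + rss(g2, mean(g2))  # one mean per group
        return math.exp((bic_gaussian(rss_alt, n, 2) - bic_gaussian(rss_null, n, 1)) / 2)

    # made-up typicality ratings for globally vs. locally primed participants
    global_prime = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0]
    local_prime = [5.0, 5.1, 4.9, 5.1, 4.9, 5.0]
    bf01 = bf01_two_groups(global_prime, local_prime)
    print(round(bf01, 2))  # → 3.46, i.e. the data favour the null
    ```

    Here the group means are identical by construction, so BF01 reduces to sqrt(n) = sqrt(12) ≈ 3.46. This BIC shortcut is only a coarse stand-in for the full Bayes factor analyses used in the study.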

  2. Experimental tests of the standard model

    International Nuclear Information System (INIS)

    Nodulman, L.

    1998-01-01

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the "Universal Fermi Interaction" description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged current decays of any flavor are covered

  3. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  4. Computerized Classification Testing with the Rasch Model

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
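    The sequential testing the abstract refers to can be sketched with Wald's SPRT under a Rasch model. This is an illustrative toy, not the Eggen & Straetmans procedure: the item difficulties, the cut points theta0/theta1, and the error rates alpha/beta below are all invented.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt(responses, items_b, theta0=-0.5, theta1=0.5, alpha=0.05, beta=0.05):
    """Wald's SPRT on the log-likelihood ratio of theta1 vs theta0.
    Returns 'master', 'non-master', or 'continue' (test more items)."""
    lo = math.log(beta / (1 - alpha))
    hi = math.log((1 - beta) / alpha)
    llr = 0.0
    for x, b in zip(responses, items_b):
        p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= hi:
            return "master"
        if llr <= lo:
            return "non-master"
    return "continue"

print(sprt([1] * 12, [0.0] * 12))  # master
```

    The classification can stop as soon as a boundary is crossed, which is why such CCTs need fewer items than estimation-based CATs for a fixed error rate.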

  5. External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2014-01-01

    Full Text Available Background. Evidence rankings do not consider equally internal validity (IV), external validity (EV), and model validity (MV) for clinical studies, including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias-scoring from inception to January 2013. Tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool for evaluating the quality of EV/MV in research that is more sensitive to CAM/IHC research. Conclusion. Improved reporting on EV can help produce information that will guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of their research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about "real-world" consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on an equal footing, will better guide clinical decision making.

  6. The application of the heuristic-systematic processing model to treatment decision making about prostate cancer.

    Science.gov (United States)

    Steginga, Suzanne K; Occhipinti, Stefano

    2004-01-01

    The study investigated the utility of the Heuristic-Systematic Processing Model as a framework for the investigation of patient decision making. A total of 111 men recently diagnosed with localized prostate cancer were assessed using Verbal Protocol Analysis and self-report measures. Study variables included men's use of nonsystematic and systematic information processing, desire for involvement in decision making, and the individual differences of health locus of control, tolerance of ambiguity, and decision-related uncertainty. Most men (68%) preferred that decision making be shared equally between them and their doctor. Men's use of the expert opinion heuristic was related to men's verbal reports of decisional uncertainty and having a positive orientation to their doctor and medical care; a desire for greater involvement in decision making was predicted by a high internal locus of health control. Trends were observed for systematic information processing to increase when the heuristic strategy used was negatively affect-laden and when men were uncertain about the probabilities for cure and side effects. There was a trend for decreased systematic processing when the expert opinion heuristic was used. Findings were consistent with the Heuristic-Systematic Processing Model and suggest that this model has utility for future research in applied decision making about health.

  7. KRAS mutation testing of tumours in adults with metastatic colorectal cancer: a systematic review and cost-effectiveness analysis.

    Science.gov (United States)

    Westwood, Marie; van Asselt, Thea; Ramaekers, Bram; Whiting, Penny; Joore, Manuela; Armstrong, Nigel; Noake, Caro; Ross, Janine; Severens, Johan; Kleijnen, Jos

    2014-10-01

    Bowel cancer is the third most common cancer in the UK. Most bowel cancers are initially treated with surgery, but around 17% spread to the liver. When this happens, sometimes the liver tumour can be treated surgically, or chemotherapy may be used to shrink the tumour to make surgery possible. Kirsten rat sarcoma viral oncogene (KRAS) mutations make some tumours less responsive to treatment with biological therapies such as cetuximab. There are a variety of tests available to detect these mutations. These vary in the specific mutations that they detect, the amount of mutation they detect, the amount of tumour cells needed, the time to give a result, the error rate and cost. To compare the performance and cost-effectiveness of KRAS mutation tests in differentiating adults with metastatic colorectal cancer whose metastases are confined to the liver and are unresectable and who may benefit from first-line treatment with cetuximab in combination with standard chemotherapy from those who should receive standard chemotherapy alone. Thirteen databases, including MEDLINE and EMBASE, research registers and conference proceedings were searched to January 2013. Additional data were obtained from an online survey of laboratories participating in the UK National External Quality Assurance Scheme pilot for KRAS mutation testing. A systematic review of the evidence was carried out using standard methods. Randomised controlled trials were assessed for quality using the Cochrane risk of bias tool. Diagnostic accuracy studies were assessed using the QUADAS-2 tool. There were insufficient data for meta-analysis. For accuracy studies we calculated sensitivity and specificity together with 95% confidence intervals (CIs). Survival data were summarised as hazard ratios and tumour response data were summarised as relative risks, with 95% CIs. The health economic analysis considered the long-term costs and quality-adjusted life-years associated with different tests followed by treatment
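    The sensitivity and specificity with 95% CIs reported for the accuracy studies are derived from 2x2 tables of test results against a reference standard. A minimal sketch, with invented counts (not data from the review), using the Wilson score interval:

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Hypothetical 2x2 accuracy table: true/false positives and negatives.
tp, fn, tn, fp = 90, 10, 80, 20
sens = tp / (tp + fn)   # proportion of mutation carriers detected
spec = tn / (tn + fp)   # proportion of non-carriers correctly negative
print(round(sens, 2), round(spec, 2))        # 0.9 0.8
print(wilson_ci(tp, tp + fn))                # 95% CI around sensitivity
```

    The interval narrows with the number of reference-standard cases, which is why small accuracy studies carry wide CIs.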

  8. A Systematic Review of Evidence for the Clubhouse Model of Psychosocial Rehabilitation

    OpenAIRE

    McKay, Colleen; Nugent, Katie L.; Johnsen, Matthew; Eaton, William W.; Lidz, Charles W.

    2016-01-01

    The Clubhouse Model has been in existence for over sixty-five years; however, a review that synthesizes the literature on the model is needed. The current study makes use of the existing research to conduct a systematic review of articles providing a comprehensive understanding of what is known about the Clubhouse Model, to identify the best evidence available, as well as areas that would benefit from further study. Findings are summarized and evidence is classified by outcome domains. Fifty-...

  9. A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population

    Directory of Open Access Journals (Sweden)

    Sujitkumar S. Hiwale

    2017-12-01

    Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors in the Indian population, with a general tendency to overestimate fetal weight in the LBW category and underestimate it in the HBW category. We also observed that these models have a limited ability to predict babies at risk of either low or high birth weight. It is recommended that clinicians consider all these factors when interpreting the estimated weight given by the existing models.
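    Systematic and random error in weight-estimation studies are conventionally reported as the mean and SD of the signed percentage error. A short illustration with invented weights (not data from this evaluation):

```python
import statistics

def error_profile(estimated, actual):
    """Systematic error = mean signed percentage error;
    random error = SD of the signed percentage errors."""
    pe = [100 * (e - a) / a for e, a in zip(estimated, actual)]
    return statistics.mean(pe), statistics.stdev(pe)

# Hypothetical estimated vs actual birth weights in grams.
sys_err, rand_err = error_profile([2600, 3100, 3450], [2400, 3000, 3600])
print(round(sys_err, 1), round(rand_err, 1))  # 2.5 6.3
```

    A positive mean percentage error indicates overestimation on average; the SD captures scatter that remains even after any systematic bias is corrected.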

  10. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  11. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  12. The Chinese translations of Alcohol Use Disorders Identification Test (AUDIT) in China: a systematic review.

    Science.gov (United States)

    Li, Qing; Babor, Thomas F; Hao, Wei; Chen, Xinguang

    2011-01-01

    To systematically review the literature on the Chinese translations of the Alcohol Use Disorders Identification Test (AUDIT) and their cross-cultural applicability in Chinese language populations. We identified peer-reviewed articles published in English (n = 10) and in Chinese (n = 11) from 1980 to September 2009, with key words China, Chinese and AUDIT among PubMed, EBSCO, PsycInfo, FirstSearch electronic databases and two Chinese databases. Five teams from Beijing, Tibet, Taiwan and Hong Kong reported their region-specific translation procedures, cultural adaptations, validity (0.93-0.95 in two versions) and reliability (0.63-0.99). These Chinese translations and short versions demonstrated relatively high sensitivity (0.880-0.997) and moderate specificity (0.709-0.934) for hazardous/harmful drinking and alcohol dependence, but low specificity for alcohol dependence among Min-Nan Taiwanese (0.58). The AUDIT and its adaptations were most utilized in workplace- and hospital-settings for screening and brief intervention. However, they were under-utilized in population-based surveys, primary care settings, and among women, adolescents, rural-to-urban migrants, the elderly and minorities. Among 12 studies from mainland China, four included both women and men, and only one in Tibet was published in English. There is a growing amount of psychometric, epidemiologic and treatment research using Chinese translations of the AUDIT, much of it still unavailable in the English-language literature. Given the increase in burden of disease and injury attributable to alcohol use in the Western Pacific region, the use of an internationally comparable instrument (such as the AUDIT) in research with Chinese populations presents a unique opportunity to expand clinical and epidemiologic knowledge about alcohol problem epidemics.

  13. POC CD4 Testing Improves Linkage to HIV Care and Timeliness of ART Initiation in a Public Health Approach: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Lara Vojnov

    Full Text Available CD4 cell count is an important test in HIV programs for baseline risk assessment, monitoring of ART where viral load is not available, and, in many settings, antiretroviral therapy (ART) initiation decisions. However, access to CD4 testing is limited, in part due to the centralized conventional laboratory network. Point-of-care (POC) CD4 testing has the potential to address some of the challenges of centralized CD4 testing and delays in delivery of timely testing and ART initiation. We conducted a systematic review and meta-analysis to identify the extent to which POC improves linkages to HIV care and timeliness of ART initiation. We searched two databases and four conference sites between January 2005 and April 2015 for studies reporting test turnaround times, proportion of results returned, and retention associated with the use of point-of-care CD4. Random-effects models were used to estimate pooled risk ratios, pooled proportions, and 95% confidence intervals. We identified 30 eligible studies, most of which were completed in Africa. Test turnaround times were reduced with the use of POC CD4. The time from HIV diagnosis to CD4 test was reduced from 10.5 days with conventional laboratory-based testing to 0.1 days with POC CD4 testing. Retention along several steps of the treatment initiation cascade was significantly higher with POC CD4 testing, notably from HIV testing to CD4 testing, receipt of results, and pre-CD4-test retention (all p < 0.001). Furthermore, retention between CD4 testing and ART initiation increased with POC CD4 testing compared to conventional laboratory-based testing (p = 0.01). We also carried out a non-systematic review of the literature, observing that POC CD4 increased projected life expectancy, was cost-effective, and was acceptable. POC CD4 technologies reduce the time and increase patient retention along the testing and treatment cascade compared to conventional laboratory-based testing. POC CD4 is, therefore, a useful tool
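    The pooled risk ratios with 95% CIs mentioned in this record come from random-effects models. A compact sketch of DerSimonian-Laird pooling on the log scale, with invented per-study values (not the review's data):

```python
import math

def random_effects_rr(log_rrs, variances):
    """DerSimonian-Laird random-effects pooling of log risk ratios.
    Returns the pooled risk ratio and its 95% confidence interval."""
    w = [1 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rrs) - 1)) / c)  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    mu = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)

# Hypothetical per-study log risk ratios and their variances.
rr, lo, hi = random_effects_rr([0.10, 0.25, 0.18], [0.02, 0.03, 0.025])
print(round(rr, 2))  # 1.18
```

    When between-study heterogeneity (tau-squared) is zero the estimate collapses to the fixed-effect result; otherwise small studies receive relatively more weight.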

  14. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  15. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available and make it economically viable. In the identification of core elements within the process reference model, the focus is often on the end-product and not on the procedure used to identify the elements. As often proved in development of projects, there is a...

  16. A systematic review of health manpower forecasting models.

    NARCIS (Netherlands)

    Martins-Coelho, G.; Greuningen, M. van; Barros, H.; Batenburg, R.

    2011-01-01

    Context: Health manpower planning (HMP) aims at matching health manpower (HM) supply to the population’s health requirements. To achieve this, HMP needs information on future HM supply and requirement (S&R). This is estimated by several different forecasting models (FMs). In this paper, we review

  17. Systematics of quark mass matrices in the standard electroweak model

    International Nuclear Information System (INIS)

    Frampton, P.H.; Jarlskog, C.; Stockholm Univ.

    1985-01-01

    It is shown that the quark mass matrices in the standard electroweak model satisfy the empirical relation M = M' + O(λ²), where M (M') refers to the mass matrix of the charge 2/3 (-1/3) quarks normalized to the largest eigenvalue, m_t (m_b), and λ = V_us ≈ 0.22. (orig.)

  18. Accident Analysis Methods and Models — a Systematic Literature Review

    NARCIS (Netherlands)

    Wienen, Hans Christian Augustijn; Bukhsh, Faiza Allah; Vriezekolk, E.; Wieringa, Roelf J.

    2017-01-01

    As part of our co-operation with the Telecommunication Agency of the Netherlands, we want to formulate an accident analysis method and model for use in incidents in telecommunications that cause service unavailability. In order to not re-invent the wheel, we wanted to first get an overview of all

  19. USING OF BYOD MODEL FOR TESTING OF EDUCATIONAL ACHIEVEMENTS ON THE BASIS OF GOOGLE SEARCH SERVICES

    Directory of Open Access Journals (Sweden)

    Tetiana Bondarenko

    2016-04-01

    Full Text Available A technology for using learners' own mobile devices to test educational achievements, based on the BYOD model, is offered in this article. The proposed technology is based on Google cloud services. It provides comprehensive support of the testing system: creating appropriate forms, storing the results in cloud storage, processing test results, and managing the testing system through Google Calendar. A number of software products based on cloud technologies that allow using the BYOD model for testing educational achievements are described, and their strengths and weaknesses are identified. The article also describes the stages of testing students' academic achievements on the basis of Google services using the BYOD model. The proposed approaches extend the space and time of testing, make the test procedure more flexible and systematic, and add elements of a computer game to the testing procedure. The BYOD model opens up broad prospects for the implementation of ICT in all forms of the learning process, and particularly in testing educational achievements, in view of the limited computing resources in education.

  20. Primary care models for treating opioid use disorders: What actually works? A systematic review.

    Directory of Open Access Journals (Sweden)

    Pooja Lagisetty

    Full Text Available Primary care-based models for Medication-Assisted Treatment (MAT) have been shown to reduce mortality for Opioid Use Disorder (OUD) and have efficacy equivalent to MAT in specialty substance treatment facilities. The objective of this study is to systematically analyze current evidence-based primary care OUD MAT interventions and identify program structures and processes associated with improved patient outcomes, in order to guide future policy and implementation in primary care settings. We searched PubMed, EMBASE, CINAHL, and PsychInfo. We included randomized controlled or quasi-experimental trials and observational studies evaluating OUD treatment in primary care settings treating adult patient populations, and assessed structural domains using an established systems engineering framework. We included 35 interventions (10 RCTs and 25 quasi-experimental) that all tested MAT (buprenorphine or methadone) in primary care settings across 8 countries. Most interventions used joint multi-disciplinary (specialty addiction services combined with primary care) and coordinated care by physician and non-physician provider delivery models to provide MAT. Despite large variability in reported patient outcomes, processes, and tasks/tools used, similar key design factors arose among successful programs, including integrated clinical teams with support staff who were often advanced practice clinicians (nurses and pharmacists) as clinical care managers, incorporation of patient "agreements," and home inductions to make treatment more convenient for patients and providers. The findings suggest that multidisciplinary and coordinated care delivery models are an effective strategy to implement OUD treatment and increase MAT access in primary care, but research directly comparing specific structures and processes of care models is still needed.

  1. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    Directory of Open Access Journals (Sweden)

    Morris Denise

    2007-09-01

    Full Text Available Abstract Background The first step in handling health promotion (HP) in Diagnosis Related Groups (DRGs) is systematic documentation and registration of the activities in the medical records. So far, the possibility of and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after-treatment (8 codes). The objective was to evaluate, in an international study, the usefulness, applicability and sufficiency of a simple model for the systematic registration of clinical HP procedures in daily practice. Methods The multicentre project was carried out in 19 departments/hospitals in 6 countries in a clinical setup. The study consisted of three parts in accordance with the objectives. A: Individual test. 20 consecutive medical records from each participating department/hospital were coded by the (coding) specialists at the local department/hospital exclusively (n = 5,529 of 5,700 possible tests in total). B: Common test. 14 standardized medical records were coded by all the specialists from 17 departments/hospitals, who returned 3,046 of 3,570 tests. C: Specialist evaluation. The specialists from the 19 departments/hospitals evaluated whether the codes were useful, applicable and sufficient for the registration in their own department/hospital (239 of 285). Results A: In 97 to 100% of the local patient pathways the specialists were able to evaluate whether there was documentation of HP activities in the medical record to be coded. B: Inter-rater reliability on the use of the codes was 93% (57 to 100%) and 71% (31
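    Percentage agreement figures like those in part B are often complemented by a chance-corrected statistic. A sketch of Cohen's kappa for two coders; the code labels below are invented placeholders, not the model's actual HP codes:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_chance = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical code assignments by two coders for ten records.
a = ["M1", "M1", "I2", "I2", "M3", "M1", "I2", "M3", "M1", "I2"]
b = ["M1", "M1", "I2", "M3", "M3", "M1", "I2", "M3", "M1", "M1"]
print(round(cohens_kappa(a, b), 2))  # 0.7
```

    Here raw agreement is 80%, but kappa is lower because some agreement is expected by chance given each coder's marginal code frequencies.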

  2. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can…

  3. The systematic development of ROsafe: an intervention to promote STI testing among vocational school students.

    Science.gov (United States)

    Wolfers, Mireille; de Zwart, Onno; Kok, Gerjo

    2012-05-01

    This article describes the development of ROsafe, an intervention to promote sexually transmitted infection (STI) testing at vocational schools in the Netherlands. Using the planning model of intervention mapping (IM), an educational intervention was designed that consisted of two lessons, an Internet site, and sexual health services at the school sites. IM is a stepwise approach for theory- and evidence-based development and implementation of interventions. It includes six steps: needs assessment, specification of the objectives in matrices, selection of theoretical methods and practical strategies, program design, implementation planning, and evaluation. The processes and outcomes of Steps 1 to 4 of IM are presented: literature review and qualitative and quantitative research in the needs assessment, leading to the definition of the desired behavioral outcomes and objectives. The matrix of change objectives for STI-testing behavior is presented, and the translation of theory into the program is then described, using examples from the program. Finally, the planning for implementation and evaluation is discussed. The educational intervention used methods derived from social cognitive theory, the elaboration likelihood model, the persuasive communication matrix, and theories about risk communication. Strategies included short movies, discussion, a knowledge quiz, and an interactive behavioral self-test on the Internet.

  4. A Systematic Review of Interventions to Follow-Up Test Results Pending at Discharge.

    Science.gov (United States)

    Darragh, Patrick J; Bodley, T; Orchanian-Cheff, A; Shojania, K G; Kwan, J L; Cram, P

    2018-05-01

    Patients are frequently discharged from the hospital before all test results have been finalized. Thirty to 40% of tests pending at discharge (TPADs) return potentially actionable results that could necessitate a change in the patients' management, often unbeknownst to their physicians. Delayed follow-up of TPADs can lead to patient harm. We sought to synthesize the existing literature on interventions intended to improve the management of TPADs, including interventions designed to enhance documentation of TPADs, increase physician awareness when TPAD results finalize post-discharge, decrease adverse events related to missed TPADs, and increase physician satisfaction with TPAD management. We searched Medline, EMBASE, CINAHL, Cochrane Database of Systematic Reviews, Cochrane Database of Controlled Clinical Trials and Medline (January 1, 2000-November 10, 2016) for randomized controlled trials and prospective, controlled observational studies that evaluated interventions to improve follow-up of TPADs for adult patients discharged from acute care hospitals or emergency department settings. From each study we extracted characteristics of the intervention being evaluated and its impact on TPAD management. Nine studies met the criteria for inclusion. Six studies evaluated electronic discharge summary templates with a designated field for documenting TPADs, and three of these six studies reported a significant improvement in documentation of TPADs in discharge summaries in pre- and post-intervention analyses. One study reported that auditing discharge summaries and providing feedback to physicians were associated with improved TPAD documentation in discharge summaries. Two studies found that email alerts when TPADs were finalized improved physicians' awareness of the results and documentation of their follow-up actions. Of the four studies that assessed patient morbidity, two showed a positive effect; however, none specifically measured the impact of their interventions

  5. Topic Modeling in Sentiment Analysis: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Toqir Ahmad Rana

    2016-06-01

    Full Text Available With the expansion and acceptance of the World Wide Web, sentiment analysis has become a progressively popular research area in information retrieval and web data analysis. Due to the huge amount of user-generated content on blogs, forums, social media, etc., sentiment analysis has attracted researchers in both academia and industry, since it deals with the extraction of opinions and sentiments. In this paper, we have presented a review of topic modeling, especially LDA-based techniques, in sentiment analysis. We have presented a detailed analysis of diverse approaches and techniques, and compared the accuracy of different systems among them. The results of different approaches have been summarized, analyzed and presented in a sophisticated fashion. This is a focused effort to explore different topic modeling techniques in the context of sentiment analysis and to provide a comprehensive comparison among them.

  6. A Systematic Modelling Framework for Phase Transfer Catalyst Systems

    DEFF Research Database (Denmark)

    Anantpinijwatna, Amata; Sales-Cruz, Mauricio; Hyung Kim, Sun

    2016-01-01

    Phase-transfer catalyst systems contain two liquid phases, with a catalyst (PTC) that transfers between the phases, driving product formation in one phase and being regenerated in the other phase. Typically the reaction involves neutral species in an organic phase and regeneration involves ions i....... The application of the framework is made to two cases in order to highlight the performance and issues of activity coefficient models for predicting design and operation and the effects when different organic solvents are employed....

  7. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." -Dev G.

  8. Method matters: systematic effects of testing procedure on visual working memory sensitivity.

    Science.gov (United States)

    Makovski, Tal; Watson, Leah M; Koutstaal, Wilma; Jiang, Yuhong V

    2010-11-01

    Visual working memory (WM) is traditionally considered a robust form of visual representation that survives changes in object motion, observer's position, and other visual transients. This article presents data that are inconsistent with the traditional view. We show that memory sensitivity is dramatically influenced by small variations in the testing procedure, supporting the idea that representations in visual WM are susceptible to interference from testing. In the study, participants were shown an array of colors to remember. After a short retention interval, memory for one of the items was tested with either a same-different task or a 2-alternative-forced-choice (2AFC) task. Memory sensitivity was much lower in the 2AFC task than in the same-different task. This difference was found regardless of encoding similarity or of whether visual WM required a fine or coarse memory resolution. The 2AFC disadvantage was reduced when participants were informed shortly before testing which item would be probed. The 2AFC disadvantage diminished in perceptual tasks and was not found in tasks probing visual long-term memory. These results support memory models that acknowledge the labile nature of visual WM and have implications for the format of visual WM and its assessment. (c) 2010 APA, all rights reserved

  9. The Psychology Department Model Advisement Procedure: A Comprehensive, Systematic Approach to Career Development Advisement

    Science.gov (United States)

    Howell-Carter, Marya; Nieman-Gonder, Jennifer; Pellegrino, Jennifer; Catapano, Brittani; Hutzel, Kimberly

    2016-01-01

    The MAP (Model Advisement Procedure) is a comprehensive, systematic approach to developmental student advisement. The MAP was implemented to improve advisement consistency, improve student preparation for internships/senior projects, increase career exploration, reduce career uncertainty, and, ultimately, improve student satisfaction with the…

  10. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models : Stop Developing, Start Evaluating

    NARCIS (Netherlands)

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-01-01

    Background: Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). Objectives: To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Methods: Four

  11. Systematic Analysis of Quantitative Logic Model Ensembles Predicts Drug Combination Effects on Cell Signaling Networks

    Science.gov (United States)

    2016-08-27


  12. Systematic literature review and meta-analysis of diagnostic test accuracy in Alzheimer's disease and other dementia using autopsy as standard of truth.

    Science.gov (United States)

    Cure, Sandrine; Abrams, Keith; Belger, Mark; Dell'agnello, Grazzia; Happich, Michael

    2014-01-01

    Early diagnosis of Alzheimer's disease (AD) is crucial to implement the latest treatment strategies and management of AD symptoms. Diagnostic procedures play a major role in this detection process but evidence on their respective accuracy is still limited. To conduct a systematic literature review on the sensitivity and specificity of different test modalities to identify AD patients and perform meta-analyses on the test accuracy values of studies using autopsy confirmation as the standard of truth. The systematic review identified all English papers published between 1984 and 2011 on diagnostic imaging tests and cerebrospinal fluid biomarkers, including results on the newest technologies currently investigated in this area. Meta-analyses using bivariate fixed- and random-effects models and a hierarchical summary receiver operating characteristic (HSROC) random-effects model were applied. Out of the 1,189 records, 20 publications were identified that report the accuracy of diagnostic tests in distinguishing autopsy-confirmed AD patients from other dementia types and healthy controls. Looking at all tests and comparator populations together, sensitivity was calculated at 85.4% (95% confidence interval [CI]: 80.9%-90.0%) and specificity at 77.7% (95% CI: 70.2%-85.1%). The area under the HSROC curve was 0.88. Sensitivity and specificity values were higher for imaging procedures, and slightly lower for CSF biomarkers. Test-specific random-effects models could not be calculated due to the small number of studies. The review and meta-analysis point to a slight advantage of imaging procedures in correctly detecting AD patients but also highlight the limited evidence on autopsy confirmation and the heterogeneity in study designs.
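The pooled sensitivity and specificity above are obtained by meta-analysing proportions on the logit scale. As a hypothetical illustration of the idea (a simplified univariate inverse-variance pooling rather than the bivariate model actually used in the review, with made-up study counts):

```python
import math

def pool_logit(events, totals):
    """Inverse-variance fixed-effect pooling of proportions on the logit
    scale -- a simplified univariate stand-in for the bivariate model used
    in diagnostic test accuracy meta-analysis."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0          # continuity correction
        p = e / n
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / (1.0 / e + 1.0 / (n - e)))  # 1 / var(logit)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))   # back-transform to a proportion

# illustrative per-study true-positive counts among autopsy-confirmed cases
sens = pool_logit(events=[40, 55, 28], totals=[45, 62, 35])
```

The same routine applied to true-negative counts among non-diseased participants yields the pooled specificity.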

  13. Standard Model theory calculations and experimental tests

    International Nuclear Information System (INIS)

    Cacciari, M.; Hamel de Monchenault, G.

    2015-01-01

    To present knowledge, all the physics at the Large Hadron Collider (LHC) can be described in the framework of the Standard Model (SM) of particle physics. Indeed, the newly discovered Higgs boson with a mass close to 125 GeV seems to confirm the predictions of the SM. Thus, besides looking for direct manifestations of physics beyond the SM, one of the primary missions of the LHC is to perform ever more stringent tests of the SM. This requires not only improved theoretical developments to produce testable predictions and provide experiments with reliable event generators, but also sophisticated analysis techniques to overcome the formidable experimental environment of the LHC and perform precision measurements. In the first section, we describe the state of the art of the theoretical tools and event generators that are used to provide predictions for the production cross sections of the processes of interest. In section 2, inclusive cross section measurements with jets, leptons and vector bosons are presented. Examples of differential cross sections, charge asymmetries and the study of lepton pairs are proposed in section 3. Finally, in section 4, we report studies on the multiple production of gauge bosons and constraints on anomalous gauge couplings.

  14. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1990-01-01

    Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programming model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information

  15. A Systematic Approach to Modelling Change Processes in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ibrahim Motawa

    2012-11-01

    Full Text Available Modelling change processes within construction projects is essential to implement changes efficiently. Incomplete information on the project variables at the early stages of projects leads to inadequate knowledge of future states and imprecision arising from ambiguity in project parameters. This lack of knowledge is considered among the main sources of changes in construction. Change identification and evaluation, in addition to predicting its impacts on project parameters, can help in minimising the disruptive effects of changes. This paper presents a systematic approach to modelling the change process within construction projects that helps improve change identification and evaluation. The approach represents the key decisions required to implement changes. The requirements of an effective change process are presented first. The variables defined for efficient change assessment and diagnosis are then presented. Assessment of construction changes requires an analysis of the project characteristics that lead to change and also an analysis of the relationship between change causes and effects. The paper concludes that, at the early stages of a project, projects with a high likelihood of change occurrence should have a control mechanism over the project characteristics that have a high influence on the project. It also concludes that, regarding the relationship between change causes and effects, the multiple causes of change should be modelled in a way that enables evaluating the change effects more accurately. The proposed approach is a framework for addressing these conclusions and can be used for evaluating change cases depending on the information available at the early stages of construction projects.

  16. Facilitators and barriers for HIV-testing in Zambia: A systematic review of multi-level factors.

    Science.gov (United States)

    Qiao, Shan; Zhang, Yao; Li, Xiaoming; Menon, J Anitha

    2018-01-01

    It was estimated that 1.2 million people were living with HIV/AIDS in Zambia by 2015. Zambia has developed and implemented diverse programs to reduce HIV prevalence in the country. HIV testing is a critical step in HIV treatment and prevention, especially among key populations. However, no systematic review has so far traced the trend of HIV-testing studies in Zambia since the 1990s or synthesised the key factors associated with HIV-testing practices in the country. Therefore, this study conducted a systematic review, searching all English-language literature published prior to November 2016 in six electronic databases, and retrieved 32 articles that met our inclusion criteria. The results indicated that higher education was a common facilitator of HIV testing, while misconceptions about HIV testing and the fear of negative consequences were the major barriers to using testing services. Other factors, such as demographic characteristics, marital dynamics, partner relationships, and relationships with health care services, also greatly affected participants' decision making. The findings indicated that 1) individualized strategies and comprehensive services are needed for diverse key populations; 2) capacity building for healthcare providers is critical for effectively implementing the task-shifting strategy; 3) HIV testing services need to adapt to the social context of Zambia, where HIV-related stigma and discrimination are still persistent and overwhelming; and 4) family-based education and intervention should involve improving gender equity.

  17. Reliability of specific physical examination tests for the diagnosis of shoulder pathologies: a systematic review and meta-analysis.

    Science.gov (United States)

    Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian

    2017-03-01

    Shoulder pain in the general population is common, and to identify the aetiology of shoulder pain, history taking, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate the intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, the Allied and Complementary Medicine Database (AMED) and the Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used in the specific physical examination for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between the included studies, hindering proper cross-study comparisons. PROSPERO CRD42014009018. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  18. Diagnostic validity of physical examination tests for common knee disorders: An overview of systematic reviews and meta-analysis.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Roy, Jean-Sébastien; Desmeules, François

    2017-01-01

    More evidence on the diagnostic validity of physical examination tests for knee disorders is needed to reduce reliance on frequently used and costly imaging tests. To conduct a systematic review of systematic reviews (SR) and meta-analyses (MA) evaluating the diagnostic validity of physical examination tests for knee disorders. A structured literature search was conducted in five databases until January 2016. Methodological quality was assessed using the AMSTAR. Seventeen reviews were included, with a mean AMSTAR score of 5.5 ± 2.3. Based on six SR, only the Lachman test for ACL injuries is diagnostically valid when individually performed (likelihood ratio (LR+): 10.2, LR-: 0.2). Based on two SR, the Ottawa Knee Rule is a valid screening tool for knee fractures (LR-: 0.05). Based on one SR, the EULAR criteria had a post-test probability of 99% for the diagnosis of knee osteoarthritis. Based on two SR, a complete physical examination performed by a trained health provider was found to be diagnostically valid for ACL, PCL and meniscal injuries as well as for cartilage lesions. When individually performed, common physical tests are rarely able to rule in or rule out a specific knee disorder, except the Lachman for ACL injuries. There is low-quality evidence concerning the validity of combining history elements and physical tests. Copyright © 2016 Elsevier Ltd. All rights reserved.
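The likelihood ratios reported above translate into post-test probabilities through Bayes' rule in odds form: post-test odds = pre-test odds × LR. A minimal sketch, using the review's Lachman figures (LR+ 10.2, LR- 0.2) and an assumed, purely illustrative 30% pre-test probability:

```python
def post_test_probability(pre_test_prob, lr):
    """Convert a pre-test probability and a likelihood ratio into a
    post-test probability: post-odds = pre-odds * LR, then back to a probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Lachman test figures from the review (LR+ 10.2, LR- 0.2) with an
# assumed 30% pre-test probability of ACL injury:
positive = post_test_probability(0.30, 10.2)  # probability after a positive Lachman
negative = post_test_probability(0.30, 0.2)   # probability after a negative Lachman
```

With these inputs a positive Lachman raises the probability to roughly 81%, while a negative test lowers it to roughly 8%, which is what "able to rule in or rule out" means in practice.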

  19. The use of standardised short-term and working memory tests in aphasia research: a systematic review.

    Science.gov (United States)

    Murray, Laura; Salis, Christos; Martin, Nadine; Dralle, Jenny

    2018-04-01

    Impairments of short-term and working memory (STM, WM), both verbal and non-verbal, are ubiquitous in aphasia. Increasing interest in assessing STM and WM in aphasia research and clinical practice as well as a growing evidence base of STM/WM treatments for aphasia warrant an understanding of the range of standardised STM/WM measures that have been utilised in aphasia. To date, however, no previous systematic review has focused on aphasia. Accordingly, the goals of this systematic review were: (1) to identify standardised tests of STM and WM utilised in the aphasia literature, (2) to evaluate critically the psychometric strength of these tests, and (3) to appraise critically the quality of the investigations utilising these tests. Results revealed that a very limited number of standardised tests, in the verbal and non-verbal domains, had robust psychometric properties. Standardisation samples to elicit normative data were often small, and most measures exhibited poor validity and reliability properties. Studies using these tests inconsistently documented demographic and aphasia variables essential to interpreting STM/WM test outcomes. In light of these findings, recommendations are provided to foster, in the future, consistency across aphasia studies and confidence in STM/WM tests as assessment and treatment outcome measures.

  20. Genetics of borderline personality disorder: systematic review and proposal of an integrative model.

    Science.gov (United States)

    Amad, Ali; Ramoz, Nicolas; Thomas, Pierre; Jardri, Renaud; Gorwood, Philip

    2014-03-01

    Borderline personality disorder (BPD) is one of the most common mental disorders and is characterized by a pervasive pattern of emotional lability, impulsivity, interpersonal difficulties, identity disturbances, and disturbed cognition. Here, we performed a systematic review of the literature concerning the genetics of BPD, including familial and twin studies, association studies, and gene-environment interaction studies. Moreover, meta-analyses were performed when at least two case-control studies testing the same polymorphism were available. For each gene variant, a pooled odds ratio (OR) was calculated using fixed or random effects models. Familial and twin studies largely support the potential role of a genetic vulnerability at the root of BPD, with an estimated heritability of approximately 40%. Moreover, there is evidence for both gene-environment interactions and correlations. However, association studies for BPD are sparse, making it difficult to draw clear conclusions. According to our meta-analysis, no significant associations were found for the serotonin transporter gene, the tryptophan hydroxylase 1 gene, or the serotonin 1B receptor gene. We hypothesize that such a discrepancy (negative association studies but high heritability of the disorder) could be understandable through a paradigm shift, in which "plasticity" genes (rather than "vulnerability" genes) would be involved. Such a framework postulates a balance between positive and negative events, which interact with plasticity genes in the genesis of BPD. Copyright © 2014 Elsevier Ltd. All rights reserved.
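The pooled odds ratios described above are conventionally computed by inverse-variance weighting of log odds ratios, with a DerSimonian-Laird estimate of between-study variance for the random-effects case. A self-contained sketch with illustrative 2x2 counts (not data from the review):

```python
import math

def pooled_or(tables):
    """Pooled odds ratio from 2x2 tables (a, b, c, d) =
    (case exposed, case unexposed, control exposed, control unexposed),
    under a fixed-effect and a DerSimonian-Laird random-effects model."""
    logs, variances = [], []
    for a, b, c, d in tables:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5  # continuity correction
        logs.append(math.log(a * d / (b * c)))
        variances.append(1 / a + 1 / b + 1 / c + 1 / d)
    w = [1 / v for v in variances]
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # between-study heterogeneity variance tau^2 (DerSimonian-Laird)
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    c_ = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(tables) - 1)) / c_)
    wr = [1 / (v + tau2) for v in variances]
    rand = sum(wi * li for wi, li in zip(wr, logs)) / sum(wr)
    return math.exp(fixed), math.exp(rand)

# illustrative counts, not data from any of the reviewed studies
fixed_or, random_or = pooled_or([(20, 80, 15, 85), (30, 70, 28, 72), (12, 88, 10, 90)])
```

A pooled 95% CI straddling 1 for each candidate polymorphism is what drives the review's "no significant associations" conclusion.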

  1. Systematic model calculations of the hyperfine structure in light and heavy ions

    CERN Document Server

    Tomaselli, M; Nörtershäuser, W; Ewald, G; Sánchez, R; Fritzsche, S; Karshenboim, S G

    2003-01-01

    Systematic model calculations are performed for the magnetization distributions and the hyperfine structure (HFS) of light and heavy ions with masses close to A ~ 6, 208 and 235 to test the interplay of nuclear and atomic structure. A high-precision measurement of lithium isotope shifts (IS) for a suitable transition, combined with an accurate theoretical evaluation of the mass-shift contribution in the respective transition, can be used to determine the root-mean-square (rms) nuclear-charge radius of Li isotopes, particularly of the halo nucleus ^11Li. An experiment of this type is currently underway at GSI in Darmstadt and at ISOLDE at CERN. However, the field-shift contributions between the different isotopes can be evaluated using the results obtained for the charge radii, thus casting, with knowledge of the ratio of the HFS constants to the magnetic moments, new light on the IS theory. For heavy charged ions, the calculated n-body magnetization distributions reproduce the HFS of hydrogen-like ions well if QED...

  2. Understanding in vivo modelling of depression in non-human animals: a systematic review protocol

    DEFF Research Database (Denmark)

    Bannach-Brown, Alexandra; Liao, Jing; Wegener, Gregers

    2016-01-01

    The aim of this study is to systematically collect all published preclinical non-human animal literature on depression to provide an unbiased overview of existing knowledge. A systematic search will be carried out in PubMed and Embase. Studies will be included if they use non-human animal ... experimental model(s) to induce or mimic a depressive-like phenotype. Data that will be extracted include the model or method of induction; species and gender of the animals used; the behavioural, anatomical, electrophysiological, neurochemical or genetic outcome measure(s) used; risk of bias ... meta-analysis of the preclinical studies modelling depression-like behaviours and phenotypes in animals.

  3. Putting hydrological modelling practice to the test

    NARCIS (Netherlands)

    Melsen, Lieke Anna

    2017-01-01

    Six steps can be distinguished in the process of hydrological modelling: the perceptual model (deciding on the processes), the conceptual model (deciding on the equations), the procedural model (get the code to run on a computer), calibration (identify the parameters), evaluation (confronting

  4. Thermohydraulic tests in nuclear fuel model

    International Nuclear Information System (INIS)

    Ladeira, L.C.D.; Navarro, M.A.

    1984-01-01

    The main experimental work performed in the Thermohydraulics Laboratory of the NUCLEBRAS Nuclear Technology Development Center, in the field of thermofluid dynamics, is briefly described. This work includes steady-state flow tests in single-tube test sections, and the design and construction of a rod-bundle test section, which will also be used for those kinds of tests. Mention is made of the work to be performed in the near future on steady-state and transient flow tests. (Author) [pt

  5. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    Science.gov (United States)

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix evaluating model components and methodology. Conclusion When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes and test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (HCV and HIV at the same time) might prove important for decision makers. PMID:26689908

  6. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of such models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  7. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms using a video source model is suggested. The proposed method allows algorithm testing to be significantly improved over a large test set.

  8. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Seyyede Zohreh Ziatabar Ahmadi

    2015-12-01

    Full Text Available Objective: Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates the mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests, and without them these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. Method: We searched MEDLINE (PubMed interface), Web of Science, ScienceDirect, PsycINFO, and also evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. Also, we manually studied the reference lists of all finally included articles and carried out a search of their references. Inclusion criteria were as follows: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were as follows: studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description of the structure, validity or reliability of their tests. The methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). Result: In the primary search, we found 1237 articles across all databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Conclusion: There were few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The included ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, timing and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that researchers and clinicians select ToM tests according to their psychometric

  9. Storytelling to Enhance Teaching and Learning: The Systematic Design, Development, and Testing of Two Online Courses

    Science.gov (United States)

    Hirumi, Atsusi; Sivo, Stephen; Pounds, Kelly

    2012-01-01

    Storytelling may be a powerful instructional approach for engaging learners and facilitating e-learning. However, relatively little is known about how to apply story within the context of systematic instructional design processes and claims for the effectiveness of storytelling in training and education have been primarily anecdotal and…

  10. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review

    DEFF Research Database (Denmark)

    Frederiksen, Maria Eiholm; Lynge, E; Rebolj, M

    2012-01-01

    Please cite this paper as: Frederiksen M, Lynge E, Rebolj M. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review. BJOG 2011; DOI: 10.1111/j.1471-0528.2011.03130.x. Background If human papillomavirus (HPV) testing replaces cytology in primary cervical screening, the frequency of low-grade abnormal screening tests will double. Several available alternatives for the follow-up of low-grade abnormal screening tests have similar outcomes. In this situation, women's preferences have been proposed as a guide for management.... Selection criteria Studies asking women to state a preference between active follow-up and observation for the management of low-grade abnormalities on screening cytology or HPV tests. Data collection and analysis Information on study design, participants and outcomes was retrieved using a prespecified form

  11. Scaling analysis in modeling transport and reaction processes a systematic approach to model building and the art of approximation

    CERN Document Server

    Krantz, William B

    2007-01-01

    This book is unique as the first effort to expound on the subject of systematic scaling analysis. Not written for a specific discipline, the book targets any reader interested in transport phenomena and reaction processes. The book is logically divided into chapters on the use of systematic scaling analysis in fluid dynamics, heat transfer, mass transfer, and reaction processes. An integrating chapter is included that considers more complex problems involving combined transport phenomena. Each chapter includes several problems that are explained in considerable detail. These are followed by several worked examples for which the general outline for the scaling is given. Each chapter also includes many practice problems. This book is based on recognizing the value of systematic scaling analysis as a pedagogical method for teaching transport and reaction processes and as a research tool for developing and solving models and in designing experiments. Thus, the book can serve as both a textbook and a reference book.

  12. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
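For context, the traditional limacon model referred to above fits r(theta) = R + ex*cos(theta) + ey*sin(theta) to the measured profile, so that first-order eccentricity is removed before the roundness error is read off the residuals. A sketch with synthetic data (the paper's multi-systematic-error model adds probe offset, tip radius and tilt terms on top of this):

```python
import math

def limacon_roundness(radii):
    """Least-squares limacon fit r(t) = R + ex*cos(t) + ey*sin(t) for
    equally spaced angular samples; the residual spread is the roundness
    error with first-order eccentricity removed."""
    n = len(radii)
    thetas = [2 * math.pi * i / n for i in range(n)]
    R = sum(radii) / n
    # For uniform full-circle sampling the basis functions are orthogonal,
    # so the least-squares eccentricity terms reduce to scaled averages.
    ex = 2 * sum(r * math.cos(t) for r, t in zip(radii, thetas)) / n
    ey = 2 * sum(r * math.sin(t) for r, t in zip(radii, thetas)) / n
    residuals = [r - (R + ex * math.cos(t) + ey * math.sin(t))
                 for r, t in zip(radii, thetas)]
    return max(residuals) - min(residuals)

# synthetic profile in micrometres: 37 mm part with 5 um eccentricity
# and a 1 um three-lobed form error
n = 360
profile = [37000 + 5 * math.cos(2 * math.pi * i / n)
           + 1 * math.cos(3 * 2 * math.pi * i / n) for i in range(n)]
roundness = limacon_roundness(profile)
```

Here the fit absorbs the 5 um eccentricity entirely and reports only the 2 um peak-to-valley lobing, which is the separation the limacon model is designed to perform.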

  13. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.
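As a minimal illustration of the bias-estimation step the paper builds on: for purely additive, constant biases, the least-squares estimate from paired measurements and reference truths reduces to the mean residual per coordinate. The function and data below are hypothetical, covering only range and azimuth offsets, not the full refraction, clock and altitude parametrization of the paper:

```python
def estimate_biases(measured, reference):
    """Least-squares estimate of constant additive range and azimuth biases
    from paired (range, azimuth) measurements and reference truths; for
    purely additive biases this reduces to the mean residual per coordinate."""
    n = len(measured)
    range_bias = sum(m[0] - r[0] for m, r in zip(measured, reference)) / n
    azimuth_bias = sum(m[1] - r[1] for m, r in zip(measured, reference)) / n
    return range_bias, azimuth_bias

# synthetic opportunity traffic: truth plus known biases of 120 m and 2 mrad
truth = [(50000.0 + 1000.0 * i, 0.01 * i) for i in range(10)]
measured = [(r + 120.0, a + 0.002) for r, a in truth]
range_bias, azimuth_bias = estimate_biases(measured, truth)
```

The paper's contribution is to replace the constant-offset assumption with physically parametrized bias terms, which the same least-squares machinery can then fit more accurately.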

  14. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    Directory of Open Access Journals (Sweden)

    Ángel J. Jarama

    2017-09-01

    Full Text Available In this paper, a complete and rigorous mathematical model for secondary surveillance radar (SSR) systematic errors (biases) is developed. The model takes into account the physical effects that systematically affect the measurement processes. The azimuth biases are derived from the physical errors of the antenna calibration and of the angle-measurement device. The distance bias is derived from the signal delay caused by the refractive index of the atmosphere and from clock errors, while the altitude bias is computed taking into account the atmospheric conditions (pressure and temperature). It is shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy of the bias estimation.

  15. Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.

    Science.gov (United States)

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew

    2017-09-01

    Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. The articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on numbers of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved a moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggests an increased interest in simulation modelling in healthcare.

  16. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests, i.e., tests with the same test information function (TIF). A mathematical programming model that constructs tests matching a prespecified TIF is presented, together with a heuristic that assigns items to tests so that their information functions are as similar as possible.
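    The test information function idea can be illustrated with a toy heuristic. Under the 2PL model the item information at ability θ is I(θ) = a²P(θ)(1 − P(θ)); the greedy rule below (an assumption for illustration, not Adema's published algorithm) hands items one at a time to the form with the smaller accumulated information, so the two TIFs stay close at a target ability.

```python
import math

def item_info(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def split_into_parallel_tests(items, theta0=0.0):
    """Greedy heuristic: sort items by information at theta0, then always
    give the next item to the form whose accumulated information is lower,
    keeping the two test information functions approximately equal."""
    forms = ([], [])
    totals = [0.0, 0.0]
    for a, b in sorted(items, key=lambda ab: -item_info(*ab, theta0)):
        k = 0 if totals[0] <= totals[1] else 1
        forms[k].append((a, b))
        totals[k] += item_info(a, b, theta0)
    return forms, totals

# hypothetical item bank: (discrimination, difficulty) pairs
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.3), (1.0, 1.0), (0.9, -1.2), (1.1, 0.4)]
forms, totals = split_into_parallel_tests(items)
```

    A real assembly model would match the TIFs at several ability points and add content constraints, which is where the mathematical programming formulation comes in.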

  17. Optimization models for flight test scheduling

    Science.gov (United States)

    Holian, Derreck

    As threats around the world increase with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous testing and deployment is due to the contracted procurement process, intended to provide a rapid Initial Operating Capability (IOC) release of the 5th-generation fighter. For this reason, many factors go into determining what is to be tested, in what order, and at which time, owing to military requirements: a given system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in determining what testing can be achieved on an aircraft at a point in time. Furthermore, it defines the optimum allocation of test points to aircraft and determines a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented; generic, representative data are used for example and proof-of-concept purposes.
Apart from the data correlation algorithms, the optimization associated
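    As a toy illustration of the allocation problem described above (the real model and data are proprietary, so every name and rule here is invented), a greedy assignment of prioritized test points to compatible aircraft might look like:

```python
def allocate_test_points(points, aircraft):
    """Toy greedy allocation: give each flyable test point to the compatible
    aircraft with the most remaining capacity, highest-priority points first.
    Points with no available compatible aircraft are left unassigned, which
    flags their restrictions as candidates for mitigation."""
    capacity = dict(aircraft)               # tail number -> remaining capacity
    plan = {tail: [] for tail in capacity}  # tail number -> assigned points
    for name, priority, compatible in sorted(points, key=lambda p: -p[1]):
        candidates = [t for t in compatible if capacity.get(t, 0) > 0]
        if not candidates:
            continue  # restricted on every aircraft this cycle
        tail = max(candidates, key=lambda t: capacity[t])
        plan[tail].append(name)
        capacity[tail] -= 1
    return plan

# invented test points: (name, priority, compatible tail numbers)
points = [("TP-101", 3, ["AF-01", "AF-02"]),
          ("TP-102", 5, ["AF-02"]),
          ("TP-103", 4, ["AF-01"]),
          ("TP-104", 1, ["AF-01", "AF-02"])]
plan = allocate_test_points(points, [("AF-01", 2), ("AF-02", 1)])
```

    A production system would replace the greedy rule with an integer program over sorties, restrictions, and instrumentation constraints, but the input/output shape is the same: test points in, a per-aircraft flight plan and a list of blocked points out.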

  18. Test Driven Development of Scientific Models

    Science.gov (United States)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software as well as tools and implementation approaches that should address those challenges.
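    A minimal red-green example of the TDD cycle described above, using Python's built-in unittest framework. The function under test is a hypothetical small scientific routine (the Magnus approximation for saturation vapor pressure); in practice the tests would be written first and the implementation grown to pass them.

```python
import math
import unittest

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa),
    the kind of small scientific routine one would develop test-first."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

class TestSaturationVaporPressure(unittest.TestCase):
    def test_freezing_point(self):
        # written before the implementation: pin the well-known value at 0 degC
        self.assertAlmostEqual(saturation_vapor_pressure(0.0), 6.112, places=3)

    def test_monotonic_in_temperature(self):
        # a property test: pressure must increase with temperature
        self.assertLess(saturation_vapor_pressure(10.0),
                        saturation_vapor_pressure(20.0))

if __name__ == "__main__":
    unittest.main(exit=False, argv=["tdd-demo"])
```

    Frameworks such as pFUnit bring the same red-green workflow to Fortran, including parallel codes.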

  19. Clinical information modeling processes for semantic interoperability of electronic health records: systematic review and inductive analysis.

    Science.gov (United States)

    Moreno-Conde, Alberto; Moner, David; Cruz, Wellington Dimas da; Santos, Marcelo R; Maldonado, José Alberto; Robles, Montserrat; Kalra, Dipak

    2015-07-01

    This systematic review aims to identify and compare the existing processes and methodologies that have been published in the literature for defining clinical information models (CIMs) that support the semantic interoperability of electronic health record (EHR) systems. Following the preferred reporting items for systematic reviews and meta-analyses systematic review methodology, the authors reviewed published papers between 2000 and 2013 that covered the semantic interoperability of EHRs, found by searching the PubMed, IEEE Xplore, and ScienceDirect databases. Additionally, after selection of a final group of articles, an inductive content analysis was done to summarize the steps and methodologies followed in order to build CIMs described in those articles. Three hundred and seventy-eight articles were screened and thirty-six were selected for full review. The articles selected for full review were analyzed to extract relevant information for the analysis and characterized according to the steps the authors had followed for clinical information modeling. Most of the reviewed papers lack a detailed description of the modeling methodologies used to create CIMs. A representative example is the lack of description related to the definition of terminology bindings and the publication of the generated models. However, this systematic review confirms that most clinical information modeling activities follow very similar steps for the definition of CIMs. Having a robust and shared methodology could improve their correctness, reliability, and quality. Independently of implementation technologies and standards, it is possible to find common patterns in methods for developing CIMs, suggesting the viability of defining a unified good practice methodology to be used by any clinical information modeler.

  20. Modelling of the spallation reaction: analysis and testing of nuclear models

    International Nuclear Information System (INIS)

    Toccoli, C.

    2000-01-01

    The spallation reaction is considered as a two-step process. First, a very quick stage (10^-22 to 10^-29 s) corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second, a slower stage (10^-18 to 10^-19 s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, 3He, 4He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; the simulation is based on the two-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with calculated results is presented, and a preliminary estimate of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  1. Test Driven Development of Scientific Models

    Science.gov (United States)

    Clune, Thomas L.

    2012-01-01

    Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.

  2. Systematic Multi‐Scale Model Development Strategy for the Fragrance Spraying Process and Transport

    DEFF Research Database (Denmark)

    Heitzig, M.; Rong, Y.; Gregson, C.

    2012-01-01

    The fast and efficient development and application of reliable models with appropriate degree of detail to predict the behavior of fragrance aerosols are challenging problems of high interest to the related industries. A generic modeling template for the systematic derivation of specific fragrance aerosol models is proposed. The main benefits of the fragrance spraying template are the speed‐up of the model development/derivation process, the increase in model quality, and the provision of structured domain knowledge where needed. The fragrance spraying template is integrated in a generic computer‐aided modeling framework, which is structured based on workflows for different general modeling tasks. The benefits of the fragrance spraying template are highlighted by a case study related to the derivation of a fragrance aerosol model that is able to reflect measured dynamic droplet size distribution profiles.

  3. Syndemics of psychosocial problems and HIV risk: A systematic review of empirical tests of the disease interaction concept.

    Science.gov (United States)

    Tsai, Alexander C; Burns, Bridget F O

    2015-08-01

    In the theory of syndemics, diseases co-occur in particular temporal or geographical contexts due to harmful social conditions (disease concentration) and interact at the level of populations and individuals, with mutually enhancing deleterious consequences for health (disease interaction). This theory has widespread adherents in the field, but the extent to which there is empirical support for the concept of disease interaction remains unclear. In January 2015 we systematically searched 7 bibliographic databases and tracked citations to highly cited publications associated with the theory of syndemics. Of the 783 records, we ultimately included 34 published journal articles, 5 dissertations, and 1 conference abstract. Most studies were based on a cross-sectional design (32 [80%]), were conducted in the U.S. (32 [80%]), and focused on men who have sex with men (21 [53%]). The most frequently studied psychosocial problems were related to mental health (33 [83%]), substance abuse (36 [90%]), and violence (27 [68%]); while the most frequently studied outcome variables were HIV transmission risk behaviors (29 [73%]) or HIV infection (9 [23%]). To test the disease interaction concept, 11 (28%) studies used some variation of a product term, with less than half of these (5/11 [45%]) providing sufficient information to interpret interaction both on an additive and on a multiplicative scale. The most frequently used specification (31 [78%]) to test the disease interaction concept was the sum score corresponding to the total count of psychosocial problems. Although the count variable approach does not test hypotheses about interactions between psychosocial problems, these studies were much more likely than others (14/31 [45%] vs. 0/9 [0%]; χ2 = 6.25, P = 0.01) to incorporate language about "synergy" or "interaction" that was inconsistent with the statistical models used. Therefore, more evidence is needed to assess the extent to which diseases interact, either at the
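    The distinction drawn above between a sum score and a genuine interaction test can be made concrete with a small worked example on hypothetical prevalences (all numbers invented for illustration). On the additive scale, RERI = RR11 − RR10 − RR01 + 1 measures departure from additivity; on the multiplicative scale, the ratio of risk ratios measures departure from multiplicativity.

```python
def reri(r00, r10, r01, r11):
    """Relative excess risk due to interaction (additive scale), from risks
    in the four exposure strata: neither problem, A only, B only, both.
    RERI > 0 indicates super-additive (synergistic) interaction."""
    rr10, rr01, rr11 = r10 / r00, r01 / r00, r11 / r00
    return rr11 - rr10 - rr01 + 1.0

def multiplicative_interaction(r00, r10, r01, r11):
    """Ratio of risk ratios; 1.0 means no multiplicative interaction."""
    return (r11 / r00) / ((r10 / r00) * (r01 / r00))

# hypothetical outcome risks by two co-occurring psychosocial problems
r00, r10, r01, r11 = 0.05, 0.10, 0.10, 0.30
print(reri(r00, r10, r01, r11))                        # → 3.0 (super-additive)
print(multiplicative_interaction(r00, r10, r01, r11))  # → 1.5 (super-multiplicative)
```

    A sum-score (count) model, by contrast, constrains each additional problem to shift risk by the same amount and so cannot detect either form of interaction, which is the mismatch the review highlights.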

  4. Testing the generalized partial credit model

    OpenAIRE

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a generalization of the PCM (GPCM), a further generalization of the one-parameter logistic model, is discussed. The model is defined and the conditional maximum likelihood procedure for the method is describe...

  5. Testing the compounding structure of the CP-INARCH model

    OpenAIRE

    Weiß, Christian H.; Gonçalves, Esmeralda; Lopes, Nazaré Mendes

    2017-01-01

    A statistical test to distinguish between a Poisson INARCH model and a Compound Poisson INARCH model is proposed, based on the form of the probability generating function of the compounding distribution of the conditional law of the model. For first-order autoregression, the normality of the test statistics’ asymptotic distribution is established, either in the case where the model parameters are specified, or when such parameters are consistently estimated. As the test statistics’ law involv...

  6. A systematic fault tree analysis based on multi-level flow modeling

    International Nuclear Information System (INIS)

    Gofuku, Akio; Ohara, Ai

    2010-01-01

    The fault tree analysis (FTA) is widely applied for the safety evaluation of large-scale, mission-critical systems. However, because the potential of FTA strongly depends on the skill of the analyst, problems remain in (1) education and training, (2) unreliable quality, (3) the necessity of expert knowledge, and (4) updating FTA results after reconstruction of the target system. To overcome these problems, many techniques to systematize FTA activities by applying computer technologies have been proposed. However, these techniques use only structural information about the target system and do not use functional information, which is one of the important properties of an artifact. The principle of FTA is to comprehensively trace cause-effect relations from a top undesirable effect back to anomalous causes. This tracing is similar to the causality estimation technique that the authors previously proposed to find plausible counter-actions to prevent or mitigate undesirable plant behavior, based on models built with the functional modeling technique Multilevel Flow Modeling (MFM). The authors have extended this systematic technique to construct fault trees (FTs). This paper presents an algorithm for the systematic construction of FTs based on MFM models and demonstrates the applicability of the extended technique through the FT construction for a cooling plant of nitric acid. (author)
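    The backward tracing of cause-effect relations described above can be sketched as a graph traversal. The toy influence graph below and the omission of logic gates and conditions are simplifications for illustration; this is not the authors' MFM-based algorithm.

```python
# hypothetical influence graph for a cooling loop: edges point cause -> effect
influences = {
    "power_loss": ["pump_failure"],
    "pump_failure": ["low_flow"],
    "valve_stuck": ["low_flow"],
    "low_flow": ["high_temperature"],
}

def build_fault_tree(top_event, influences):
    """Trace cause-effect relations backwards from a top event, the core idea
    behind deriving a fault tree from a functional (MFM-like) model.
    Returns {effect: [direct causes]}; AND/OR gates are omitted here."""
    reverse = {}
    for cause, effects in influences.items():
        for e in effects:
            reverse.setdefault(e, []).append(cause)
    tree, frontier, seen = {}, [top_event], set()
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        causes = reverse.get(node, [])
        tree[node] = causes
        frontier.extend(causes)
    return tree

tree = build_fault_tree("high_temperature", influences)
```

    An MFM model adds the gate logic this sketch lacks: means-end and part-whole relations between functions determine whether co-occurring causes combine through AND or OR gates.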

  7. A Systematic Review of Behavioral Interventions to Reduce Condomless Sex and Increase HIV Testing for Latino MSM.

    Science.gov (United States)

    Pérez, Ashley; Santamaria, E Karina; Operario, Don

    2017-12-15

    Latino men who have sex with men (MSM) in the United States are disproportionately affected by HIV, and there have been calls to improve availability of culturally sensitive HIV prevention programs for this population. This article provides a systematic review of intervention programs to reduce condomless sex and/or increase HIV testing among Latino MSM. We searched four electronic databases using a systematic review protocol, screened 1777 unique records, and identified ten interventions analyzing data from 2871 Latino MSM. Four studies reported reductions in condomless anal intercourse, and one reported reductions in number of sexual partners. All studies incorporated surface structure cultural features such as bilingual study recruitment, but the incorporation of deep structure cultural features, such as machismo and sexual silence, was lacking. There is a need for rigorously designed interventions that incorporate deep structure cultural features in order to reduce HIV among Latino MSM.

  8. Life course socio-economic position and quality of life in adulthood: a systematic review of life course models

    Science.gov (United States)

    2012-01-01

    Background A relationship between current socio-economic position and subjective quality of life has been demonstrated, using wellbeing, life and needs satisfaction approaches. Less is known regarding the influence of different life course socio-economic trajectories on later quality of life. Several conceptual models have been proposed to help explain potential life course effects on health, including accumulation, latent, pathway and social mobility models. This systematic review aimed to assess whether evidence supported an overall relationship between life course socio-economic position and quality of life during adulthood and if so, whether there was support for one or more life course models. Methods A review protocol was developed detailing explicit inclusion and exclusion criteria, search terms, data extraction items and quality appraisal procedures. Literature searches were performed in 12 electronic databases during January 2012 and the references and citations of included articles were checked for additional relevant articles. Narrative synthesis was used to analyze extracted data and studies were categorized based on the life course model analyzed. Results Twelve studies met the eligibility criteria and used data from 10 datasets and five countries. Study quality varied and heterogeneity between studies was high. Seven studies assessed social mobility models, five assessed the latent model, two assessed the pathway model and three tested the accumulation model. Evidence indicated an overall relationship, but mixed results were found for each life course model. Some evidence was found to support the latent model among women, but not men. Social mobility models were supported in some studies, but overall evidence suggested little to no effect. Few studies addressed accumulation and pathway effects and study heterogeneity limited synthesis. Conclusions To improve potential for synthesis in this area, future research should aim to increase study

  9. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  10. Testing the generalized partial credit model

    NARCIS (Netherlands)

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a

  11. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered as a two-step process. First, a very quick stage (10{sup -22}, 10{sup -29} s) corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second, a slower stage (10{sup -18}, 10{sup -19} s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, {sup 3}He, {sup 4}He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; the simulation is based on the two-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with calculated results is presented, and a preliminary estimate of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  12. A test of inflated zeros for Poisson regression models.

    Science.gov (United States)

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
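    For intuition, a widely used score-type statistic for excess zeros (van den Broek, 1995) compares the observed number of zeros with the number expected under the fitted Poisson model. It is not the test proposed in this paper, but it illustrates testing for inflated zeros without ever fitting a ZIP model.

```python
import math

def zero_inflation_score_test(counts):
    """Score test for excess zeros under an intercept-only Poisson model
    (van den Broek, 1995). Returns a statistic that is approximately
    chi-square with 1 df under the null; large values suggest zero inflation.
    This is a generic statistic, not the specific test proposed by He et al."""
    n = len(counts)
    lam = sum(counts) / n                  # Poisson MLE of the mean
    p0 = math.exp(-lam)                    # model probability of observing a zero
    n0 = sum(1 for y in counts if y == 0)  # observed number of zeros
    num = (n0 - n * p0) ** 2
    den = n * p0 * (1 - p0) - n * lam * p0 ** 2
    return num / den

# data with many more zeros than a Poisson fit would predict
data = [0] * 60 + [1] * 15 + [2] * 15 + [3] * 10
stat = zero_inflation_score_test(data)
print(round(stat, 2))  # well above the chi-square(1) 5% critical value of 3.84
```

    With covariates, p0 becomes observation-specific and the statistic generalizes accordingly; the comparison in the paper is against the Vuong test, which requires fitting the ZIP alternative.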

  13. FROM ATOMISTIC TO SYSTEMATIC COARSE-GRAINED MODELS FOR MOLECULAR SYSTEMS

    KAUST Repository

    Harmandaris, Vagelis

    2017-10-03

    The development of systematic (rigorous) coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of methods for obtaining optimal parametrized coarse-grained models, starting from detailed atomistic representation for high dimensional molecular systems. Different methods are described based on (a) structural properties (inverse Boltzmann approaches), (b) forces (force matching), and (c) path-space information (relative entropy). Next, we present a detailed investigation concerning the application of these methods in systems under equilibrium and non-equilibrium conditions. Finally, we present results from the application of these methods to model molecular systems.

  14. Tests for the Assessment of Sport-Specific Performance in Olympic Combat Sports: A Systematic Review With Practical Recommendations.

    Science.gov (United States)

    Chaabene, Helmi; Negra, Yassine; Bouguezzi, Raja; Capranica, Laura; Franchini, Emerson; Prieske, Olaf; Hbacha, Hamdi; Granacher, Urs

    2018-01-01

    The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and to critically appraise the methodological quality, validation data, and feasibility of the sport-specific performance assessment in Olympic combat sports like amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google-Scholar, and Science-Direct up to October 2017. Studies in combat sports were included that reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) used small sample sizes; reliability of the sport-specific tests varied widely (intraclass correlation coefficient [ICC] = 0.43-1.00). Content validity was addressed in all included studies, criterion validity (only the concurrent aspect of it) in approximately half of the studies, with correlation coefficients ranging from r = -0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport-specific test. In 28% of the included studies, insufficient information or a complete lack of information was provided in the respective field of the test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.

  15. Decoding β-decay systematics: A global statistical model for β- half-lives

    International Nuclear Information System (INIS)

    Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.

    2009-01-01

    Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β- mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.

  16. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...

  17. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH...... models. An application to exchange rate returns is included....

  18. [The Offer of Medical-Diagnostic Self-Tests on German Language Websites: Results of a Systematic Internet Search].

    Science.gov (United States)

    Kuecuekbalaban, P; Schmidt, S; Muehlan, H

    2018-03-01

    The aim of the current study was to provide an overview of medical-diagnostic self-tests that can be purchased without a medical prescription on German-language websites. From September 2014 to March 2015, a systematic internet search was conducted with the following search terms: self-test, self-diagnosis, home test, home diagnosis, quick test, rapid test. 513 different self-tests for the diagnosis of 52 different diseases or health risks were identified, including chronic diseases (e.g. diabetes, chronic diseases of the kidneys, liver, and lungs), sexually transmitted diseases (e.g. HIV, chlamydia, gonorrhea), infectious diseases (e.g. tuberculosis, malaria, Helicobacter pylori), allergies (e.g. house dust, cats, histamine) and cancer, as well as tests for the detection of 12 different psychotropic substances. These were sold by 90 companies in Germany and by other foreign companies. The number of medical-diagnostic self-tests that can be bought without a medical prescription on the Internet has increased enormously in the last 10 years. Further studies are needed to identify the determinants of the use of self-tests, as well as the impact of their application on the experience and behavior of users. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Measurement of physical performance by field tests in programs of cardiac rehabilitation: a systematic review and meta-analysis.

    Science.gov (United States)

    Travensolo, Cristiane; Goessler, Karla; Poton, Roberto; Pinto, Roberta Ramos; Polito, Marcos Doederlein

    2018-04-13

    The literature concerning the effects of cardiac rehabilitation (CR) on field test results is inconsistent. To perform a systematic review with meta-analysis of field test results after programs of CR. Studies published in the PubMed and Web of Science databases until May 2016 were analyzed. The standardized difference in means corrected for bias (Hedges' g) was used as the effect size (g) to measure the amount of change in field test performance after the CR period. Potential differences between subgroups were analyzed by Q-test based on ANOVA. Fifteen studies published between 1996 and 2016 were included in the review, comprising 932 patients with ages ranging from 54.4 to 75.3 years. Fourteen studies used the six-minute walk test to evaluate exercise capacity and one study used the Shuttle Walk Test. The random-effects Hedges' g was 0.617 (P<0.001), representing an improvement of about 20% in field test performance after CR. The meta-regression showed a significant association (P=0.01) with aerobic exercise duration, i.e., for each 1-min increase in aerobic exercise duration, there is a 0.02 increase in effect size for performance in the field test. Field tests can detect physical change after CR, and a longer duration of aerobic exercise during CR was associated with a better result. Copyright © 2018 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
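    The bias-corrected standardized mean difference used as the effect size here can be computed directly. The walk-distance numbers below are purely illustrative, not values from the review:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with small-sample bias correction."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d
    J = 1 - 3 / (4 * (n1 + n2) - 9)           # Hedges' correction factor
    return J * d

# Hypothetical post- vs pre-rehabilitation 6-minute-walk distances (m):
g = hedges_g(420, 80, 30, 380, 80, 30)
print(round(g, 3))   # → 0.494
```

    In a meta-analysis, one such g per study is then pooled under a random-effects model, as the review does.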

  20. Evaluation models and criteria of the quality of hospital websites: a systematic review study

    OpenAIRE

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-01-01

    Introduction Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. Methods This research was a systematic review study. The international databases such as Science Direct, Google Scholar, PubMed, Proquest, Ovid, Elsevier, Springer, and EBSCO together with regiona...

  1. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  2. Implementing learning organization components in Ardabil Regional Water Company based on Marquardt systematic model

    OpenAIRE

    Shahram Mirzaie Daryani; Azadeh Zirak

    2015-01-01

    The main purpose of this study was to survey the implementation of learning organization characteristics based on the Marquardt systematic model in Ardabil Regional Water Company. Two hundred and four staff (164 employees and 40 authorities) participated in the study. For data collection, the Marquardt questionnaire, whose validity and reliability had been confirmed, was used. The results of the data analysis showed that learning organization characteristics were used at more than the average level in som...

  3. Systematic review and proposal of a field-based physical fitness-test battery in preschool children: the PREFIT battery.

    Science.gov (United States)

    Ortega, Francisco B; Cadenas-Sánchez, Cristina; Sánchez-Delgado, Guillermo; Mora-González, José; Martínez-Téllez, Borja; Artero, Enrique G; Castro-Piñero, Jose; Labayen, Idoia; Chillón, Palma; Löf, Marie; Ruiz, Jonatan R

    2015-04-01

    Physical fitness is a powerful health marker in childhood and adolescence, and it is reasonable to think that it might be just as important in younger children, i.e. preschoolers. At the moment, researchers, clinicians and sport practitioners do not have enough information about which fitness tests are most reliable, valid and informative from the health point of view for implementation in preschool children. Our aim was to systematically review the studies conducted in preschool children using field-based fitness tests, and examine their (1) reliability, (2) validity, and (3) relationship with health outcomes. Our ultimate goal was to propose a field-based physical fitness-test battery to be used in preschool children. PubMed and Web of Science. Studies conducted in healthy preschool children that included field-based fitness tests. When using PubMed, we included Medical Subject Heading (MeSH) terms to enhance the power of the search. A set of fitness-related terms was combined with 'child, preschool' [MeSH]. The same strategy and terms were used for Web of Science (except for the MeSH option). Since no previous reviews with a similar aim were identified, we searched for all articles published up to 1 April 2014 (no starting date). A total of 2,109 articles were identified, of which 22 were finally selected for this review. Most studies focused on the reliability of the fitness tests (n = 21, 96%), while very few focused on validity (none on criterion-related validity; 4 (18%) on convergent validity) or on the relationship with health outcomes (no longitudinal and 1 (5%) cross-sectional study). Motor fitness, particularly balance, was the most studied fitness component, while cardiorespiratory fitness was the least studied. After analyzing the information retrieved in the current systematic review about fitness testing in preschool children, we propose the PREFIT battery, field-based FITness testing in PREschool children. The PREFIT battery is composed of the following

  4. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    Science.gov (United States)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
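    The edge-coverage criterion mentioned can be sketched in miniature: for each transition of a state machine, emit one test sequence consisting of a shortest event path to the transition's source state followed by the transition itself. The toy GUI model below is hypothetical, and none of the tool's specifics (Uppaal, UML input, script output) are reproduced:

```python
from collections import deque

# Toy GUI state machine: state -> [(event, next_state), ...]
edges = {
    "Main":     [("open_menu", "Menu"), ("start", "Running")],
    "Menu":     [("back", "Main"), ("settings", "Settings")],
    "Settings": [("back", "Menu")],
    "Running":  [("stop", "Main")],
}

def shortest_path(start, goal):
    """Shortest event sequence from start to goal (breadth-first search)."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for event, nxt in edges.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))
    return None

def edge_coverage_suite(initial="Main"):
    """One test case per transition: prefix path + the covered event."""
    suite = []
    for src, outs in edges.items():
        for event, _dst in outs:
            suite.append(shortest_path(initial, src) + [event])
    return suite

for case in edge_coverage_suite():
    print(" -> ".join(case))
```

    A real tool would additionally merge overlapping cases and translate each event sequence into the target scripting language.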

  5. Screening to prevent spontaneous preterm birth: systematic reviews of accuracy and effectiveness literature with economic modelling.

    Science.gov (United States)

    Honest, H; Forbes, C A; Durée, K H; Norman, G; Duffy, S B; Tsourapas, A; Roberts, T E; Barton, P M; Jowett, S M; Hyde, C J; Khan, K S

    2009-09-01

    To identify combinations of tests and treatments to predict and prevent spontaneous preterm birth. Searches were run on the following databases up to September 2005 inclusive: MEDLINE, EMBASE, DARE, the Cochrane Library (CENTRAL and the Cochrane Pregnancy and Childbirth Group trials register) and MEDION. We also contacted experts, including the Cochrane Pregnancy and Childbirth Group, and checked the reference lists of review articles and papers that were eligible for inclusion. Two series of systematic reviews were performed: (1) accuracy of tests for the prediction of spontaneous preterm birth in asymptomatic women in early pregnancy and in women symptomatic with threatened preterm labour in later pregnancy; (2) effectiveness of interventions with the potential to reduce cases of spontaneous preterm birth in asymptomatic women in early pregnancy and to reduce spontaneous preterm birth or improve neonatal outcome in women with a viable pregnancy symptomatic of threatened preterm labour. For the health economic evaluation, a model-based analysis incorporated the combined effect of tests and treatments and their cost-effectiveness. Of the 22 tests reviewed for accuracy, the quality of the studies and the accuracy of the tests were generally poor. Only a few tests had LR+ > 5. In asymptomatic women these were ultrasonographic cervical length measurement and cervicovaginal prolactin and fetal fibronectin screening for predicting spontaneous preterm birth before 34 weeks. In symptomatic women, informative tests included absence of fetal breathing movements, cervical length and funnelling, amniotic fluid interleukin-6 (IL-6) and serum CRP for predicting birth within 2-7 days of testing, and matrix metalloprotease-9, amniotic fluid IL-6, cervicovaginal fetal fibronectin and cervicovaginal human chorionic gonadotrophin (hCG) for predicting birth before 34 or 37 weeks. Non-steroidal anti-inflammatory agents were the most effective tocolytic agents for reducing
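    The LR+ > 5 threshold used to grade the tests follows directly from sensitivity and specificity. The figures below are illustrative only, not values from the review:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Illustrative numbers: a test with 80% sensitivity, 90% specificity.
lr_pos, lr_neg = likelihood_ratios(0.80, 0.90)
print(round(lr_pos, 1), round(lr_neg, 2))   # → 8.0 0.22
```

    An LR+ above 5 substantially raises the post-test odds of preterm birth after a positive result; an LR− near 0.2 or below substantially lowers them after a negative one.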

  6. A Model for Random Student Drug Testing

    Science.gov (United States)

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  7. A systematic approach for development of a PWR cladding corrosion model

    International Nuclear Information System (INIS)

    Quecedo, M.; Serna, J.J.; Weiner, R.A.; Kersting, P.J.

    2001-01-01

    A new model for the in-reactor corrosion of Improved (low-tin) Zircaloy-4 cladding irradiated in commercial pressurized water reactors (PWRs) is described. The model is based on an extensive database of PWR fuel cladding corrosion data from fuel irradiated in commercial reactors, with a range of fuel duty and coolant chemistry control strategies which bracket current PWR fuel management practices. The fuel thermal duty with these current fuel management practices is characterized by a significant amount of sub-cooled nucleate boiling (SNB) during the fuel's residence in-core, and the cladding corrosion model is very sensitive to the coolant heat transfer models used to calculate the coolant temperature at the oxide surface. The systematic approach to developing the new corrosion model therefore began with a review and evaluation of several alternative models for the forced convection and SNB coolant heat transfer. The heat transfer literature is not sufficient to determine which of these heat transfer models is most appropriate for PWR fuel rod operating conditions, and the selection of the coolant heat transfer model used in the new cladding corrosion model has been coupled with a statistical analysis of the in-reactor corrosion enhancement factors and their impact on obtaining the best fit to the cladding corrosion data. The in-reactor corrosion enhancement factors considered in this statistical analysis are based on a review of the current literature for PWR cladding corrosion phenomenology and models. Fuel operating condition factors which this literature review indicated could have a significant effect on the cladding corrosion performance were also evaluated in detail in developing the corrosion model. An iterative least squares fitting procedure was used to obtain the model coefficients and select the coolant heat transfer models and in-reactor corrosion enhancement factors. This statistical procedure was completed with an exhaustive analysis of the model

  8. e-Government Maturity Model Based on Systematic Review and Meta-Ethnography Approach

    Directory of Open Access Journals (Sweden)

    Darmawan Napitupulu

    2016-11-01

    Full Text Available Maturity models based on e-Government portals have been developed by a number of researchers, both individually and institutionally, but remain scattered across various journal and conference articles and differ from one another in both stages and features. The aim of this research is to conduct a study integrating a number of existing maturity models in order to build a generic maturity model for e-Government portals. The method used in this study is Systematic Review with a meta-ethnography qualitative approach. Meta-ethnography, which is part of the Systematic Review method, is a technique for performing data integration to obtain theories and concepts with a new, deeper and more thorough level of understanding. The result obtained is a maturity model for e-Government portals that consists of 7 (seven) stages, namely web presence, interaction, transaction, vertical integration, horizontal integration, full integration, and open participation. These seven stages are synthesized from 111 key concepts related to 25 studies of e-Government portal maturity models. The resulting maturity model is more comprehensive and generic because it integrates the models (best practices) that exist today.

  9. Improving the Diagnosis of Legionella Pneumonia within a Healthcare System through a Systematic Consultation and Testing Program.

    Science.gov (United States)

    Decker, Brooke K; Harris, Patricia L; Muder, Robert R; Hong, Jae H; Singh, Nina; Sonel, Ali F; Clancy, Cornelius J

    2016-08-01

    Legionella testing is not recommended for all patients with pneumonia, but rather for particular patient subgroups. As a result, the overall incidence of Legionella pneumonia may be underestimated. To determine the incidence of Legionella pneumonia in a veteran population in an endemic area after introduction of a systematic infectious diseases consultation and testing program. In response to a 2011-2012 outbreak, the VA Pittsburgh Healthcare System mandated infectious diseases consultations and testing for Legionella by urine antigen and sputum culture in all patients with pneumonia. Between January 2013 and December 2015, 1,579 cases of pneumonia were identified. The incidence of pneumonia was 788/100,000 veterans per year, including 352/100,000 veterans per year and 436/100,000 veterans per year with community-associated pneumonia (CAP) and health care-associated pneumonia, respectively. Ninety-eight percent of patients with suspected pneumonia were tested for Legionella by at least one method. Legionella accounted for 1% of pneumonia cases (n = 16), including 1.7% (12/706) and 0.6% (4/873) of CAP and health care-associated pneumonia, respectively. The yearly incidences of Legionella pneumonia and Legionella CAP were 7.99 and 5.99/100,000 veterans, respectively. The sensitivities of urine antigen and sputum culture were 81% and 60%, respectively; the specificity of urine antigen was >99.97%. Urine antigen testing and Legionella cultures increased by 65% and 330%, respectively, after introduction of our program. Systematic testing of veterans in an endemic area revealed a higher incidence of Legionella pneumonia and CAP than previously reported. Widespread urine antigen testing was not limited by false positivity.

  10. Testing Pearl Model In Three European Sites

    Science.gov (United States)

    Bouraoui, F.; Bidoglio, G.

    The Plant Protection Product Directive (91/414/EEC) stresses the need for validated models to calculate predicted environmental concentrations. The use of models has become an unavoidable step before pesticide registration. In this context, the European Commission, and in particular DGVI, set up a FOrum for the Co-ordination of pesticide fate models and their USe (FOCUS). In a complementary effort, DG Research supported the APECOP project, one of whose objectives is the validation and improvement of existing pesticide fate models. The main topic of the research presented here is the validation of the PEARL model at different sites in Europe. The PEARL model, currently used in the Dutch pesticide registration procedure, was validated at three well-instrumented sites: Vredepeel (the Netherlands), Brimstone (UK), and Lanna (Sweden). A step-wise procedure was used for the validation of the PEARL model. First the water transport module was calibrated, and then the solute transport module, using tracer measurements while keeping the water transport parameters unchanged. The Vredepeel site is characterised by a sandy soil. Fourteen months of measurements were used for the calibration. Two pesticides were applied on the site: bentazone and ethoprophos. PEARL predictions were very satisfactory for both soil moisture content and pesticide concentration in the soil profile. The Brimstone site is characterised by a cracking clay soil. The calibration was conducted on a 7-year time series of measurements. The validation consisted of comparing predictions and measurements of soil moisture at different soil depths, and of comparing the predicted and measured concentration of isoproturon in the drainage water. The results, although in good agreement with the measurements, highlighted the limitations of the model when preferential flow becomes a dominant process. PEARL did not reproduce the soil moisture profile well during summer months, and also under-predicted the arrival of
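    Validations like this are typically summarized with a goodness-of-fit score comparing the simulated and observed series; a common choice in hydrological and pesticide-fate modeling is the Nash-Sutcliffe efficiency (the abstract does not state which metric the study used). A minimal sketch with made-up soil-moisture values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Model-efficiency score: 1 is a perfect fit; <= 0 means the model
    is no better than predicting the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Hypothetical soil-moisture series (volumetric), not data from the study:
obs = [0.31, 0.29, 0.26, 0.22, 0.24, 0.28]
sim = [0.30, 0.28, 0.27, 0.24, 0.23, 0.27]
print(round(nash_sutcliffe(obs, sim), 3))   # → 0.837
```

    Scores computed per depth and per season would make the reported summer under-prediction visible as a localized drop in efficiency.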

  11. [The effectiveness of continuing care models in patients with chronic diseases: a systematic review].

    Science.gov (United States)

    Chen, Hsiao-Mei; Han, Tung-Chen; Chen, Ching-Min

    2014-04-01

    Population aging has caused significant rises in the prevalence of chronic diseases and the utilization of healthcare services in Taiwan. The current healthcare delivery system is fragmented. Integrating medical services may increase the quality of healthcare, enhance patient and patient family satisfaction with healthcare services, and better contain healthcare costs. This article introduces two continuing care models: discharge planning and case management. Further, the effectiveness and essential components of these two models are analyzed using a systematic review method. Articles included in this systematic review were all original articles on discharge-planning or case-management interventions published between February 1999 and March 2013 in any of 6 electronic databases (Medline, PubMed, Cinahl Plus with full Text, ProQuest, Cochrane Library, CEPS and Center for Chinese Studies electronic databases). Of the 70 articles retrieved, only 7 were randomized controlled trial studies. Three types of continuity-of-care models were identified: discharge planning, case management, and a hybrid of these two. All three models used logical and systematic processes to conduct assessment, planning, implementation, coordination, follow-up, and evaluation activities. Both the discharge planning model and the case management model were positively associated with improved self-care knowledge, reduced length of stay, decreased medical costs, and better quality of life. This study cross-referenced all reviewed articles in terms of target clients, content, intervention schedules, measurements, and outcome indicators. Study results may be referenced in future implementations of continuity-care models and may provide a reference for future research.

  12. Correcting systematic inflation in genetic association tests that consider interaction effects: application to a genome-wide association study of posttraumatic stress disorder.

    Science.gov (United States)

    Almli, Lynn M; Duncan, Richard; Feng, Hao; Ghosh, Debashis; Binder, Elisabeth B; Bradley, Bekh; Ressler, Kerry J; Conneely, Karen N; Epstein, Michael P

    2014-12-01

    Genetic association studies of psychiatric outcomes often consider interactions with environmental exposures and, in particular, apply tests that jointly consider gene and gene-environment interaction effects for analysis. Using a genome-wide association study (GWAS) of posttraumatic stress disorder (PTSD), we report that heteroscedasticity (defined as variability in outcome that differs by the value of the environmental exposure) can invalidate traditional joint tests of gene and gene-environment interaction. To identify the cause of bias in traditional joint tests of gene and gene-environment interaction in a PTSD GWAS and determine whether proposed robust joint tests are insensitive to this problem. The PTSD GWAS data set consisted of 3359 individuals (978 men and 2381 women) from the Grady Trauma Project (GTP), a cohort study from Atlanta, Georgia. The GTP performed genome-wide genotyping of participants and collected environmental exposures using the Childhood Trauma Questionnaire and Trauma Experiences Inventory. We performed joint interaction testing of the Beck Depression Inventory and modified PTSD Symptom Scale in the GTP GWAS. We assessed systematic bias in our interaction analyses using quantile-quantile plots and genome-wide inflation factors. Application of the traditional joint interaction test to the GTP GWAS yielded systematic inflation across different outcomes and environmental exposures (inflation-factor estimates ranging from 1.07 to 1.21), whereas application of the robust joint test to the same data set yielded no such inflation (inflation-factor estimates ranging from 1.01 to 1.02). Simulated data further revealed that the robust joint test is valid in different heteroscedasticity models, whereas the traditional joint test is invalid. The robust joint test also has power similar to the traditional joint test when heteroscedasticity is not an issue. We believe the robust joint test should be used in candidate-gene studies and GWASs of
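    The mechanics of a joint 2-df test of gene and gene-environment effects, made robust to heteroscedasticity with a sandwich covariance, can be sketched as follows. The data are simulated and this is not the authors' implementation; it only illustrates the kind of robust Wald test the study advocates:

```python
import math
import numpy as np

# Simulate an outcome with no genetic effect but with variance that
# grows with |E| (heteroscedasticity) -- the situation that inflates
# the naive joint test.
rng = np.random.default_rng(1)
n = 2000
G = rng.binomial(2, 0.3, n)                  # genotype coded 0/1/2
E = rng.normal(0, 1, n)                      # environmental exposure
y = 0.5 * E + rng.normal(0, 1 + 0.5 * np.abs(E), n)

# Linear model: intercept, E, G, GxE
X = np.column_stack([np.ones(n), E, G, G * E])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Heteroscedasticity-consistent (HC0 "sandwich") covariance of beta
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (resid ** 2)[:, None])
cov_robust = XtX_inv @ meat @ XtX_inv

# Joint 2-df Wald test of the G and GxE coefficients
idx = [2, 3]
b = beta[idx]
V = cov_robust[np.ix_(idx, idx)]
wald = float(b @ np.linalg.solve(V, b))
p = math.exp(-wald / 2)                      # chi-square sf, closed form for df=2
print(round(wald, 2), round(p, 3))
```

    With the model-based (non-robust) covariance in place of `cov_robust`, repeated simulations of this design produce the systematic p-value inflation the abstract describes.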

  13. The utility of repeat enzyme immunoassay testing for the diagnosis of Clostridium difficile infection: A systematic review of the literature

    Directory of Open Access Journals (Sweden)

    P S Garimella

    2012-01-01

    Full Text Available Over the last 20 years, the prevalence of healthcare-associated Clostridium difficile (C. diff) disease has increased. While multiple tests are available for the diagnosis of C. diff infection, enzyme immunoassay (EIA) testing for toxin is the most used. Repeat EIA testing, although of limited utility, is common in medical practice. To assess the utility of repeat EIA testing to diagnose C. diff infections. Systematic literature review. Eligible studies performed >1 EIA test for C. diff toxin and were published in English. Electronic searches of MEDLINE and EMBASE were performed, and the bibliographies of review articles and conference abstracts were hand searched. Of 805 citations identified, 32 were reviewed in detail and nine were included in the final review. All studies except one were retrospective chart reviews. Seven studies had data on the number of participants (32,526), and the overall reporting of test setting and patient characteristics was poor. The prevalence of C. diff infection ranged from 9.1% to 18.5%. The yield of the first EIA test ranged from 8.4% to 16.6%, dropping to 1.5-4.7% with a second test. The utility of repeat testing was evident in outbreak settings, where the yield of repeat testing was 5%. Repeat C. diff testing for hospitalized patients has low clinical utility and may be considered in outbreak settings or when the pre-test probability of disease is high. Future studies should aim to identify patients with a high likelihood of disease and determine the utility of repeat testing compared with empiric treatment.
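    Why the repeat-test yield drops follows from Bayes' rule: a first negative result lowers the probability of disease going into the second test. The EIA operating characteristics below are assumed for illustration, not taken from the review:

```python
def posttest_prob_negative(prev, sens, spec):
    """Probability of disease after one negative test (Bayes' rule
    via the negative likelihood ratio)."""
    lr_neg = (1 - sens) / spec
    odds = prev / (1 - prev) * lr_neg   # post-test odds
    return odds / (1 + odds)

# Assumed values: ~85% sensitivity, ~97% specificity, and a 13%
# pre-test prevalence (within the review's reported 9.1-18.5% range).
p0 = 0.13
p1 = posttest_prob_negative(p0, 0.85, 0.97)
print(round(p1, 3))   # disease probability going into a repeat test
```

    Because the second test starts from this much lower pre-test probability, its yield is necessarily a fraction of the first test's, matching the drop the review observed.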

  14. INTRAVAL test case 1b - modelling results

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1991-07-01

    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual-porosity media models taking into account advection and dispersion in water-conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure, yielding error estimates of the parameters, correlation coefficients and also, as a measure of the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated, and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual-porosity media models fit the experimental data significantly better than the single-porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm that is responsible for the tailing part of the break-through curves. The extracted parameter values for some models, for both the linear and non-linear (Freundlich) sorption isotherms, are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
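    The Marquardt-Levenberg procedure mentioned can be sketched in a few lines: damp the Gauss-Newton normal equations and adapt the damping according to whether a step lowers the χ² merit function. The two-parameter decay model and data below are synthetic stand-ins for the report's transport models:

```python
import numpy as np

# Synthetic data from a two-parameter exponential decay with noise.
rng = np.random.default_rng(3)
t = np.linspace(0.1, 10, 50)
a_true, k_true = 2.0, 0.7
y = a_true * np.exp(-k_true * t) + rng.normal(0, 0.02, t.size)

def model(p):
    a, k = p
    return a * np.exp(-k * t)

def jacobian(p):
    a, k = p
    e = np.exp(-k * t)
    return np.column_stack([e, -a * t * e])   # d(model)/da, d(model)/dk

p = np.array([1.0, 1.0])                      # initial guess
lam = 1e-3                                    # damping parameter
chi2 = float(np.sum((y - model(p)) ** 2))
for _ in range(50):
    r = y - model(p)
    J = jacobian(p)
    A = J.T @ J + lam * np.eye(2)             # damped normal equations
    step = np.linalg.solve(A, J.T @ r)
    p_new = p + step
    chi2_new = float(np.sum((y - model(p_new)) ** 2))
    if chi2_new < chi2:                       # accept step, loosen damping
        p, chi2, lam = p_new, chi2_new, lam / 3
    else:                                     # reject step, tighten damping
        lam *= 3

print(np.round(p, 2), round(chi2, 4))         # fitted (a, k) and final chi^2
```

    The curvature matrix `J.T @ J` at the minimum also yields the parameter error estimates and correlation coefficients the report quotes, via its inverse.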

  15. Testing a Dilaton Gravity Model Using Nucleosynthesis

    International Nuclear Information System (INIS)

    Boran, S.; Kahya, E. O.

    2014-01-01

    Big bang nucleosynthesis (BBN), together with the cosmic microwave background (CMB) radiation, offers one of the strictest tests of the Λ-CDM cosmology at present. In this work, our main aim is to present the outcomes of our calculations of the primordial abundances of light elements in the context of a higher-dimensional steady-state universe model in dilaton gravity. Our results show that the abundances of the light elements (primordial D, ³He, ⁴He, T, and ⁷Li) are significantly different in some cases, and a comparison is given between a particular dilaton gravity model and Λ-CDM in light of the astrophysical observations

  16. Testing Software Development Project Productivity Model

    Science.gov (United States)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available to guide an organization in estimating software development, with existing estimation models often underestimating software development effort by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated using local data with a small sample size, with the resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and a theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. The practical implications of this study allow practitioners to concentrate on the specific constructs of interest that provide the best value for the least amount of time. This study outlines the key contributing constructs that are unique to Software Size E-SLOC, Man-hours Spent, and Quality of the Product, these being the constructs with the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for customers and suppliers. The theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  17. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  18. Experimental tests of proton spin models

    International Nuclear Information System (INIS)

    Ramsey, G.P.; Argonne National Lab., IL

    1989-01-01

    We have developed models for the spin-weighted quark and gluon distributions in a longitudinally polarized proton. The model parameters are determined from current algebra sum rules and polarized deep-inelastic scattering data. A number of different scenarios are presented for the fraction of spin carried by the constituent parton distributions. A possible long-range experimental program is suggested for measuring various hard scattering processes using polarized lepton and proton beams. With the knowledge gained from these experiments, we can begin to understand the parton contributions to the proton spin. 28 refs., 5 figs

  19. Are chiropractic tests for the lumbo-pelvic spine reliable and valid? A systematic critical literature review

    DEFF Research Database (Denmark)

    Hestbaek, L; Leboeuf-Yde, C

    2000-01-01

    OBJECTIVE: To systematically review the peer-reviewed literature about the reliability and validity of chiropractic tests used to determine the need for spinal manipulative therapy of the lumbo-pelvic spine, taking into account the quality of the studies. DATA SOURCES: The CHIROLARS database......-pelvic spine were included. DATA EXTRACTION: Data quality was assessed independently by the two reviewers, with a quality score based on predefined methodologic criteria. Results of the studies were then evaluated in relation to quality. DATA SYNTHESIS: None of the tests studied had been sufficiently...... evaluated in relation to reliability and validity. Only tests for palpation for pain had consistently acceptable results. Motion palpation of the lumbar spine might be valid but showed poor reliability, whereas motion palpation of the sacroiliac joints seemed to be slightly reliable but was not shown

  20. Port Adriano, 2D-Model tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Meinert, Palle; Andersen, Thomas Lykke

    the crown wall have been measured. The model has been subjected to irregular waves corresponding to typical conditions offshore from the intended prototype location. Characteristic situations have been video recorded. The stability of the toe has been investigated. The wave-generated forces on the caisson...

  1. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.

    2016-01-01

    ... Furthermore, Gurson-Tvergaard-Needleman model predictions from a top-down approach are employed to gain insight into the mechanisms governing crack initiation and subsequent propagation in small punch experiments. An accurate assessment of micromechanical toughness parameters from the SPT...

  2. Testing structural stability in macroeconometric models

    NARCIS (Netherlands)

    Boldea, O.; Hall, A.R.; Hashimzade, N.; Thornton, M.A.

    2013-01-01

    Since the earliest days of macroeconometric analysis, researchers have been concerned about the appropriateness of the assumption that model parameters remain constant over long periods of time; for example see Tinbergen (1939). This concern is also central to the so-called Lucas (1976) critique

  3. Model Testing - Bringing the Ocean into the Laboratory

    DEFF Research Database (Denmark)

    Aage, Christian

    2000-01-01

    Hydrodynamic model testing, the principle of bringing the ocean into the laboratory to study the behaviour of the ocean itself and the response of man-made structures in the ocean in reduced scale, has been known for centuries. Due to an insufficient understanding of the physics involved, however, the early model tests often gave incomplete or directly misleading results. This keynote lecture deals with some of the possibilities and problems within the field of hydrodynamic and hydraulic model testing.

  4. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    Directory of Open Access Journals (Sweden)

    Julia Ratter

    2014-09-01

    [Ratter J, Radlinger L, Lucas C (2014) Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review. Journal of Physiotherapy 60: 144–150]

  5. Model tests in RAMONA and NEPTUN

    International Nuclear Information System (INIS)

    Hoffmann, H.; Ehrhard, P.; Weinberg, D.; Carteciano, L.; Dres, K.; Frey, H.H.; Hayafune, H.; Hoelle, C.; Marten, K.; Rust, K.; Thomauske, K.

    1995-01-01

    In order to demonstrate passive decay heat removal (DHR) in an LMR such as the European Fast Reactor, the RAMONA and NEPTUN facilities, with water as a coolant medium, were used to measure transient flow data corresponding to a transition from forced convection (under normal operation) to natural convection under DHR conditions. The facilities were 1:20 and 1:5 models, respectively, of a pool-type reactor including the IHXs, pumps, and immersed coolers. Important results: The decay heat can be removed from all parts of the primary system by natural convection, even if the primary fluid circulation through the IHX is interrupted. This result could be transferred to liquid metal cooling by experiments in models with thermohydraulic similarity. (orig.)

  6. Tests and comparisons of gravity models.

    Science.gov (United States)

    Marsh, J. G.; Douglas, B. C.

    1971-01-01

    Optical observations of the GEOS satellites were used to obtain orbital solutions with different sets of geopotential coefficients. The solutions were compared before and after modification to high order terms (necessary because of resonance) and were then analyzed by comparing subsequent observations with predicted trajectories. The most important source of error in orbit determination and prediction for the GEOS satellites is the effect of resonance found in most published sets of geopotential coefficients. Modifications to the sets yield greatly improved orbits in most cases. The results of these comparisons suggest that with the best optical tracking systems and gravity models, satellite position error due to gravity model uncertainty can reach 50-100 m during a heavily observed 5-6 day orbital arc. If resonant coefficients are estimated, the uncertainty is reduced considerably.

  7. Systematics of β and γ parameters of O(6)-like nuclei in the interacting boson model

    International Nuclear Information System (INIS)

    Wang Baolin

    1997-01-01

    By comparing quadrupole moments between the interacting boson model (IBM) and the collective model, a simple calculation for the triaxial deformation parameters β and γ in the O(6)-like nuclei is presented, based on the intrinsic frame in the IBM. The systematics of the β and γ are studied. The realistic cases are calculated for the even-even Xe, Ba and Ce isotopes, and the smooth dependences of the strength ratios θ_3/κ and the effective charges e_2 on the proton and neutron boson numbers N_π and N_ν are discovered.

  8. Veterans' informal caregivers in the "sandwich generation": a systematic review toward a resilience model.

    Science.gov (United States)

    Smith-Osborne, Alexa; Felderhoff, Brandi

    2014-01-01

    Social work theory advanced the formulation of the construct of the sandwich generation to apply to the emerging generational cohort of caregivers, most often middle-aged women, who were caring for maturing children and aging parents simultaneously. This systematic review extends that focus by synthesizing the literature on sandwich generation caregivers for the general aging population with dementia and for veterans with dementia and polytrauma. It develops potential protective mechanisms based on empirical literature to support an intervention resilience model for social work practitioners. This theoretical model addresses adaptive coping of sandwich-generation families facing ongoing challenges related to caregiving demands.

  9. A magnetorheological actuation system: test and model

    International Nuclear Information System (INIS)

    John, Shaju; Chaudhuri, Anirban; Wereley, Norman M

    2008-01-01

    Self-contained actuation systems, based on frequency rectification of the high frequency motion of an active material, can produce high force and stroke output. Magnetorheological (MR) fluids are active fluids whose rheological properties can be altered by the application of a magnetic field. By using MR fluids as the energy transmission medium in such hybrid devices, a valving system with no moving parts can be implemented and used to control the motion of an output cylinder shaft. The MR fluid based valves are configured in the form of an H-bridge to produce bi-directional motion in an output cylinder by alternately applying magnetic fields in the two opposite arms of the bridge. The rheological properties of the MR fluid are modeled using both Bingham plastic and bi-viscous models. In this study, the primary actuation is performed using a compact terfenol-D rod driven pump and frequency rectification of the rod motion is done using passive reed valves. The pump and reed valve configuration along with MR fluidic valves form a compact hydraulic actuation system. Actuator design, analysis and experimental results are presented in this paper. A time domain model of the actuator is developed and validated using experimental data
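
The two constitutive laws named above are standard rheology and can be sketched in a few lines. A minimal illustration; the parameter values are hypothetical, not taken from the paper:

```python
import math

def bingham_stress(gamma_dot, tau_y, mu):
    """Bingham plastic: rigid below yield; above it, a yield-stress offset
    plus a Newtonian term: tau = tau_y*sign(gamma_dot) + mu*gamma_dot."""
    if gamma_dot == 0.0:
        return 0.0  # at rest the stress is indeterminate up to tau_y; take 0
    return math.copysign(tau_y, gamma_dot) + mu * gamma_dot

def biviscous_stress(gamma_dot, tau_y, mu, mu_pre):
    """Bi-viscous: a high pre-yield viscosity mu_pre below the yield shear
    rate, joining the Bingham-like branch continuously above it."""
    gamma_y = tau_y / (mu_pre - mu)  # shear rate where the branches meet
    if abs(gamma_dot) <= gamma_y:
        return mu_pre * gamma_dot
    return math.copysign(tau_y, gamma_dot) + mu * gamma_dot

# Hypothetical MR-fluid-like parameters: yield stress 30 kPa, plastic
# viscosity 0.3 Pa.s, pre-yield viscosity 300 Pa.s
tau_y, mu, mu_pre = 30e3, 0.3, 300.0
for gd in (10.0, 1000.0):
    print(gd, bingham_stress(gd, tau_y, mu), biviscous_stress(gd, tau_y, mu, mu_pre))
```

The bi-viscous form removes the Bingham model's stress discontinuity at zero shear rate, which is one reason it is often preferred when fitting pre-yield behaviour of MR fluids.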

  10. Economic Evaluations of Multicomponent Disease Management Programs with Markov Models: A Systematic Review.

    Science.gov (United States)

    Kirsch, Florian

    2016-12-01

    Disease management programs (DMPs) for chronic diseases are being increasingly implemented worldwide. The aim is to present a systematic overview of the economic effects of DMPs with Markov models. The quality of the models is assessed, the method by which the DMP intervention is incorporated into the model is examined, and the differences in the structure and data used in the models are considered. A literature search was conducted; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement was followed to ensure systematic selection of the articles. Study characteristics, e.g. results, the intensity of the DMP and usual care, model design, time horizon, discount rates, utility measures, and cost-of-illness, were extracted from the reviewed studies. Model quality was assessed by two researchers with two different appraisals: one proposed by Philips et al. (Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71) and the other proposed by Caro et al. (Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014;17:174-82). A total of 16 studies (9 on chronic heart disease, 2 on asthma, and 5 on diabetes) met the inclusion criteria. Five studies reported cost savings and 11 studies reported additional costs. In the quality assessment, the overall score of the models ranged from 39% to 65% on one appraisal and from 34% to 52% on the other. Eleven models integrated effectiveness derived from a clinical trial or a meta-analysis of complete DMPs and only five models combined intervention effects from different sources into a DMP. The main limitations of the models are bad reporting practice and the variation in the selection of input parameters.
Eleven of the 14 studies reported cost-effectiveness results of less than $30,000 per quality-adjusted life-year and
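
The Markov cohort structure underlying the reviewed models can be sketched in a few lines. The states, transition probabilities, costs and utilities below are hypothetical, purely to illustrate the mechanics:

```python
def run_markov_cohort(trans, costs, utilities, cycles, discount=0.03):
    """Discrete-time Markov cohort model: propagate state occupancy through
    the row-stochastic matrix `trans`, accumulating discounted per-cycle
    costs and QALYs."""
    n = len(trans)
    occ = [1.0] + [0.0] * (n - 1)  # whole cohort starts in state 0
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * sum(o * c for o, c in zip(occ, costs))
        total_qaly += d * sum(o * u for o, u in zip(occ, utilities))
        occ = [sum(occ[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return total_cost, total_qaly

# Hypothetical 3-state model: well, sick, dead
trans = [[0.90, 0.08, 0.02],
         [0.10, 0.80, 0.10],
         [0.00, 0.00, 1.00]]
costs = [500.0, 4000.0, 0.0]    # cost per cycle, per person
utilities = [0.95, 0.60, 0.0]   # QALY weight per cycle
cost, qaly = run_markov_cohort(trans, costs, utilities, cycles=20)
# An ICER would compare two such runs: (cost_B - cost_A) / (qaly_B - qaly_A)
print(round(cost, 2), round(qaly, 3))
```

A DMP intervention is typically represented by modifying the transition probabilities, costs, or utilities of such a model, which is exactly the incorporation step the review examines.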

  11. Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities

    Science.gov (United States)

    Richter, Hanz

    2004-01-01

    A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate whether a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion, using trial-and-error procedures that may require activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows system parameters to be set off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.

  12. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
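
The general approach described, modeling the test statistics directly and rejecting null hypotheses while the posterior expected false discovery rate stays below a target, can be sketched as follows. The two-component normal mixture and all parameter values are illustrative assumptions, not the authors' exact model:

```python
import math, random

def norm_pdf(x, mu=0.0, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bayesian_fdr_reject(z, pi0, mu_alt, alpha=0.05):
    """Posterior null probability for each statistic under a two-component
    mixture (null N(0,1), alternative N(mu_alt,1)); reject the statistics
    with the smallest posterior null probabilities while the running mean of
    those probabilities -- the Bayesian FDR of the rejection set -- stays
    at or below alpha."""
    post_null = [pi0 * norm_pdf(zi) /
                 (pi0 * norm_pdf(zi) + (1 - pi0) * norm_pdf(zi, mu_alt))
                 for zi in z]
    order = sorted(range(len(z)), key=lambda i: post_null[i])
    rejected, running = [], 0.0
    for k, i in enumerate(order, start=1):
        running += post_null[i]
        if running / k > alpha:
            break
        rejected.append(i)
    return rejected, post_null

# Toy screen: 90 null statistics plus 10 shifted ones
random.seed(0)
z = [random.gauss(0, 1) for _ in range(90)] + [random.gauss(3, 1) for _ in range(10)]
rej, post = bayesian_fdr_reject(z, pi0=0.9, mu_alt=3.0, alpha=0.1)
print(len(rej), "rejections")
```

In practice the mixture (and pi0) would be estimated from the observed statistics rather than fixed, but the thresholding logic is the same.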

  13. Factors Models of Scrum Adoption in the Software Development Process: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Marilyn Sihuay

    2018-05-01

    Full Text Available (Background) The adoption of Agile Software Development (ASD), in particular Scrum, has grown significantly since its introduction in 2001. However, in Lima, many ASD implementations have been unsuitable (incomplete or inconsistent), thus losing the benefits obtainable with this approach, and the critical success factors in this context are unknown. (Objective) To analyze the factor models used in the evaluation of the adoption of ASD, as these factor models can contribute to explaining the success or failure of these adoptions. (Method) In this study we used a systematic literature review. (Result) Ten models have been identified; their similarities and differences are presented. (Conclusion) Each model identified considers different factors; however, some factors are shared by five of these models, such as team member attributes, customer engagement, customer collaboration, experience and work environment.

  14. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    Science.gov (United States)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  15. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Full Text Available Model criticism is an important stage of model building, and goodness-of-fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure that is yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixed test and study its size and power using Monte Carlo simulations.
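
The ingredients of such a mixed statistic, residual autocorrelations, partial autocorrelations obtained via the Durbin-Levinson recursion, and a Ljung-Box-type weighting, can be sketched as follows. Summing the two components is an illustrative combination, not necessarily the exact statistic proposed in the paper:

```python
def acf(x, max_lag):
    """Sample autocorrelations r_1..r_max_lag."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / (n * c0)
            for k in range(1, max_lag + 1)]

def pacf_from_acf(r):
    """Partial autocorrelations from autocorrelations (Durbin-Levinson)."""
    m = len(r)
    phi = [[0.0] * (m + 1) for _ in range(m + 1)]
    pacf = []
    for k in range(1, m + 1):
        if k == 1:
            phi[1][1] = r[0]
        else:
            num = r[k - 1] - sum(phi[k - 1][j] * r[k - 1 - j] for j in range(1, k))
            den = 1.0 - sum(phi[k - 1][j] * r[j - 1] for j in range(1, k))
            phi[k][k] = num / den
            for j in range(1, k):
                phi[k][j] = phi[k - 1][j] - phi[k][k] * phi[k - 1][k - j]
        pacf.append(phi[k][k])
    return pacf

def ljung_box(coeffs, n):
    """Q = n(n+2) * sum_k c_k^2 / (n - k), the Ljung-Box weighting."""
    return n * (n + 2) * sum(c ** 2 / (n - (k + 1)) for k, c in enumerate(coeffs))

# Toy residual series with obvious leftover structure (strong alternation)
resid = [1.0, -1.0] * 4
r = acf(resid, 2)
p = pacf_from_acf(r)
q_mix = ljung_box(r, len(resid)) + ljung_box(p, len(resid))
print(round(q_mix, 3))
```

For well-fitted-model residuals both components should be small; large values of either flag dependence the fitted model has not explained.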

  16. Distress in unaffected individuals who decline, delay or remain ineligible for genetic testing for hereditary diseases: a systematic review.

    Science.gov (United States)

    Heiniger, Louise; Butow, Phyllis N; Price, Melanie A; Charles, Margaret

    2013-09-01

    Reviews on the psychosocial aspects of genetic testing for hereditary diseases typically focus on outcomes for carriers and non-carriers of genetic mutations. However, the majority of unaffected individuals from high-risk families do not undergo predictive testing. The aim of this review was to examine studies on psychosocial distress in unaffected individuals who delay, decline or remain ineligible for predictive genetic testing. Systematic searches of Medline, CINAHL, PsychINFO, PubMed and handsearching of related articles published between 1990 and 2012 identified 23 articles reporting 17 different studies that were reviewed and subjected to quality assessment. Findings suggest that definitions of delaying and declining are not always straightforward, and few studies have investigated psychological distress among individuals who remain ineligible for testing. Findings related to distress in delayers and decliners have been mixed, but there is evidence to suggest that cancer-related distress is lower in those who decline genetic counselling and testing, compared with testers, and that those who remain ineligible for testing experience more anxiety than tested individuals. Psychological, personality and family history vulnerability factors were identified for decliners and individuals who are ineligible for testing. The small number of studies and methodological limitations preclude definitive conclusions. Nevertheless, subgroups of those who remain untested appear to be at increased risk for psychological morbidity. As the majority of unaffected individuals do not undergo genetic testing, further research is needed to better understand the psychological impact of being denied the option of testing, declining and delaying testing. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Should we assess climate model predictions in light of severe tests?

    Science.gov (United States)

    Katzav, Joel

    2011-06-01

    According to Austro-British philosopher Karl Popper, a system of theoretical claims is scientific only if it is methodologically falsifiable, i.e., only if systematic attempts to falsify or severely test the system are being carried out [Popper, 2005, pp. 20, 62]. He holds that a test of a theoretical system is severe if and only if it is a test of the applicability of the system to a case in which the system's failure is likely in light of background knowledge, i.e., in light of scientific assumptions other than those of the system being tested [Popper, 2002, p. 150]. Popper counts the 1919 tests of general relativity's then unlikely predictions of the deflection of light in the Sun's gravitational field as severe. An implication of Popper's above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP.

  18. Measuring and modelling the effects of systematic non-adherence to mass drug administration

    Directory of Open Access Journals (Sweden)

    Louise Dyson

    2017-03-01

    Full Text Available It is well understood that the success or failure of a mass drug administration campaign critically depends on the level of coverage achieved. To that end, coverage levels are often closely scrutinised during campaigns, and the response to underperforming campaigns is to attempt to improve coverage. Modelling work has indicated, however, that the quality of the coverage achieved may also have a significant impact on the outcome. If the coverage achieved is likely to miss similar people every round, then this can have a serious detrimental effect on the campaign outcome. We begin by reviewing the current modelling descriptions of this effect and introduce a new modelling framework that can be used to simulate a given level of systematic non-adherence. We formalise the likelihood that people may miss several rounds of treatment using the correlation in the attendance of different rounds. Using two very simplified models of infection with helminths and non-helminths, respectively, we demonstrate that the modelling description used and the correlation included between treatment rounds can have a profound effect on the time to elimination of disease in a population. It is therefore clear that more detailed coverage data are required to accurately predict the time to disease elimination. We review published coverage data in which individuals are asked how many previous rounds they have attended, and show how this information may be used to assess the level of systematic non-adherence. We note that while the coverages in the data found range from 40.5% to 95.5%, the correlations found lie in a fairly narrow range (between 0.2806 and 0.5351). This indicates that the level of systematic non-adherence may be similar even in data from different years, countries, diseases and administered drugs.
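
One simple way to realize "correlation in the attendance of different rounds" is a latent per-person propensity to attend. A sketch under that assumption; the Beta propensity model is illustrative, not the paper's exact framework:

```python
import random

def simulate_attendance(n_people, n_rounds, a, b, seed=1):
    """Each person draws a latent propensity p_i ~ Beta(a, b) and attends
    each round independently with probability p_i.  The between-round
    correlation of the 0/1 attendance indicators is then
    Var(p) / (E[p](1 - E[p])) = 1 / (a + b + 1)."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p else 0 for _ in range(n_rounds)]
            for p in (rng.betavariate(a, b) for _ in range(n_people))]

def pairwise_round_correlation(att):
    """Average Pearson correlation between attendance in pairs of rounds."""
    n_rounds = len(att[0])
    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((x - mu) * (y - mv) for x, y in zip(u, v)) / n
        su = (sum((x - mu) ** 2 for x in u) / n) ** 0.5
        sv = (sum((y - mv) ** 2 for y in v) / n) ** 0.5
        return cov / (su * sv)
    cols = [[row[j] for row in att] for j in range(n_rounds)]
    pairs = [(i, j) for i in range(n_rounds) for j in range(i + 1, n_rounds)]
    return sum(corr(cols[i], cols[j]) for i, j in pairs) / len(pairs)

# Beta(2, 2): mean coverage 50%, theoretical correlation 1/(2+2+1) = 0.2
att = simulate_attendance(20000, 4, 2.0, 2.0)
print(round(pairwise_round_correlation(att), 3))
```

Zero correlation recovers random (non-systematic) coverage; as the propensity distribution spreads out, the same people are missed round after round, which is exactly the regime the paper identifies as damaging to elimination timelines.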

  19. A systematic review of qualitative findings on factors enabling and deterring uptake of HIV testing in Sub-Saharan Africa.

    Science.gov (United States)

    Musheke, Maurice; Ntalasha, Harriet; Gari, Sara; McKenzie, Oran; Bond, Virginia; Martin-Hilber, Adriane; Merten, Sonja

    2013-03-11

    Despite Sub-Saharan Africa (SSA) being the epicenter of the HIV epidemic, uptake of HIV testing is not optimal. While qualitative studies have been undertaken to investigate factors influencing uptake of HIV testing, systematic reviews to provide a more comprehensive understanding are lacking. Using Noblit and Hare's meta-ethnography method, we synthesised published qualitative research to understand factors enabling and deterring uptake of HIV testing in SSA. We identified 5,686 citations out of which 56 were selected for full text review and synthesised 42 papers from 13 countries using Malpass' notion of first-, second-, and third-order constructs. The predominant factors enabling uptake of HIV testing are deterioration of physical health and/or death of sexual partner or child. The roll-out of various HIV testing initiatives such as 'opt-out' provider-initiated HIV testing and mobile HIV testing has improved uptake of HIV testing by being conveniently available and attenuating fear of HIV-related stigma and financial costs. Other enabling factors are availability of treatment and social network influence and support. Major barriers to uptake of HIV testing comprise perceived low risk of HIV infection, perceived health workers' inability to maintain confidentiality and fear of HIV-related stigma. While the increasingly wider availability of life-saving treatment in SSA is an incentive to test, the perceived psychological burden of living with HIV inhibits uptake of HIV testing. Other barriers are direct and indirect financial costs of accessing HIV testing, and gender inequality which undermines women's decision making autonomy about HIV testing. Despite differences across SSA, the findings suggest comparable factors influencing HIV testing. Improving uptake of HIV testing requires addressing perception of low risk of HIV infection and perceived inability to live with HIV. There is also a need to continue addressing HIV-related stigma, which is intricately

  20. Upgraded Analytical Model of the Cylinder Test

    Energy Technology Data Exchange (ETDEWEB)

    Souers, P. Clark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Lauderbach, Lisa [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Garza, Raul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Ferranti, Louis [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Vitello, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center

    2013-03-15

    A Gurney-type equation was previously corrected for wall thinning and angle of tilt, and now we have added shock wave attenuation in the copper wall and air gap energy loss. Extensive calculations were undertaken to calibrate the two new energy loss mechanisms across all explosives. The corrected Gurney equation is recommended for cylinder use over the original 1943 form. The effect of these corrections is to add more energy to the adiabat values from a relative volume of 2 to 7, with low energy explosives having the largest correction. The data was pushed up to a relative volume of about 15 and the JWL parameter ω was obtained directly. Finally, the total detonation energy density was locked to the v = 7 adiabat energy density, so that the Cylinder test gives all necessary values needed to make a JWL.
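
For reference, the uncorrected 1943 Gurney relation for cylindrical geometry, which the corrections above build on, links wall velocity to the Gurney energy and the metal-to-charge mass ratio. A minimal sketch; the numerical values are illustrative, not from the paper:

```python
import math

def gurney_cylinder_velocity(sqrt_2E, m_over_c):
    """Classic Gurney relation for a cylindrical charge:
    v = sqrt(2E) * (M/C + 1/2) ** -0.5
    where sqrt(2E) is the Gurney characteristic velocity and M/C is the
    ratio of wall (metal) mass to charge mass."""
    return sqrt_2E * (m_over_c + 0.5) ** -0.5

# Illustrative numbers: sqrt(2E) ~ 2.7 km/s, copper wall with M/C = 4
v = gurney_cylinder_velocity(2.7, 4.0)
print(round(v, 3), "km/s")
```

The paper's corrections (wall thinning, tilt, shock attenuation in the wall, air-gap loss) adjust the energy inferred from measured wall velocities rather than this kinematic form itself.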

  2. Testing search strategies for systematic reviews in the Medline literature database through PubMed.

    Science.gov (United States)

    Volpato, Enilze S N; Betini, Marluci; El Dib, Regina

    2014-04-01

    A high-quality electronic search is essential in ensuring accuracy and completeness in the records retrieved for a systematic review. We analysed the available sample of search strategies to identify the best method for searching in Medline through PubMed, considering the use or not of parentheses, double quotation marks, truncation and use of a simple search or search history. In our cross-sectional study of search strategies, we selected and analysed the available searches performed during evidence-based medicine classes and in systematic reviews conducted in the Botucatu Medical School, UNESP, Brazil. We analysed 120 search strategies. With regard to the use of phrase searches with parentheses, there was no difference between the results with and without parentheses, and none between simple searches and search history tools, in 100% of the sample analysed (P = 1.0). The number of results retrieved by the searches analysed was smaller when using double quotation marks and when using truncation, compared with the standard strategy (P = 0.04 and P = 0.08, respectively). There is no need to use phrase-search parentheses to retrieve studies; however, we recommend the use of double quotation marks when an investigator attempts to retrieve articles in which a term appears exactly as proposed in the search form. Furthermore, we do not recommend the use of truncation in search strategies in Medline via PubMed. Although the results of simple searches and search history tools were the same, we recommend using the latter.

  3. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essentially important to properly evaluate its restoring force characteristics under dynamic loading condition and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain the basic information on the dynamic restoring force characteristics and damping performance of shear walls, the dynamic test using a large shaking table, static displacement control test and the pseudo-dynamic test on the models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)

  4. Eating disorders among fashion models: a systematic review of the literature.

    Science.gov (United States)

    Zancu, Simona Alexandra; Enea, Violeta

    2017-09-01

    In the light of recent concerns regarding eating disorders among fashion models and the professional regulation of fashion modelling, an examination of the scientific evidence on this issue is necessary. The article reviews findings on the prevalence of eating disorders and body image concerns among professional fashion models. A systematic literature search was conducted using the ProQuest, EBSCO, PsycINFO, SCOPUS, and Gale Cengage electronic databases. Very few studies on fashion models and eating disorders were published between 1980 and 2015, and seven articles were included in this review. Overall, the results of these studies do not indicate a higher prevalence of eating disorders among fashion models compared to non-models. Fashion models have a positive body image and generally do not report more dysfunctional eating behaviors than controls. However, fashion models are on average slightly underweight, with significantly lower BMI than controls, attach greater importance to appearance and a thin body shape, and thus have a higher prevalence of partial-syndrome eating disorders than controls. Despite public concerns, research on eating disorders among professional fashion models is extremely scarce, and the results cannot be generalized to all models. The existing research fails to clarify the matter of eating disorders among fashion models and, given the small number of studies, further research is needed.

  5. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    DEFF Research Database (Denmark)

    H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2009-01-01

    This paper details a collaboration between Aalborg University and Novo Nordiskin developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input an UML Statemachine model and generates...

  6. A Bootstrap Cointegration Rank Test for Panels of VAR Models

    DEFF Research Database (Denmark)

    Callot, Laurent

    functions of the individual Cointegrated VARs (CVAR) models. A bootstrap based procedure is used to compute empirical distributions of the trace test statistics for these individual models. From these empirical distributions two panel trace test statistics are constructed. The satisfying small sample...

  7. Tests for the Assessment of Sport-Specific Performance in Olympic Combat Sports: A Systematic Review With Practical Recommendations

    Directory of Open Access Journals (Sweden)

    Helmi Chaabene

    2018-04-01

    The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports such as amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included if they reported validation data (e.g., reliability, validity, sensitivity) for sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) had sample sizes <30 subjects. Nearly one-third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report the inclusion/exclusion criteria for their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or absent. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43–1.00). Content validity was addressed in all included studies; criterion validity (only its concurrent aspect) was addressed in approximately half of the studies, with correlation coefficients ranging from r = −0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport
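
The reliability statistic reported above (the ICC) can be computed from a subjects-by-trials score matrix. The sketch below uses the standard ICC(3,1) consistency formula (two-way mixed effects, single measurement); the helper name and the athlete data are hypothetical, and the reviewed studies may have used other ICC forms:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    ratings: (n_subjects, k_trials) array of scores."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between trials
    ss_err = ss_total - ss_rows - ss_cols                # residual
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical test-retest data for 6 athletes on a sport-specific test:
scores = [[10.2, 10.4], [12.1, 11.9], [9.8, 10.0],
          [14.3, 14.1], [11.0, 11.3], [13.2, 13.0]]
print(round(icc_3_1(scores), 3))
```

Small trial-to-trial differences relative to the between-athlete spread, as here, yield an ICC close to 1.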

  8. The Validity and Responsiveness of Isometric Lower Body Multi-Joint Tests of Muscular Strength: a Systematic Review.

    Science.gov (United States)

    Drake, David; Kennedy, Rodney; Wallace, Eric

    2017-12-01

    Researchers and practitioners working in sports medicine and science require valid tests to determine the effectiveness of interventions and to enhance understanding of the mechanisms underpinning adaptation. Such decision making is influenced by the supporting evidence describing the validity of tests within current research. The objective of this study is to review the validity of lower body isometric multi-joint tests' ability to assess muscular strength and to determine the current level of supporting evidence. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed in a systematic fashion to search, assess and synthesize the existing literature on this topic. Electronic databases such as Web of Science, CINAHL and PubMed were searched up to 18 March 2015. Potential inclusions were screened against eligibility criteria relating to type of test, measurement instrument, properties of validity assessed and population group, and were required to be published in English. The Consensus-based Standards for the Selection of health Measurement Instruments (COSMIN) checklist was used to assess the methodological quality and measurement property ratings of included studies. Studies rated as fair or better in methodological quality were included in the best evidence synthesis. Fifty-nine studies met the eligibility criteria for quality appraisal. The ten studies that rated fair or better in methodological quality were included in the best evidence synthesis. The most frequently investigated lower body isometric multi-joint tests for validity were the isometric mid-thigh pull and isometric squat. The validity of each of these tests was strong in terms of reliability and construct validity. The evidence for responsiveness of tests was found to be moderate for the isometric squat test and unknown for the isometric mid-thigh pull. No tests using the isometric leg press met the criteria for inclusion in the best evidence synthesis. Researchers and

  9. GENERATING TEST CASES FOR PLATFORM INDEPENDENT MODEL BY USING USE CASE MODEL

    OpenAIRE

    Hesham A. Hassan,; Zahraa. E. Yousif

    2010-01-01

    Model-based testing refers to testing and test case generation based on a model that describes the behavior of the system. Extensive use of models throughout all the phases of software development starting from the requirement engineering phase has led to increased importance of Model Based Testing. The OMG initiative MDA has revolutionized the way models would be used for software development. Ensuring that all user requirements are addressed in system design and the design is getting suffic...

  10. Systematic Testing should not be a Topic in the Computer Science Curriculum!

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

    In this paper we argue that treating "testing" as an isolated topic is a wrong approach in computer science and software engineering teaching. Instead testing should pervade practical topics and exercises in the computer science curriculum to teach students the importance of producing software...

  11. Guidelines for guideline developers: a systematic review of grading systems for medical tests

    NARCIS (Netherlands)

    Gopalakrishna, Gowri; Langendam, Miranda W.; Scholten, Rob J. P. M.; Bossuyt, Patrick M. M.; Leeflang, Mariska M. G.

    2013-01-01

    A variety of systems have been developed to grade evidence and develop recommendations based on the available evidence. However, development of guidelines for medical tests is especially challenging given the typical indirectness of the evidence; direct evidence of the effects of testing on patient

  12. Electroencephalographic reactivity testing in unconscious patients: a systematic review of methods and definitions

    NARCIS (Netherlands)

    Admiraal, M. M.; van Rootselaar, A.-F.; Horn, J.

    2017-01-01

    Electroencephalographic (EEG) reactivity testing is often presented as a clear-cut element of electrophysiological testing. Absence of EEG reactivity is generally considered an indicator of poor outcome, especially in patients after cardiac arrest. However, guidelines do not clearly describe how to

  13. A Comparison of Implosive Therapy and Systematic Desensitization in the Treatment of Test Anxiety

    Science.gov (United States)

    Smith, Ronald E.; Nye, S. Lee

    1973-01-01

    Both Desensitization and implosive therapy resulted in significant decreases in scores on Sarason's Test Anxiety Scale. However, the desensitization group also demonstrated a significant reduction in state anxiety assessed during simulated testing sessions and a significant increase in grade point average, while the implosive therapy group showed…

  14. Comparison of Relaxation as Self-control and Systematic Desensitization in the Treatment of Test Anxiety

    Science.gov (United States)

    Snyder, Arden L.; Deffenbacher, Jerry L.

    1977-01-01

    Relaxation as self-control and desensitization were compared to a wait-list control in the reduction of test and other anxieties. Active treatments differed significantly from the control treatment. Subjects in both treatments reported less debilitating test anxiety, whereas desensitization subjects showed greater facilitating test anxiety. (Author)

  15. The six-minute walk test in chronic pediatric conditions: a systematic review of measurement properties.

    NARCIS (Netherlands)

    Bartels, B.; Groot, J.F. de; Terwee, C.B.

    2013-01-01

    Background: The Six-Minute Walk Test (6MWT) is increasingly being used as a functional outcome measure for chronic pediatric conditions. Knowledge about its measurement properties is needed to determine whether it is an appropriate test to use. Purpose: The purpose of this study was to

  16. Conformance test development with the Java modeling language

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders P.

    2017-01-01

    In order to claim conformance with a Java Specification Request, a Java implementation has to pass all tests in an associated Technology Compatibility Kit (TCK). This paper presents a model-based development of a TCK test suite and a test execution tool for the draft Safety-Critical Java (SCJ......) profile specification. The Java Modeling Language (JML) is used to model conformance constraints for the profile. JML annotations define contracts for classes and interfaces. The annotations are translated by a tool into runtime assertion checks.Hereby the design and elaboration of the concrete test cases...

  17. Collider tests of the Renormalizable Coloron Model

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yang; Dobrescu, Bogdan A.

    2018-04-01

    The coloron, a massive version of the gluon present in gauge extensions of QCD, has been searched for at the LHC as a dijet or top quark pair resonance. We point out that in the Renormalizable Coloron Model (ReCoM) with a minimal field content to break the gauge symmetry, a color-octet scalar and a singlet scalar are naturally lighter than the coloron because they are pseudo Nambu-Goldstone bosons. Consequently, the coloron may predominantly decay into scalar pairs, leading to novel signatures at the LHC. When the color-octet scalar is lighter than the singlet, or when the singlet mass is above roughly 1 TeV, the signatures consist of multi-jet resonances of multiplicity up to 12, including topologies with multi-prong jet substructure, slightly displaced vertices, and sometimes a top quark pair. When the singlet is the lightest ReCoM boson and lighter than about 1 TeV, its main decays ($W^+W^-$, $\gamma Z$, $ZZ$) arise at three loops. The LHC signatures then involve two or four boosted electroweak bosons, often originating from highly displaced vertices, plus one or two pairs of prompt jets or top quarks.

  18. Glide back booster wind tunnel model testing

    Science.gov (United States)

    Pricop, M. V.; Cojocaru, M. G.; Stoica, C. I.; Niculescu, M. L.; Neculaescu, A. M.; Persinaru, A. G.; Boscoianu, M.

    2017-07-01

    Affordable space access requires partial or ideally full launch vehicle reuse, which is in line with clean environment requirements. Although the idea is old, its practical use is difficult, requiring very large technology investment for qualification. Rocket gliders like the Space Shuttle have been successfully operated, but the price and correspondingly the energy footprint were found not to be sustainable. For medium launchers there is finally a very promising platform in Falcon 9. For very small launchers the situation is more complex, because the performance index (payload to start mass) is already small compared with medium and heavy launchers. For partially reusable micro launchers this index is even smaller. However, the challenge has to be taken on, because it is likely that in a multiyear effort, technology will enable the performance recovery needed to make such a system economically and environmentally feasible. The current paper is devoted to a small unitary glide back booster which is foreseen to be assembled in a number of possible configurations. Although the level of analysis is not deep, the solution is analyzed from the aerodynamic point of view. A wind tunnel model is designed, with an active canard, to enable a more efficient wind tunnel campaign, as a national-level premiere.

  19. Economic contract theory tests models of mutualism.

    Science.gov (United States)

    Weyl, E Glen; Frederickson, Megan E; Yu, Douglas W; Pierce, Naomi E

    2010-09-07

    Although mutualisms are common in all ecological communities and have played key roles in the diversification of life, our current understanding of the evolution of cooperation applies mostly to social behavior within a species. A central question is whether mutualisms persist because hosts have evolved costly punishment of cheaters. Here, we use the economic theory of employment contracts to formulate and distinguish between two mechanisms that have been proposed to prevent cheating in host-symbiont mutualisms, partner fidelity feedback (PFF) and host sanctions (HS). Under PFF, positive feedback between host fitness and symbiont fitness is sufficient to prevent cheating; in contrast, HS posits the necessity of costly punishment to maintain mutualism. A coevolutionary model of mutualism finds that HS are unlikely to evolve de novo, and published data on legume-rhizobia and yucca-moth mutualisms are consistent with PFF and not with HS. Thus, in systems considered to be textbook cases of HS, we find poor support for the theory that hosts have evolved to punish cheating symbionts; instead, we show that even horizontally transmitted mutualisms can be stabilized via PFF. PFF theory may place previously underappreciated constraints on the evolution of mutualism and explain why punishment is far from ubiquitous in nature.

  20. Item Response Theory Models for Performance Decline during Testing

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
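
The decline phenomenon can be illustrated with a standard 2PL item response function plus a position-dependent ability decrement. This is a hypothetical illustration of the general idea, not the specific models the authors propose; `p_decline` and its `delta` parameter are assumptions:

```python
import math

def p_2pl(theta, a, b):
    """Standard 2PL model: probability of a correct response for
    ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_decline(theta, a, b, position, delta):
    """Illustrative decline variant: effective ability drops by delta
    per item position, so items later in the test see a lower theta."""
    return p_2pl(theta - delta * position, a, b)

theta, a, b = 0.5, 1.2, 0.0
for pos in (0, 10, 20):
    print(pos, round(p_decline(theta, a, b, pos, delta=0.05), 3))
```

With `delta = 0`, the variant reduces to the standard 2PL; with `delta > 0`, success probability falls for items toward the end of the test, which is the pattern standard IRT models cannot capture.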

  1. Testing Model with "Check Technique" for Physics Education

    Science.gov (United States)

    Demir, Cihat

    2016-01-01

    Because the number, timing and format of written tests are structured and teacher-oriented, they are considered to create fear and anxiety among students. It has therefore been found necessary and important to develop a testing model which keeps students away from test anxiety and allows them to focus only on the lesson. For this study,…

  2. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    Science.gov (United States)

    McInnes, Matthew D F; Moher, David; Thombs, Brett D; McGrath, Trevor A; Bossuyt, Patrick M; Clifford, Tammy; Cohen, Jérémie F; Deeks, Jonathan J; Gatsonis, Constantine; Hooft, Lotty; Hunt, Harriet A; Hyde, Christopher J; Korevaar, Daniël A; Leeflang, Mariska M G; Macaskill, Petra; Reitsma, Johannes B; Rodin, Rachel; Rutjes, Anne W S; Salameh, Jean-Paul; Stevens, Adrienne; Takwoingi, Yemisi; Tonelli, Marcello; Weeks, Laura; Whiting, Penny; Willis, Brian H

    2018-01-23

    Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. 
The 27-item

  3. Testing of materials and scale models for impact limiters

    International Nuclear Information System (INIS)

    Maji, A.K.; Satpathi, D.; Schryer, H.L.

    1991-01-01

    Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and the design and testing of scale models made of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper

  4. Bankruptcy risk model and empirical tests

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M.; Urošević, Branko; Stanley, H. Eugene

    2010-01-01

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor—the debt-to-asset ratio R—in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes’s theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees—although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers. PMID:20937903
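
The Zipf analysis described above can be sketched as a rank-size regression on a log-log scale. This is an illustrative reconstruction, not the authors' code; the `zipf_exponent` helper and the synthetic sample are assumptions (for a pure Pareto tail with index alpha, the rank-size slope is 1/alpha):

```python
import numpy as np

def zipf_exponent(values):
    """Estimate the Zipf exponent zeta from a positive sample by
    fitting log(value) ~ -zeta * log(rank)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]  # descending order
    ranks = np.arange(1, len(v) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(v), 1)
    return -slope

# Synthetic "debt-to-asset ratios" drawn from a classical Pareto
# distribution with tail index alpha = 2 (so zeta should be near 0.5).
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=20000) + 1.0
print(round(zipf_exponent(sample), 2))
```

Tracking this estimate over yearly windows of ratios R would show the kind of exponent shift during market crashes that the paper reports.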

  5. Systematically too low values of the cranking model collective inertia parameters

    International Nuclear Information System (INIS)

    Dudek, I.; Dudek, W.; Lukasiak-Ruchowska, E.; Skalski, I.

    1980-01-01

    Deformed Nilsson and Woods-Saxon potentials were employed to generate the single-particle states subsequently used for calculating the inertia tensor (cranking model and monopole pairing) and the collective energy surfaces (Strutinsky method). The deformation was parametrized in terms of quadrupole and hexadecapole degrees of freedom. The classical energy expression obtained from the inertia tensor and energy surfaces was quantized, and the resulting stationary Schroedinger equation was solved using an approximate method. The energies of the second I^π = 0⁺ collective levels (0₂⁺) were calculated for the rare-earth and actinide nuclei, and the results were compared with the experimental data. The vibrational level energies agree with the experimental ones much better for spherical nuclei, for both single-particle potentials; for deformed nuclei the calculated energies overestimate the experimental results by roughly a factor of two. It is argued that coupling of the axially symmetric quadrupole degrees of freedom to non-axial and hexadecapole ones does not affect the conclusions about systematically too low mass parameter values. An alternative explanation of the systematic deviations of the 0₂⁺ level energies could be a systematically too high stiffness of the energy surfaces obtained with the Strutinsky method. (orig.)

  6. Provider-initiated testing and counselling programmes in sub-Saharan Africa: a systematic review of their operational implementation.

    Science.gov (United States)

    Roura, Maria; Watson-Jones, Deborah; Kahawita, Tanya M; Ferguson, Laura; Ross, David A

    2013-02-20

    The routine offer of an HIV test during patient-provider encounters is gaining momentum within HIV treatment and prevention programmes. This review examined the operational implementation of provider-initiated testing and counselling (PITC) programmes in sub-Saharan Africa. PUBMED, EMBASE, Global Health, COCHRANE Library and JSTOR databases were searched systematically for articles published in English between January 2000 and November 2010. Grey literature was explored through the websites of international and nongovernmental organizations. Eligibility of studies was based on predetermined criteria applied during independent screening by two researchers. We retained 44 studies out of 5088 references screened. PITC policies have been effective at identifying large numbers of previously undiagnosed individuals. However, the translation of policy guidance into practice has had mixed results, and in several studies of routine programmes the proportion of patients offered an HIV test was disappointingly low. There were wide variations in the rates of acceptance of the test and poor linkage of those testing positive to follow-up assessments and antiretroviral treatment. The challenges encountered encompass a range of areas from logistics to data systems, human resources and management, reflecting some of the weaknesses of health systems in the region. The widespread adoption of PITC provides an unprecedented opportunity for identifying HIV-positive individuals who are already in contact with health services and should be accompanied by measures aimed at strengthening health systems and fostering the normalization of HIV at community level. The resources and effort needed to do this successfully should not be underestimated.

  7. The effectiveness of psychoeducation and systematic desensitization to reduce test anxiety among first-year pharmacy students.

    Science.gov (United States)

    Rajiah, Kingston; Saravanan, Coumaravelou

    2014-11-15

    To analyze the effect of psychological intervention on reducing performance anxiety and the consequences of the intervention on first-year pharmacy students. In this experimental study, 236 first-year undergraduate pharmacy students from a private university in Malaysia were approached between weeks 5 and 7 of their first semester to participate in the study. Completed responses to the Westside Test Anxiety Scale (WTAS), the Kessler Perceived Distress Scale (PDS), and the Academic Motivation Scale (AMS) were received from 225 students. Of the 225 students, 42 exhibited moderate to high test anxiety according to the WTAS (scores ranging from 30 to 39) and were randomly placed into either an experimental group (n=21) or a waiting-list control group (n=21). The prevalence of test anxiety among pharmacy students in this study was lower compared to other university students in previous studies. The present study's anxiety-management intervention of psychoeducation and systematic desensitization for test anxiety reduced lack of motivation and psychological distress and improved grade point average (GPA). Psychological intervention helped significantly reduce scores for test anxiety, psychological distress, and lack of motivation, and it helped improve students' GPA.

  8. Diagnostic Accuracy of Molecular Amplification Tests for Human African Trypanosomiasis-Systematic Review

    NARCIS (Netherlands)

    Mugasa, Claire M.; Adams, Emily R.; Boer, Kimberly R.; Dyserinck, Heleen C.; Büscher, Philippe; Schallig, Henk D. H. F.; Leeflang, Mariska M. G.

    2012-01-01

    Background: A range of molecular amplification techniques have been developed for the diagnosis of Human African Trypanosomiasis (HAT); however, careful evaluation of these tests must precede implementation to ensure their high clinical accuracy. Here, we investigated the diagnostic accuracy of

  9. Systematic Desensitization Of Test Anxiety: A Comparison Of Group And Individual Treatment

    Science.gov (United States)

    Scissons, Edward H.; Njaa, Lloyd J.

    1973-01-01

    The results indicate the effectiveness of both individual desensitization and group desensitization in the treatment of high test anxiety. More research is needed in comparing the effectiveness of group desensitization and individual desensitization with intratreatment variables. (Author)

  10. Matrix diffusion model. In situ tests using natural analogues

    Energy Technology Data Exchange (ETDEWEB)

    Rasilainen, K. [VTT Energy, Espoo (Finland)

    1997-11-01

    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories. 98 refs. The thesis includes also eight previous publications by author.

  11. Matrix diffusion model. In situ tests using natural analogues

    International Nuclear Information System (INIS)

    Rasilainen, K.

    1997-11-01

    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories

  12. Equation-free analysis of agent-based models and systematic parameter determination

    Science.gov (United States)

    Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.

    2016-12-01

    Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic and can be applied to any 'black-box' simulator, and it determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order-of-magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABM models, revealing parameter dependence, bifurcation and stability analysis of these complex systems, giving a deep understanding of the dynamical behaviour of the models in a way that is not otherwise easily obtainable.
In each case we demonstrate our systematic parameter determination stage for

  13. DKIST enclosure modeling and verification during factory assembly and testing

    Science.gov (United States)

    Larrakoetxea, Ibon; McBride, William; Marshall, Heather K.; Murga, Gaizka

    2014-08-01

    The enclosure of the Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique in that, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of the Final Design Review in January 2012, was supported by mathematical models and other analyses, which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During Enclosure Factory Assembly and Testing, compliance with the requirements was verified using the real hardware, and the models created during the design phase were revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.

  14. A systematic literature review of open source software quality assessment models.

    Science.gov (United States)

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models are proposed and available in the literature. However, there is little or no adoption of these models in practice. In order to guide the formulation of newer models so they can be acceptable to practitioners, there is a need for clear discrimination among the existing models based on their specific properties. The aim of this study is therefore to perform a systematic literature review investigating the properties of existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google were performed to retrieve all relevant primary studies. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected; to select these models, we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study reflects that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that nearly half (47%) of the existing models do not specify any domain of application. In conclusion, our study should be a valuable contribution to the community, helping quality assessment model developers formulate newer models and practitioners (software evaluators) select suitable OSS from among alternatives.

  15. A Systematic Review of the Reliability and Validity of Behavioural Tests Used to Assess Behavioural Characteristics Important in Working Dogs.

    Science.gov (United States)

    Brady, Karen; Cracknell, Nina; Zulch, Helen; Mills, Daniel Simon

    2018-01-01

    Working dogs are selected based on predictions from tests that they will be able to perform specific tasks in often challenging environments. However, withdrawal from service remains a significant problem in working dogs, bringing into question the reliability of the selection tests used to make these predictions. A systematic review was undertaken to bring together the available information on the reliability and predictive validity of behavioural assessments used with working dogs, and so establish the quality of the selection tests currently available for predicting success in working dogs. The search procedures resulted in 16 papers meeting the criteria for inclusion. A large range of behaviour tests and parameters were used in the identified papers, so behaviour tests and their underpinning constructs were grouped on the basis of their relationship with positive core affect (willingness to work, human-directed social behaviour, object-directed play tendencies) and negative core affect (human-directed aggression, approach-withdrawal tendencies, sensitivity to aversives). We then examined the papers for reports of inter-rater reliability, within-session intra-rater reliability, test-retest reliability and predictive validity. The review revealed a widespread lack of information relating to the reliability and validity of measures used to assess behaviour, and inconsistencies in terminologies, study parameters and indices of success. There is a need to standardise the reporting of these aspects of behavioural tests in order to improve the knowledge base of which characteristics are predictive of optimal performance in working dog roles, improving selection processes and reducing working dog redundancy. We suggest the use of a framework based on explaining the direct or indirect relationship of the test with core affect.

  16. Prostate specific antigen testing policy worldwide varies greatly and seems not to be in accordance with guidelines: a systematic review

    Directory of Open Access Journals (Sweden)

    Van der Meer Saskia

    2012-10-01

    Background: Prostate specific antigen (PSA) testing is widely used, but guidelines on follow-up are unclear. Methods: We performed a systematic review of the literature to determine follow-up policy after PSA testing by general practitioners (GPs) and non-urologic hospitalists, the use of a cut-off value for this policy, the reasons for repeating a PSA test after an initial normal result, the existence of a general cut-off value below which a PSA result is considered normal, and the time frame for repeating a test. Data sources: MEDLINE, Embase, PsychInfo and the Cochrane Library from January 1950 until May 2011. Study eligibility criteria: Studies describing follow-up policy by GPs or non-urologic hospitalists after a primary PSA test, written in Dutch, English, French, German, Italian or Spanish. Excluded were studies describing follow-up policy by urologists and follow-up of patients with prostate cancer. The quality of each study was structurally assessed. Results: Fifteen articles met the inclusion criteria. Three studies were of high quality. Follow-up differed greatly both after a normal and after an abnormal PSA test result. Only one study described the reasons for not performing follow-up after an abnormal PSA result. Conclusions: Based on the available literature, we cannot adequately assess physicians’ follow-up policy after a primary PSA test. Follow-up after a normal or raised PSA test by GPs and non-urologic hospitalists seems to a large extent not in accordance with the guidelines.

  17. Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review.

    Science.gov (United States)

    Pace, Romina; Pluye, Pierre; Bartlett, Gillian; Macaulay, Ann C; Salsberg, Jon; Jagosh, Justin; Seller, Robbyn

    2012-01-01

    Systematic literature reviews identify, select, appraise, and synthesize relevant literature on a particular topic. Typically, these reviews examine primary studies based on similar methods, e.g., experimental trials. In contrast, interest in a new form of review, known as mixed studies review (MSR), which includes qualitative, quantitative, and mixed methods studies, is growing. In MSRs, reviewers appraise studies that use different methods, allowing them to obtain in-depth answers to complex research questions. However, appraising the quality of studies with different methods remains challenging. To facilitate systematic MSRs, a pilot Mixed Methods Appraisal Tool (MMAT) has been developed at McGill University (a checklist and a tutorial), which can be used to concurrently appraise the methodological quality of qualitative, quantitative, and mixed methods studies. The purpose of the present study is to test the reliability and efficiency of a pilot version of the MMAT. The Center for Participatory Research at McGill conducted a systematic MSR on the benefits of Participatory Research (PR). Thirty-two PR evaluation studies were appraised by two independent reviewers using the pilot MMAT. Among these, 11 (34%) involved nurses as researchers or research partners. Appraisal time was measured to assess efficiency. Inter-rater reliability was assessed by calculating a kappa statistic based on dichotomized responses for each criterion. An appraisal score was determined for each study, which allowed the calculation of an overall intra-class correlation. On average, it took 14 min to appraise a study (excluding the initial reading of articles). Agreement between reviewers was moderate to perfect with regard to MMAT criteria, and substantial with respect to the overall quality score of appraised studies. The MMAT is unique; the reliability of this pilot version is promising and encourages further development. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. TESTING CAPM MODEL ON THE EMERGING MARKETS OF THE CENTRAL AND SOUTHEASTERN EUROPE

    Directory of Open Access Journals (Sweden)

    Josipa Džaja

    2013-02-01

    The paper examines whether the Capital Asset Pricing Model (CAPM) is adequate for capital asset valuation on the Central and South-East European emerging securities markets, using monthly stock returns for nine countries for the period January 2006 to December 2010. Specifically, it tests whether beta, as the systematic risk measure, is valid on the observed markets by analysing whether high expected returns are associated with high levels of risk, i.e. beta. The efficiency of the market indices of the observed countries is also examined.
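    The beta the abstract refers to is conventionally estimated as the OLS slope of a stock's excess returns on the market's excess returns. A minimal sketch, with invented monthly return series rather than data from the studied markets:

```python
# Hedged sketch of CAPM beta estimation: OLS slope of stock excess returns
# on market excess returns. The return series are made up for illustration.

def beta(stock_excess, market_excess):
    """OLS slope: cov(stock, market) / var(market)."""
    n = len(market_excess)
    mx = sum(market_excess) / n
    my = sum(stock_excess) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(market_excess, stock_excess))
    var = sum((x - mx) ** 2 for x in market_excess)
    return cov / var

market = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01]   # hypothetical index returns
stock  = [0.03, -0.02, 0.05, 0.01, -0.04, 0.02]   # hypothetical stock returns
print(round(beta(stock, market), 2))  # → 1.74, i.e. riskier than the market
```

    A beta above 1 marks the stock as more volatile than the market; the paper's test then asks whether such higher-beta stocks actually earned higher average returns on the observed markets.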

  19. Horizontal crash testing and analysis of model flatrols

    International Nuclear Information System (INIS)

    Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    To assess the behaviour of a full-scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale-model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full-scale demonstration. (author)

  20. Geometrical optics modeling of the grating-slit test.

    Science.gov (United States)

    Liang, Chao-Wen; Sasian, Jose

    2007-02-19

    A novel optical testing method termed the grating-slit test is discussed. This test uses a grating and a slit, as in the Ronchi test, but the grating-slit test is different in that the grating is used as the incoherent illuminating object instead of the spatial filter. The slit is located at the plane of the image of a sinusoidal intensity grating. An insightful geometrical-optics model for the grating-slit test is presented and the fringe contrast ratio with respect to the slit width and object-grating period is obtained. The concept of spatial bucket integration is used to obtain the fringe contrast ratio.

  1. Peak Vertical Ground Reaction Force during Two-Leg Landing: A Systematic Review and Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Wenxin Niu

    2014-01-01

    Objectives: (1) To systematically review peak vertical ground reaction force (PvGRF) during two-leg drop landing from specific drop heights (DH), (2) to construct a mathematical model describing correlations between PvGRF and DH, and (3) to analyze the effects of some factors on the pooled PvGRF regardless of DH. Methods: A computerized bibliographical search was conducted to extract PvGRF data on a single foot when participants landed with both feet from various DHs. An innovative mathematical model was constructed to analyze the effects of gender, landing type, shoes, ankle stabilizers, surface stiffness and sample frequency on PvGRF based on the pooled data. Results: Pooled PvGRF and DH data from 26 articles showed that a square-root function fits their relationship well. An experimental validation was also done on the regression equation for the medium frequency. The PvGRF was not significantly affected by surface stiffness, but was significantly higher in men than women, in platform than suspended landing, in the barefoot than the shod condition, with ankle stabilizer than control condition, and at higher than lower sample frequencies. Conclusions: The PvGRF and the square root of DH showed a linear relationship. The mathematical modeling method combined with systematic review is helpful for analyzing influence factors during landing movement without considering DH.
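    The square-root relationship the abstract describes reduces to ordinary linear regression once drop height is transformed: fitting PvGRF = a·√DH + b is a straight-line fit in x = √DH. A sketch under that assumption, with synthetic (DH, PvGRF) pairs rather than the paper's pooled data:

```python
import math

# Illustrative sketch of the square-root model: fit PvGRF = a*sqrt(DH) + b
# by ordinary least squares after substituting x = sqrt(DH).
# The data points are synthetic, generated from a = 0.25, b = 0.5.

def fit_sqrt_model(dh, pvgrf):
    """Least-squares fit of PvGRF = a*sqrt(DH) + b; returns (a, b)."""
    xs = [math.sqrt(d) for d in dh]
    n = len(xs)
    mx, my = sum(xs) / n, sum(pvgrf) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, pvgrf))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

drop_heights = [20, 30, 40, 60, 80]                              # cm
peak_forces = [0.25 * math.sqrt(d) + 0.5 for d in drop_heights]  # body weights
a, b = fit_sqrt_model(drop_heights, peak_forces)
print(round(a, 3), round(b, 3))  # recovers 0.25 and 0.5 exactly
```

    Because the synthetic data lie exactly on the model, the fit recovers the generating coefficients; with real pooled data the same transform yields the linear PvGRF-vs-√DH relationship the conclusions describe.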

  2. Systematic problems with using dark matter simulations to model stellar halos

    International Nuclear Information System (INIS)

    Bailin, Jeremy; Bell, Eric F.; Valluri, Monica; Stinson, Greg S.; Debattista, Victor P.; Couchman, H. M. P.; Wadsley, James

    2014-01-01

    The limits of available computing power have forced models for the structure of stellar halos to adopt one or both of the following simplifying assumptions: (1) stellar mass can be 'painted' onto dark matter (DM) particles in progenitor satellites; (2) pure DM simulations that do not form a luminous galaxy can be used. We estimate the magnitude of the systematic errors introduced by these assumptions using a controlled set of stellar halo models where we independently vary whether we look at star particles or painted DM particles, and whether we use a simulation in which a baryonic disk galaxy forms or a matching pure DM simulation that does not form a baryonic disk. We find that the 'painting' simplification reduces the halo concentration and internal structure, predominantly because painted DM particles have different kinematics from star particles even when both are buried deep in the potential well of the satellite. The simplification of using pure DM simulations reduces the concentration further, but increases the internal structure, and results in a more prolate stellar halo. These differences can be a factor of 1.5-7 in concentration (as measured by the half-mass radius) and 2-7 in internal density structure. Given this level of systematic uncertainty, one should be wary of overinterpreting differences between observations and the current generation of stellar halo models based on DM-only simulations when such differences are less than an order of magnitude.

  3. Conducting field studies for testing pesticide leaching models

    Science.gov (United States)

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.

  4. RELAP5 kinetics model development for the Advanced Test Reactor

    International Nuclear Information System (INIS)

    Judd, J.L.; Terry, W.K.

    1990-01-01

    A point-kinetics model of the Advanced Test Reactor has been developed for the RELAP5 code. Reactivity feedback parameters were calculated by a three-dimensional analysis with the PDQ neutron diffusion code. Analyses of several hypothetical reactivity insertion events by the new model and two earlier models are discussed. 3 refs., 10 figs., 6 tabs
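    A point-kinetics model of the kind the record describes couples neutron population n with delayed-neutron precursor concentration C. A minimal one-delayed-group sketch is below; the parameter values (β, Λ, λ) and the reactivity insertion are generic illustrative numbers, not ATR data, and the explicit Euler integrator is chosen only for brevity.

```python
# Minimal one-delayed-group point-kinetics sketch (illustrative parameters):
#   dn/dt = ((rho - beta) / Lambda) * n + lam * C
#   dC/dt = (beta / Lambda) * n - lam * C

def step(n, C, rho, beta=0.0065, Lambda=1e-4, lam=0.08, dt=1e-5):
    """One explicit-Euler step of the point-kinetics equations."""
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    return n + dt * dn, C + dt * dC

# Start at equilibrium (rho = 0), where C0 = beta * n0 / (Lambda * lam):
n, C = 1.0, 0.0065 * 1.0 / (1e-4 * 0.08)
for _ in range(100_000):          # 1 s of simulated time
    n, C = step(n, C, rho=0.001)  # small positive reactivity insertion
print(round(n, 3))                # power rises above 1.0 after the insertion
```

    With ρ below β the insertion stays sub-prompt-critical: the power jumps promptly to roughly β/(β−ρ) of its initial value and then rises slowly on the delayed-neutron timescale, which is the qualitative behaviour a reactivity-insertion analysis of this type examines.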

  5. Tests of the single-pion exchange model

    International Nuclear Information System (INIS)

    Treiman, S.B.; Yang, C.N.

    1983-01-01

    The single-pion exchange model (SPEM) of high-energy particle reactions provides an attractively simple picture of seemingly complex processes and has accordingly been much discussed in recent times. The purpose of this note is to call attention to the possibility of subjecting the model to certain tests precisely in the domain where the model stands the best chance of making sense

  6. A Dutch test with the NewProd-model

    NARCIS (Netherlands)

    Bronnenberg, J.J.A.M.; van Engelen, M.L.

    1988-01-01

    The paper contains a report of a test of Cooper's NewProd model for predicting success and failure of product development projects. Based on Canadian data, the model has been shown to make predictions which are 84% correct. Having reservations on the reliability and validity of the model on

  7. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-17

    motor speed can be either positive or negative dependent upon the propelling or regenerative braking scenario. The simulation provides three...the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking... SYSTEMS ENGINEERING AND TECHNOLOGY SYMPOSIUM MODELING & SIMULATION, TESTING AND VALIDATION (MSTV) MINI-SYMPOSIUM AUGUST 9-11 DEARBORN, MICHIGAN

  8. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

    Dengue is a vector-borne disease recognized as the major arbovirosis, with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single-serotype and multi-serotype models. The multi-serotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant to advise health policy in the future. The limited understanding of the factors which influence dengue transmission, as well as limited data availability, remain important concerns when applying dengue models to real-world decision problems.
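    The simplest member of the vector-host family the review categorises is a single-serotype compartmental model in which hosts move through susceptible/infectious/recovered states and vectors through susceptible/infectious states. A deliberately small sketch follows; every rate and initial condition is invented for illustration and is not a fitted dengue estimate.

```python
# Hedged sketch of a single-serotype, vector-host transmission model.
# All parameter values and initial conditions are invented.

def simulate(days=200, dt=0.1):
    # Host compartments (fractions): susceptible, infectious, recovered.
    Sh, Ih, Rh = 0.999, 0.001, 0.0
    # Vector compartments (fractions): susceptible, infectious.
    Sv, Iv = 1.0, 0.0
    bh = 0.30      # transmission rate, infectious vector -> susceptible host
    bv = 0.25      # transmission rate, infectious host -> susceptible vector
    gamma = 1 / 7  # host recovery rate (1 / infectious period, days)
    mu = 1 / 14    # vector mortality, balanced by susceptible recruitment
    for _ in range(int(round(days / dt))):
        new_h = bh * Iv * Sh   # new host infections per day
        new_v = bv * Ih * Sv   # new vector infections per day
        dSh, dIh, dRh = -new_h, new_h - gamma * Ih, gamma * Ih
        dSv, dIv = mu - mu * Sv - new_v, new_v - mu * Iv
        Sh, Ih, Rh = Sh + dt * dSh, Ih + dt * dIh, Rh + dt * dRh
        Sv, Iv = Sv + dt * dSv, Iv + dt * dIv
    return Sh, Ih, Rh, Iv

Sh, Ih, Rh, Iv = simulate()
print(round(Rh, 3))  # cumulative fraction of hosts infected in the outbreak
```

    The direct host-to-host variants the review also discusses collapse the two vector equations into a single effective transmission term, which is exactly why the review argues such variants cannot project vector-control interventions.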

  9. Systematic model for lean product development implementation in an automotive related company

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-07-01

    Lean product development is a major innovative business strategy that employs sets of practices to achieve an efficient, innovative and sustainable product development. Despite the many benefits of, and high hopes for, the lean strategy, many companies are still struggling, and unable either to achieve or to sustain substantial positive results with their lean implementation efforts. As a first step towards addressing this issue, this paper proposes a systematic model that considers the administrative and implementation limitations of lean thinking practices in the product development process. The model, which integrates fuzzy Shannon’s entropy with the Modified Technique for Order Preference by Similarity to the Ideal Solution (M-TOPSIS) to rank lean product development practices against criteria including management and leadership, financial capabilities, skills and expertise, and organizational culture, provides a guide or roadmap for product development managers on the lean implementation route.
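    The two ingredients the model combines can be sketched in their crisp, classical forms: Shannon-entropy weighting derives objective criterion weights from the spread of scores, and TOPSIS ranks alternatives by closeness to an ideal solution. Note the paper itself uses the fuzzy entropy and modified M-TOPSIS variants; the readiness scores below are invented for illustration.

```python
import math

# Crisp sketch of entropy weighting + classical TOPSIS (the paper uses the
# fuzzy and M-TOPSIS variants). Rows = alternatives, columns = criteria,
# all criteria treated as benefit criteria. Scores are invented.

def entropy_weights(matrix):
    """Shannon-entropy objective weights, one per criterion (column)."""
    m, n = len(matrix), len(matrix[0])
    k = 1 / math.log(m)
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1 - e)   # degree of divergence per criterion
    s = sum(divergences)
    return [d / s for d in divergences]

def topsis(matrix, weights):
    """Closeness of each alternative to the ideal solution, in [0, 1]."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Invented readiness scores for 3 practices on 4 criteria
# (management, finance, skills, culture):
X = [[7, 9, 6, 8],
     [8, 6, 7, 5],
     [6, 8, 9, 7]]
w = entropy_weights(X)
scores = topsis(X, w)
print(max(range(len(scores)), key=scores.__getitem__))  # index of top practice
```

    The entropy step removes the need for subjective weight elicitation, which fits the paper's aim of giving managers a repeatable prioritisation procedure.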

  10. Development of dynamic Bayesian models for web application test management

    Science.gov (United States)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool for modelling complex stochastic dynamic processes. The research shows that models and methods based on dynamic Bayesian networks cover a wide range of the stochastic tasks that arise in error testing of multiuser software products operated in a dynamically changing environment. A formalized representation of the discrete test process as a dynamic Bayesian model organizes the logical connections between individual test assets across multiple time slices. This approach presents testing as a discrete process with defined structural components responsible for generating test assets. Dynamic Bayesian network-based models make it possible to combine, in one management area, individual units and testing components that have different functionalities and directly influence each other during comprehensive testing of various groups of software defects. Applying the proposed models provides a consistent approach to formalizing test principles and procedures, to treating situational error signs, and to producing analytical conclusions based on test results.

  11. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. This review identified prediction models with sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Intra- and inter-rater reliability of movement and palpation tests in patients with neck pain: A systematic review.

    Science.gov (United States)

    Jonsson, Anders; Rasmussen-Barr, Eva

    2018-03-01

    Neck pain is common and often becomes chronic. Various clinical tests of the cervical spine are used to direct and evaluate treatment. This systematic review aimed to identify studies examining the intra- and/or inter-rater reliability of tests used in the clinical examination of patients with neck pain. A database search up to April 2016 was conducted in PubMed, CINAHL, and AMED. The Quality Appraisal of Reliability Studies Checklist (QAREL) was used to assess risk of bias. Eleven studies were included, comprising tests of active and passive movement and pain in participants with ongoing neck pain. One study was assessed with a low risk of bias and three with medium risk, while the rest were assessed with high risk of bias. The results showed differing reliabilities for the included tests, ranging from poor to almost perfect. In conclusion, tests of active movement and pain for pain or mobility overall presented acceptable to very good reliability (Kappa > 0.40), while passive intervertebral tests had lower Kappa values, suggesting poor reliability. It may be a coincidence that the studies indicating very good reliability tended to be of higher quality (low to moderate risk of bias), while studies finding poor reliability tended to be of lower quality (high risk of bias). Regardless, the current recommendation from this review would be the clinical use of tests with acceptable reliability, avoiding tests that have been shown not to be reliable. Finally, it is critical that all future reliability studies are of higher quality with low risk of bias.

  13. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part I. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    In the field of diagnostic test accuracy (DTA), the use of systematic review and meta-analyses is steadily increasing. By means of objective evaluation of all available primary studies, these two processes generate an evidence-based systematic summary regarding a specific research topic. The methodology for systematic review and meta-analysis in DTA studies differs from that in therapeutic/interventional studies, and its content is still evolving. Here we review the overall process from a practical standpoint, which may serve as a reference for those who implement these methods

  14. Using Virtual ATE Model to Migrate Test Programs

    Institute of Scientific and Technical Information of China (English)

    王晓明; 杨乔林

    1995-01-01

    Because of the high development costs of IC (integrated circuit) test programs, recycling existing test programs from one kind of ATE (automatic test equipment) to another, or generating them directly from CAD simulation modules, is increasingly valuable. In this paper, a new approach to migrating test programs is presented. A virtual ATE model based on the object-oriented paradigm is developed; it runs Test C++ (an intermediate test control language) programs and TeIF (Test Intermediate Format, an intermediate pattern format), migrates test programs among three kinds of ATE (Ando DIC8032, Schlumberger S15 and GenRad 1732), and generates test patterns from two kinds of CAD tools (Daisy and Panda) automatically.

  15. Diagnostic accuracy of tests to detect hepatitis B surface antigen: a systematic review of the literature and meta-analysis

    Directory of Open Access Journals (Sweden)

    Ali Amini

    2017-11-01

    Background: Chronic hepatitis B virus (HBV) infection is characterised by the persistence of hepatitis B surface antigen (HBsAg). Expanding HBV diagnosis and treatment programmes into low-resource settings will require high-quality but inexpensive rapid diagnostic tests (RDTs) in addition to laboratory-based enzyme immunoassays (EIAs) to detect HBsAg. The purpose of this review is to assess the clinical accuracy of available diagnostic tests to detect HBsAg, to inform recommendations on testing strategies in the 2017 WHO hepatitis testing guidelines. Methods: The systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines using 9 databases. Two reviewers independently extracted data according to a pre-specified plan and evaluated study quality. Meta-analysis was performed. HBsAg diagnostic accuracy of RDTs was compared to enzyme immunoassay (EIA) and nucleic acid test (NAT) reference standards. Subanalyses were performed to determine accuracy among brands, HIV status and specimen type. Results: Of the 40 studies that met the inclusion criteria, 33 compared RDTs and/or EIAs against EIAs and 7 against NATs as reference standards. Thirty studies assessed the diagnostic accuracy of 33 brands of RDTs in 23,716 individuals from 23 countries using EIA as the reference standard. The pooled sensitivity and specificity were 90.0% (95% CI: 89.1, 90.8) and 99.5% (95% CI: 99.4, 99.5), respectively, but accuracy varied widely among brands. Accuracy did not differ significantly whether serum, plasma, venous or capillary whole blood was used. Pooled sensitivity of RDTs in 5 studies of HIV-positive persons was lower, at 72.3% (95% CI: 67.9, 76.4), than in HIV-negative persons, but specificity remained high.
    Five studies evaluated 8 EIAs against a chemiluminescence immunoassay reference standard, with a pooled sensitivity and specificity of 88.9% (95% CI: 87.0, 90.6) and
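    One common way such pooled sensitivities are produced is fixed-effect inverse-variance pooling of logit-transformed per-study estimates. The sketch below uses that simple approach with made-up study counts; the paper's actual meta-analysis is more elaborate (e.g. random-effects or bivariate models that pool sensitivity and specificity jointly).

```python
import math

# Illustrative sketch: fixed-effect inverse-variance pooling of
# logit-transformed sensitivities. The (TP, FN) counts are hypothetical.

def pooled_sensitivity(tp_fn_pairs):
    """Pool per-study sensitivities on the logit scale; return as a proportion."""
    logits, weights = [], []
    for tp, fn in tp_fn_pairs:
        tp, fn = tp + 0.5, fn + 0.5            # continuity correction
        logits.append(math.log(tp / fn))       # logit of sensitivity
        weights.append(1 / (1 / tp + 1 / fn))  # inverse of the logit variance
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))   # back-transform to [0, 1]

# Hypothetical (true positives, false negatives) from four RDT studies:
studies = [(90, 10), (180, 15), (45, 8), (300, 30)]
print(round(pooled_sensitivity(studies), 3))
```

    Larger studies carry more weight through the smaller variance of their logit estimates, which is why the pooled figure sits closest to the sensitivities of the biggest studies.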

  16. Supervised and Unsupervised Self-Testing for HIV in High- and Low-Risk Populations: A Systematic Review

    Science.gov (United States)

    Pant Pai, Nitika; Sharma, Jigyasa; Shivkumar, Sushmita; Pillay, Sabrina; Vadnais, Caroline; Joseph, Lawrence; Dheda, Keertan; Peeling, Rosanna W.

    2013-01-01

    Background Stigma, discrimination, lack of privacy, and long waiting times partly explain why six out of ten individuals living with HIV do not access facility-based testing. By circumventing these barriers, self-testing offers potential for more people to know their sero-status. Recent approval of an in-home HIV self test in the US has sparked self-testing initiatives, yet data on acceptability, feasibility, and linkages to care are limited. We systematically reviewed evidence on supervised (self-testing and counselling aided by a health care professional) and unsupervised (performed by self-tester with access to phone/internet counselling) self-testing strategies. Methods and Findings Seven databases (Medline [via PubMed], Biosis, PsycINFO, Cinahl, African Medicus, LILACS, and EMBASE) and conference abstracts of six major HIV/sexually transmitted infections conferences were searched from 1st January 2000–30th October 2012. 1,221 citations were identified and 21 studies included for review. Seven studies evaluated an unsupervised strategy and 14 evaluated a supervised strategy. For both strategies, data on acceptability (range: 74%–96%), preference (range: 61%–91%), and partner self-testing (range: 80%–97%) were high. A high specificity (range: 99.8%–100%) was observed for both strategies, while a lower sensitivity was reported in the unsupervised (range: 92.9%–100%; one study) versus supervised (range: 97.4%–97.9%; three studies) strategy. Regarding feasibility of linkage to counselling and care, 96% (n = 102/106) of individuals testing positive for HIV stated they would seek post-test counselling (unsupervised strategy, one study). No extreme adverse events were noted. The majority of data (n = 11,019/12,402 individuals, 89%) were from high-income settings and 71% (n = 15/21) of studies were cross-sectional in design, thus limiting our analysis. Conclusions Both supervised and unsupervised testing strategies were highly acceptable

  17. Diagnostic testing for celiac disease among patients with abdominal symptoms: a systematic review

    NARCIS (Netherlands)

    van der Windt, D.A.W.M.; Jellema, A.P.; Mulder, C.J.J.; Kneepkens, C.M.F.; van der Horst, H.E.

    2010-01-01

    Context: The symptoms and consequences of celiac disease usually resolve with a lifelong gluten-free diet. However, clinical presentation is variable, and most patients presenting with abdominal symptoms in primary care will not have celiac disease, and unnecessary diagnostic testing should be

  18. Diagnostic testing for celiac disease among patients with abdominal symptoms: a systematic review

    NARCIS (Netherlands)

    van der Windt, Daniëlle A. W. M.; Jellema, Petra; Mulder, Chris J.; Kneepkens, C. M. Frank; van der Horst, Henriëtte E.

    2010-01-01

    The symptoms and consequences of celiac disease usually resolve with a lifelong gluten-free diet. However, clinical presentation is variable, and most patients presenting with abdominal symptoms in primary care will not have celiac disease, and unnecessary diagnostic testing should be avoided. To

  19. Value of physical tests in diagnosing cervical radiculopathy: a systematic review

    NARCIS (Netherlands)

    Thoomes, Erik J; van Geest, Sarita; van der Windt, Danielle A; Falla, Deborah; Verhagen, Arianne P; Koes, Bart W; Thoomes-de Graaf, Marloes; Kuijper, Barbara; Scholten-Peeters, Wendy Gm; Vleggeert-Lankamp, Carmen L

    Background context: In clinical practice, the diagnosis of cervical radiculopathy is based on information from the patient history, physical examination and diagnostic imaging. Various physical tests may be performed, but their diagnostic accuracy is unknown. Purpose: To summarize and update the

  20. Towards universal voluntary HIV testing and counselling: a systematic review and meta-analysis of community-based approaches.

    Directory of Open Access Journals (Sweden)

    Amitabh B Suthar

    2013-08-01

    BACKGROUND: Effective national and global HIV responses require a significant expansion of HIV testing and counselling (HTC) to expand access to prevention and care. Facility-based HTC, while essential, is unlikely to meet national and global targets on its own. This article systematically reviews the evidence for community-based HTC. METHODS AND FINDINGS: PubMed was searched on 4 March 2013, clinical trial registries were searched on 3 September 2012, and Embase and the World Health Organization Global Index Medicus were searched on 10 April 2012 for studies including community-based HTC (i.e., HTC outside of health facilities). Randomised controlled trials and observational studies were eligible if they included a community-based testing approach and reported one or more of the following outcomes: uptake, proportion receiving their first HIV test, CD4 value at diagnosis, linkage to care, HIV positivity rate, HTC coverage, HIV incidence, or cost per person tested (outcomes are defined fully in the text). The following community-based HTC approaches were reviewed: (1) door-to-door testing (systematically offering HTC to homes in a catchment area), (2) mobile testing for the general population (offering HTC via a mobile HTC service), (3) index testing (offering HTC to household members of people with HIV and persons who may have been exposed to HIV), (4) mobile testing for men who have sex with men, (5) mobile testing for people who inject drugs, (6) mobile testing for female sex workers, (7) mobile testing for adolescents, (8) self-testing, (9) workplace HTC, (10) church-based HTC, and (11) school-based HTC. The Newcastle-Ottawa Quality Assessment Scale and the Cochrane Collaboration's "risk of bias" tool were used to assess the risk of bias in studies with a comparator arm included in pooled estimates. 117 studies, including 864,651 participants completing HTC, met the inclusion criteria. The percentage of people offered community-based HTC who accepted HTC

  1. Transition between process models (BPMN) and service models (WS-BPEL and other standards): A systematic review

    Directory of Open Access Journals (Sweden)

    Marko Jurišić

    2011-12-01

    BPMN and BPEL have become de facto standards for the modelling of business processes and the implementation of business processes via Web services. There is a quintessential problem of discrepancy between these two approaches, as they are applied in different phases of the lifecycle and their fundamental concepts are different: BPMN is a graph-based language, while BPEL is basically a block-based programming language. This paper shows the basic concepts and gives an overview of research and ideas which emerged during the last two years, presents the state of the art, and outlines possible future research directions. A systematic literature review was performed and a critical review was given regarding the potential of the given solutions.

  2. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test
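The martingale residuals mentioned above are straightforward to compute in the simplest (non-Bayesian) case. The sketch below uses a constant-hazard exponential model rather than the semi-parametric Cox/AFT models of the paper; the times and event indicators are invented for illustration:

```python
def exponential_martingale_residuals(times, events):
    """Martingale residuals M_i = delta_i - Lambda_hat(t_i) under a
    constant-hazard (exponential) model, where Lambda_hat(t) = lambda_hat * t
    and lambda_hat is the usual MLE: total events / total time at risk."""
    lam = sum(events) / sum(times)
    return [d - lam * t for t, d in zip(times, events)]

times = [2.0, 3.5, 1.2, 6.0, 4.4]   # observed times (failure or censoring)
events = [1, 0, 1, 1, 0]            # 1 = failure observed, 0 = censored
res = exponential_martingale_residuals(times, events)
print([round(r, 3) for r in res])
# Under the MLE the residuals sum to zero by construction
print(round(sum(res), 10))
```

Large negative residuals flag observations that survived much longer than the fitted intensity predicts, which is the basic diagnostic idea behind the goodness-of-fit tests the paper builds on.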

  3. A Systematic Review of Cost-Effectiveness Models in Type 1 Diabetes Mellitus.

    Science.gov (United States)

    Henriksson, Martin; Jindal, Ramandeep; Sternhufvud, Catarina; Bergenheim, Klas; Sörstadius, Elisabeth; Willis, Michael

    2016-06-01

    Critiques of cost-effectiveness modelling in type 1 diabetes mellitus (T1DM) are scarce and are often undertaken in combination with type 2 diabetes mellitus (T2DM) models. However, T1DM is a separate disease, and it is therefore important to appraise modelling methods in T1DM. This review identified published economic models in T1DM and provided an overview of the characteristics and capabilities of available models, thus enabling a discussion of best-practice modelling approaches in T1DM. A systematic review of Embase®, MEDLINE®, MEDLINE® In-Process, and NHS EED was conducted to identify available models in T1DM. Key conferences and health technology assessment (HTA) websites were also reviewed. The characteristics of each model (e.g. model structure, simulation method, handling of uncertainty, incorporation of treatment effect, data for risk equations, and validation procedures, based on information in the primary publication) were extracted, with a focus on model capabilities. We identified 13 unique models. Overall, the included studies varied greatly in scope as well as in the quality and quantity of information reported, but six of the models (Archimedes, CDM [Core Diabetes Model], CRC DES [Cardiff Research Consortium Discrete Event Simulation], DCCT [Diabetes Control and Complications Trial], Sheffield, and EAGLE [Economic Assessment of Glycaemic control and Long-term Effects of diabetes]) were the most rigorous and thoroughly reported. Most models were Markov based, and cohort and microsimulation methods were equally common. All of the more comprehensive models employed microsimulation methods. Model structure varied widely, with the more holistic models providing a comprehensive approach to microvascular and macrovascular events, as well as including adverse events. The majority of studies reported a lifetime horizon, used a payer perspective, and had the capability for sensitivity analysis. Several models have been developed that provide useful
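Most of the models reviewed are Markov based; the cohort variant can be sketched in a few lines. The states, transition probabilities, costs, and utilities below are invented for illustration and do not come from any of the 13 reviewed models:

```python
# States: 0 = no complication, 1 = complication, 2 = dead (absorbing).
# Annual transition probabilities (each row sums to 1); all values assumed.
P = [
    [0.90, 0.07, 0.03],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
]
utilities = [0.85, 0.60, 0.0]   # QALY weight per state per cycle (assumed)
costs = [500.0, 4000.0, 0.0]    # cost per state per cycle (assumed)
discount = 0.03                  # annual discount rate

def run_cohort(P, utilities, costs, discount, cycles=50):
    """Markov cohort simulation: accumulate discounted QALYs and costs."""
    state = [1.0, 0.0, 0.0]      # whole cohort starts complication-free
    total_qaly = total_cost = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + discount) ** t
        total_qaly += df * sum(s * u for s, u in zip(state, utilities))
        total_cost += df * sum(s * c for s, c in zip(state, costs))
        # One matrix-vector step: new occupancy = state . P
        state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total_qaly, total_cost

qaly, cost = run_cohort(P, utilities, costs, discount)
print(f"discounted QALYs: {qaly:.2f}, discounted cost: {cost:.0f}")
```

The microsimulation methods favoured by the more comprehensive models replace the cohort vector with individually simulated patients, which allows patient-level history to drive transitions.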

  4. A new fit-for-purpose model testing framework: Decision Crash Tests

    Science.gov (United States)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have noted that a good standard framework for model testing, the Klemeš Crash Tests (KCTs), which are the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) renamed as KCTs, has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing whether the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests, or DCTs. The key DCT elements are that (i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions, and (ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model-building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or do not upgrade the existing flood control structure) under two different sets of model building
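The core DCT idea, scoring a model by whether it leads to the correct management decision rather than by simulation error alone, can be illustrated with a toy version of the binary flood-control scenario. All names and numbers below are hypothetical, not taken from the Grand River case study:

```python
# Hypothetical illustration of the DCT concept: the pass/fail criterion is
# whether each model-building variant yields the SAME decision a perfect
# model would. All figures are invented for illustration.
DESIGN_CAPACITY = 850.0   # m^3/s the existing structure can pass (assumed)

def decision(peak_flow_estimate):
    """Map a model output (design flood peak) to a binary decision."""
    return "upgrade" if peak_flow_estimate > DESIGN_CAPACITY else "do not upgrade"

# Peak-flow estimates from several model-building variants (discretization,
# calibration choices), plus a reference "truth" value.
reference_peak = 900.0
variants = {"coarse grid": 780.0, "fine grid": 910.0, "alt calibration": 870.0}

correct = decision(reference_peak)
for name, peak in variants.items():
    verdict = "pass" if decision(peak) == correct else "fail"
    print(f"{name}: decision={decision(peak)!r} -> {verdict}")
```

Note that a variant can have a modest flow error yet still fail the crash test, and vice versa: the decision threshold, not the simulation residual, determines suitability.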

  5. What Makes Hydrologic Models Differ? Using SUMMA to Systematically Explore Model Uncertainty and Error

    Science.gov (United States)

    Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.

    2017-12-01

    Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of

  6. A systematic review of models to predict recruitment to multicentre clinical trials

    Directory of Open Access Journals (Sweden)

    Cook Andrew

    2010-07-01

    Abstract Background Less than one-third of publicly funded trials managed to recruit according to their original plan, often resulting in requests for additional funding and/or time extensions. The aim was to identify models which might be useful to a major public funder of randomised controlled trials when estimating likely time requirements for recruiting trial participants. The requirements of a useful model were identified as usability, basis in experience, ability to reflect time trends, accounting for centre recruitment, and contribution to a commissioning decision. Methods A systematic review of English-language articles using MEDLINE and EMBASE. Search terms included: randomised controlled trial, patient, accrual, predict, enrol, models, statistical; Bayes Theorem; Decision Theory; Monte Carlo Method; and Poisson. Only studies discussing prediction of recruitment to trials using a modelling approach were included. Information was extracted from articles by one author, and checked by a second, using a pre-defined form. Results Out of 326 identified abstracts, only 8 met all the inclusion criteria. Among these 8 studies, five major classes of model were discussed: the unconditional model, the conditional model, the Poisson model, Bayesian models, and Monte Carlo simulation of Markov models. None of these meet all the pre-identified needs of the funder. Conclusions To meet the needs of a number of research programmes, a new model is required as a matter of importance. Any model chosen should be validated against both retrospective and prospective data, to ensure the predictions it gives are superior to those currently used.
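Of the model classes listed, the Poisson model is the simplest to illustrate: with C centres each recruiting at rate λ patients per month, pooled recruitment is a Poisson process of rate Cλ, so the time to reach a target is a sum of exponential inter-arrival times. A sketch with invented trial parameters:

```python
import random

def expected_recruitment_time(n_target, n_centres, rate_per_centre):
    """Deterministic expectation under a homogeneous Poisson model:
    pooled rate = centres * rate, E[T] = target / pooled rate (months)."""
    return n_target / (n_centres * rate_per_centre)

def simulate_recruitment_time(n_target, n_centres, rate_per_centre, rng):
    """One Monte Carlo draw: time to n_target arrivals of the pooled
    Poisson process, built from exponential inter-arrival times."""
    pooled = n_centres * rate_per_centre
    return sum(rng.expovariate(pooled) for _ in range(n_target))

rng = random.Random(42)
target, centres, rate = 300, 20, 1.5   # 1.5 patients/centre/month (assumed)
print(f"expected: {expected_recruitment_time(target, centres, rate):.1f} months")
draws = sorted(simulate_recruitment_time(target, centres, rate, rng)
               for _ in range(2000))
print(f"simulated 90th percentile: {draws[int(0.9 * len(draws))]:.1f} months")
```

The conditional and Bayesian classes extend this by updating the rate as accrual data arrive, which is what lets a model reflect time trends.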

  7. Scrub typhus point-of-care testing: A systematic review and meta-analysis.

    Directory of Open Access Journals (Sweden)

    Kartika Saraswati

    2018-03-01

    Diagnosing scrub typhus clinically is difficult, hence laboratory tests play a very important role in diagnosis. As performing sophisticated laboratory tests in resource-limited settings is not feasible, accurate point-of-care testing (POCT) for scrub typhus diagnosis would be invaluable for patient diagnosis and management. Here we summarise the existing evidence on the accuracy of scrub typhus POCTs to inform clinical practitioners in resource-limited settings of their diagnostic value. Studies on POCTs which can be feasibly deployed in primary health care or outpatient settings were included. Thirty-one studies were identified through PubMed and manual searches of reference lists. The quality of the studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. About half (n = 14/31) of the included studies were of moderate quality. Meta-analysis showed the pooled sensitivity and specificity of commercially available immunochromatographic tests (ICTs) were 66.0% (95% CI 0.37–0.86) and 92.0% (95% CI 0.83–0.97), respectively. There was a significant and high degree of heterogeneity between the studies (I² = 97.48%, 95% CI 96.71–98.24, for sensitivity; I² = 98.17%, 95% CI 97.67–98.67, for specificity). Significant heterogeneity was observed for the total number of samples between studies (p = 0.01), study design (whether or not a case-control design was used, p = 0.01), blinding during index test interpretation (p = 0.02), and QUADAS-2 score (p = 0.01). There was significant heterogeneity between the scrub typhus POCT diagnostic accuracy studies examined. Overall, the commercially available scrub typhus ICTs demonstrated better performance when 'ruling in' the diagnosis. There is a need for standardised methods and reporting of diagnostic accuracy to decrease between-study heterogeneity and increase comparability among study results, as well as development of an affordable and accurate antigen-based POCT to tackle the
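The I² heterogeneity statistic quoted above is derived from Cochran's Q, computed from per-study estimates and standard errors. A sketch with invented study values (not data from this review), pooling on the logit scale as is common for sensitivities:

```python
def i_squared(estimates, std_errors):
    """Cochran's Q and the I^2 statistic from per-study estimates and
    standard errors, using inverse-variance fixed-effect weights."""
    w = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Hypothetical logit-sensitivities and standard errors from five studies
logits = [0.4, 1.9, 0.8, 2.4, 0.1]
ses = [0.2, 0.3, 0.25, 0.4, 0.2]
pooled, q, i2 = i_squared(logits, ses)
print(f"pooled logit={pooled:.2f}, Q={q:.1f}, I^2={i2:.1f}%")
```

I² values in the high 90s, as reported here, mean almost all observed variation between studies reflects genuine heterogeneity rather than sampling error, which is why the authors call for standardised methods.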

  8. Health literacy and public health: A systematic review and integration of definitions and models

    LENUS (Irish Health Repository)

    Sorensen, Kristine

    2012-01-25

    Abstract Background Health literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy. Methods A systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model. Results The review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively. Conclusions Based upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings.

  9. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    Science.gov (United States)

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations
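The paper's differential-algebra machinery is beyond a short example, but the underlying notion of an identifiable combination can be shown numerically. In the toy observation model below (invented for illustration, not an MSCE model), only the product a·b and the sum c+d affect the observable output, so distinct parameter sets sharing those combinations are indistinguishable from data:

```python
import math

def output(a, b, c, d, t):
    """Toy observation model y(t) = a*b*exp(-(c+d)*t): only the
    combinations a*b and c+d are identifiable from y alone."""
    return a * b * math.exp(-(c + d) * t)

# Two distinct parameter sets sharing a*b = 6 and c + d = 0.5 ...
theta1 = (2.0, 3.0, 0.1, 0.4)
theta2 = (1.5, 4.0, 0.3, 0.2)
# ... produce identical outputs at every time point, so (a, b, c, d)
# cannot be recovered individually no matter how much data on y we have.
for t in [0.0, 1.0, 5.0, 10.0]:
    assert math.isclose(output(*theta1, t), output(*theta2, t))
print("outputs identical: only a*b and c+d are identifiable")
```

Differential-algebra methods formalise exactly this: they derive, symbolically, which parameter combinations appear in the input-output relation and hence can in principle be estimated.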